Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions into a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mentions becomes a limitation for supervised learning models. The primary reason is the lack of datasets where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose the Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. The HAnDS framework exploits the high interlinkage between Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using the distant supervision approach. Using the HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems.

In the literature, the problem of recognizing a handful of coarse-grained types such as person, location and organization has been extensively studied [BID18]. We term this the Coarse-grained Entity Recognition (CgER) task. For CgER, there exist several datasets, including manually annotated datasets such as CoNLL [BID28] and automatically generated datasets such as WP2 [BID21]. Manually constructing a dataset for the FgER task is an expensive and time-consuming process, as an entity mention could be assigned multiple types from a set of thousands of types. In recent years, one of the subproblems of FgER, the Fine Entity Categorization or Typing (Fine-ET) problem, has received a lot of attention, particularly in expanding its type coverage from a handful of coarse-grained types to thousands of fine-grained types [BID17, BID6]. The primary driver for this rapid expansion is the exploitation of cheap but fairly accurate annotations from Wikipedia and Freebase [BID4] via the distant supervision process [BID7]. The Fine-ET problem assumes that the entity boundaries are provided by an oracle.

We observe that the detection of entity mentions at the granularity of Fine-ET is a bottleneck. Existing FgER systems, such as FIGER [BID12], follow a two-step approach in which the first step is to detect entity mentions and the second step is to categorize the detected entity mentions. For entity detection, it is assumed that all the fine categories are subtypes of the following four categories: person, location, organization and miscellaneous. Thus, a model trained on the CoNLL dataset [BID28], which is annotated with these types, can be used for entity detection. Our analysis indicates that in the context of FgER, this assumption does not hold. At face value, the miscellaneous type should ideally cover all entity types other than person, location, and organization. However, it only covers 68% of the remaining types of the FIGER hierarchy and 42% of the TypeNet hierarchy. Thus, models trained using CoNLL data are highly likely to miss a significant portion of entity mentions relevant to automatic knowledge base construction applications. Our work bridges this gap between entity detection and Fine-ET.
We propose to automatically construct a quality dataset suitable for FgER, i.e., both Fine-ED and Fine-ET, using the proposed HAnDS framework. HAnDS is a three-stage pipelined framework wherein at each stage different heuristics are used to combat the errors introduced by naively using the distant supervision paradigm, including but not limited to the presence of a large number of false negatives. The heuristics are data-driven and use information provided by hyperlinks, alternate names of entities, and orthographic and morphological features of words. Using the HAnDS framework and the two popular type hierarchies available for Fine-ET, the FIGER type hierarchy [BID12] and TypeNet [BID17], we automatically generated two corpora suitable for the FgER task. The first corpus contains around 38 million entity mentions annotated with 118 entity types. The second corpus contains around 46 million entity mentions annotated with 1115 entity types. Our extensive intrinsic and extrinsic evaluation of the generated datasets warrants their quality. Compared with existing automatically generated datasets, supervised learning models trained on our induced training datasets perform significantly better (approximately 20-point improvement in micro-F1 score). Along with the automatically generated datasets, we provide a manually annotated corpus of around a thousand sentences annotated with 117 entity types for benchmarking FgER models. Our contributions are highlighted as follows:

• We show that the existing practice of using models trained on the CoNLL dataset has poor recall for entity detection in the Fine-ET setting, where the type set spans several diverse domains. (Section 3)

• We propose the HAnDS framework, heuristics allied with the distant supervision approach, to automatically construct datasets suitable for the FgER problem, i.e., both fine entity detection and fine entity typing. (Section 4)

• We establish state-of-the-art baselines on our new manually annotated corpus, which covers 2.7 times more finer entity types than the FIGER gold corpus, the current de facto FgER evaluation corpus. (Section 5)

The rest of the paper is organized as follows. We describe the related work in Section 2, followed by a case study on the entity detection problem in the Fine-ET setting in Section 3. Section 4 describes our proposed HAnDS framework, followed by an empirical evaluation of the datasets in Section 5. In Section 6 we conclude our work.

We divide the related work into two parts. First, we describe work related to automatic dataset construction in the context of the entity recognition task, followed by related work on noise reduction techniques in the context of the automatic dataset construction task. In the context of the FgER task, BID12 proposed to use the distant supervision paradigm [BID2] to automatically generate a dataset for the Fine-ET problem, which is a sub-problem of FgER. We term this the Naive Distant Supervision (NDS) approach. In NDS, the linkage between Wikipedia and Freebase is exploited. If there is a hyperlink in a Wikipedia sentence, and that hyperlink is assigned to an entity present in Freebase, then the hyperlinked text is an entity mention whose types are obtained from Freebase. However, this process can only generate positive annotations, i.e., if an entity mention is not hyperlinked, no types will be assigned to that entity mention. Positive-only annotations are suitable for the Fine-ET problem, but they are not suitable for learning entity detection models, as there is a large number of false negatives (Section 3).
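To make the NDS labeling rule above concrete, here is a minimal sketch in Python; the Hyperlink structure and the freebase_types mapping are hypothetical stand-ins for the real Wikipedia and Freebase resources, not the authors' actual code.

```python
from dataclasses import dataclass

@dataclass
class Hyperlink:
    anchor_text: str  # surface form appearing in the sentence
    target: str       # id of the linked Wikipedia/Freebase concept

def nds_annotate(hyperlinks, freebase_types):
    """Label each linked span with its Freebase types (positive-only)."""
    annotations = []
    for link in hyperlinks:
        types = freebase_types.get(link.target)
        if types:  # no hyperlink or no KB entry => no annotation at all
            annotations.append((link.anchor_text, types))
    return annotations
```

Note how an unlinked mention simply receives no annotation, which is exactly the source of the false negatives discussed above.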
This dataset is publicly available as the FIGER dataset, along with a manually annotated evaluation corpus. The NDS approach has also been used to generate datasets for some variants of the Fine-ET problem, such as corpus-level fine-entity typing [BID33] and fine-entity typing utilizing knowledge base embeddings [BID31]. More recently, [BID6] generated an entity typing dataset with a very large type set of size 10k using head words as a source of distant supervision as well as crowdsourcing. In the context of the CgER task, [BID19, BID20, BID21] proposed an approach to create a training dataset for CgER using a combination of a bootstrapping process and heuristics. The bootstrapping was used to classify a Wikipedia article into five categories, namely PER, LOC, ORG, MISC and NON-ENTITY. The bootstrapping requires initial manually annotated seed examples for each type, which limits its scalability to thousands of types. The heuristics were used to infer additional links in un-linked text; however, the proposed heuristics limit the scope of entity and non-entity mentions. For example, one of the heuristics used mostly restricts entity mentions to those with at least one capitalized character. This assumption does not hold in the context of FgER, where entity mentions come from several diverse domains, including the biomedical domain. There are other notable works which combine NDS with heuristics for generating entity recognition training datasets, such as [BID1] and [BID9]. However, their scope is limited to the application of CgER. Our work revisits the idea of automatic corpus construction in the context of FgER. In the HAnDS framework, our main contribution is to design data-driven heuristics which are generic enough to work for over thousands of diverse entity types while maintaining good annotation quality.

An automatic dataset construction process involving heuristics and distant supervision will inevitably introduce noise, and its characteristics depend on the dataset construction task. In the context of the Fine-ET task [BID24, BID10], the dominant noise is false positives, whereas for the relation extraction task both false negative and false positive noise are present [BID25, BID22].

In this section, we systematically analyze existing entity detection systems in the setting of Fine-Entity Typing. Our aim is to answer the following question: How good are entity detection systems when it comes to detecting entity mentions belonging to a large set of diverse types? We performed two analyses. The first analysis concerns the type coverage of entity detection systems, and the second concerns the actual performance of entity detection systems on two manually annotated FgER datasets.

3.1 Is the Fine-ET type set an expansion of the extensively researched coarse-grained types?

For this analysis we manually inspected the most commonly used CgER dataset, CoNLL 2003. We analyzed how many entity types in the two popular Fine-ET hierarchies, FIGER and TypeNet, are actual descendants of the four coarse types present in the CoNLL dataset, namely person, location, organization and miscellaneous. The results are available in FIG0. We can observe that in the FIGER type set, 14% of the types are not descendants of the CoNLL types. This share increases in TypeNet, where 25% of the types are not descendants of CoNLL types.

(Table 1: Performance of entity detection models trained on existing datasets, evaluated on the FIGER and 1k-WFB-g datasets.)
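The 14% and 25% coverage figures above can in principle be reproduced with a simple ancestor walk over the type hierarchy; this sketch assumes the hierarchy is given as a hypothetical child-to-parent mapping rather than the authors' actual data format.

```python
COARSE = {"person", "location", "organization", "miscellaneous"}

def ancestors(t, parent_of):
    """Yield all ancestors of type t in a child -> parent mapping."""
    while t in parent_of:
        t = parent_of[t]
        yield t

def conll_coverage(fine_types, parent_of):
    """Fraction of fine-grained types descending from a CoNLL coarse type."""
    covered = sum(1 for t in fine_types if COARSE & set(ancestors(t, parent_of)))
    return covered / len(fine_types)
```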
These types come from various diverse domains, including biomedical, legal processes and entertainment, and detecting entity mentions of these types is important for knowledge base construction applications. These differences can be attributed to the fact that since 2003, the entity recognition problem has evolved a lot, both in moving towards finer categorization and in capturing entities from diverse domains.

For the second analysis we evaluate two publicly available state-of-the-art entity detection systems, Stanford CoreNLP [BID15] and the NER Tagger system proposed in [BID11]. Along with these, we also train an LSTM-CNN-CRF based sequence labeling model proposed in [BID13] on the FIGER dataset. The learning models were evaluated on the manually annotated FIGER corpus and the 1k-WFB-g corpus, a new in-house developed corpus built specifically for FgER model evaluation. The results are presented in Table 1.

From the results, we can observe that a state-of-the-art sequence labeling model, LSTM-CNN-CRF, trained on a dataset generated using the NDS approach, such as the FIGER dataset, has lower recall compared with precision. On average the recall is 58% lower than precision. This is primarily because the NDS approach generates positive-only annotations and the remaining un-annotated tokens contain a large number of entity mentions. Thus the resulting dataset has a large number of false negatives. On the other hand, learning models trained on the CoNLL dataset (CoreNLP and NER Tagger) have much more balanced precision and recall. This is because, being a manually annotated dataset, it is less likely that any entity mention (according to the annotation guidelines) will remain un-annotated. However, the recall is much lower (16% lower) on the 1k-WFB-g corpus than on the FIGER corpus. This is because, when designing 1k-WFB-g, we ensured that it has sufficient examples covering 117 entity types, whereas the FIGER evaluation corpus has only 42 entity types and 80% of its mentions are from the person, location and organization coarse types. These results also highlight the coverage issue mentioned in Section 3.1. When the evaluation set is balanced, covering a large spectrum of entity types, the performance of models trained on the CoNLL dataset goes down because of the presence of out-of-scope entity types. An ideal entity detection system should be able to work on the traditional types as well as other entities relevant to the FgER problem, i.e., achieve good performance across all types. A statistical comparison of the FIGER and 1k-WFB-g corpora is provided in Table 2.

The use of CoreNLP or learning models trained on the CoNLL dataset is standard practice to detect entity mentions in existing FgER research [BID12]. Our analysis conveys that this practice has its limitations in terms of detecting entities which are out of the scope of the CoNLL dataset. In the next section, we describe our approach for automatically creating a training dataset for the FgER task. The same learning models, when trained on our generated training datasets, have better and more balanced precision and recall.

The objective of the HAnDS framework is to automatically create a corpus of sentences where every entity mention is correctly detected and is characterized into one or more entity types. The scope of entities, i.e., what types of entities should be annotated, is decided by a type hierarchy, which is one of the inputs of the framework. FIG2 gives an overview of the HAnDS framework.
The framework requires three inputs: a linked text corpus, a knowledge base and a type hierarchy.

Linked text corpus: A linked text corpus is a collection of documents where important concepts are sporadically hyperlinked to other documents. For example, Wikipedia is a large-scale multi-lingual linked text corpus. The framework considers the spans of hyperlinked text (or anchor text) as potential candidates for entity mentions.

Knowledge base: A knowledge base (KB) captures concepts, their properties, and inter-concept properties. Freebase, WikiData [BID29] and UMLS [BID3] are examples of popular knowledge bases. A KB usually has a type property where multiple fine-grained semantic types/labels are assigned to each concept.

Type hierarchy: A type hierarchy (T) is a hierarchical organization of various entity types. For example, the entity type city is a descendant of the type geopolitical entity. Various hierarchical organization schemes of fine-grained entity types have been proposed in the literature, including a 200-type scheme proposed in [BID26], a 113-type scheme proposed in [BID12], an 87-type scheme proposed in [BID10] and a 1081-type scheme proposed in [BID17]. In our work, we use two such hierarchies, FIGER and TypeNet, FIGER being the most extensively used hierarchy and TypeNet being the latest and largest entity type hierarchy.

Automatic corpora creation using distantly supervised methods will inevitably contain errors. For example, in the context of FgER, the errors could be in annotating entity boundaries, i.e., entity detection errors, or in assigning an incorrect type, i.e., entity linking errors, or both. The three-stage process in our proposed HAnDS framework tries to reduce these errors.

Stage I: The objective of this stage is to reduce false positive entity mentions, where an incorrect anchor text is detected as an entity mention. To do so, we first categorize all hyperlinks of the document being processed as entity links and non-entity links. Further, every link is assigned a tag of being a referential link or not.

Entity links: These are the subset of links whose anchor text represents candidate entity mentions. If the labels obtained from the KB for a link belong to T, we categorize that link as an entity link. Here, T decides the scope of entities in the generated dataset. For example, if T is the FIGER type hierarchy, then the hyperlink photovoltaic cell is not an entity link, as its labels obtained from Freebase are not present in T. However, if T is the TypeNet hierarchy, then photovoltaic cell is an entity link of type invention.

Non-entity links: These are the subset of links whose anchor text does not represent an entity mention. Since knowledge bases are incomplete, if a link is not categorized as an entity link it does not mean that the link does not represent an entity. We exploit corpus-level context to categorize a link as a non-entity link using the following criteria: across the complete corpus, the link should be mentioned at least 50 times (support threshold) and at least 50% of the time (confidence threshold) with lowercase anchor text. The intuition behind these criteria is that we want to be certain that a link actually represents a non-entity. For example, this heuristic categorizes RBI as a non-entity link, as there is no label present for this link in Freebase. Here RBI refers to the term "run batted in", frequently used in the context of baseball and softball.
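The corpus-level heuristic above reduces to two threshold checks; the following sketch assumes the per-link mention and lowercase-anchor counts have already been aggregated over the whole corpus (the counter names are illustrative, not the authors' actual code).

```python
def is_non_entity_link(link, mention_count, lowercase_count, entity_links,
                       support=50, confidence=0.5):
    """Corpus-level test for non-entity links (support and confidence thresholds)."""
    if link in entity_links:           # already typed by the KB, so an entity link
        return False
    total = mention_count.get(link, 0)
    if total < support:                # not mentioned often enough to decide
        return False
    return lowercase_count.get(link, 0) / total >= confidence
```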
Unlike [BID19], which restricts entity mentions to those containing capitalized words, our data-driven heuristic does not impose any such hard constraint.

Referential links: A link is said to be referential if its anchor text has a direct case-insensitive match with the list of allowed candidate names for the linked concept. A KB can provide such a list. For example, for the entity Bill Gates, the candidate names provided by Freebase include Gates and William Henry Gates. However, in Wikipedia there exist hyperlinks such as Bill and Melinda Gates linking to the Bill Gates page, which is erroneous as the hyperlinked text is not the correct referent of the entity Bill Gates.

After categorization of links, except for referential entity links, we unlink all other links. Unlinking non-referential links such as Bill and Melinda Gates reduces entity detection errors by eliminating false positive entity mentions. The unlinked text span, or a part of it, can be a referential mention for some other entity, as in the above example Bill and Melinda Gates. FIG2 also illustrates this process, where Lahti, Finland gets unlinked after this stage. The next stage tries to re-link the unlinked tokens correctly.

Stage II: The objective of this stage is to reduce false negative entity mentions, where an entity mention is not annotated. This is done by linking the correct referential name of the entity mention to the correct node in the KB. To reduce entity linking errors, we use document-level context by restricting the candidate links (entities or non-entities) to the outgoing links of the current document being processed. For example, in FIG2, while processing an article about the Finnish-American luger Tristan Jeskanen, it is unlikely to observe a mention of a 1903 German novel having the same name, i.e., Tristan. To reduce false negative entity mentions, we construct two trie trees capturing the outgoing links and their candidate referential names for each document. The first trie contains all links, and the second trie only contains links of entities which are predominantly expressed as lowercase phrases (e.g., names of diseases). For each unlinked phrase starting with an uppercase character, we match the longest matching prefix string within the first trie and assign the matching link. In the remaining unlinked phrases, we match the longest matching prefix string within the second trie and assign the matching link. Linking the candidate entities in unlinked phrases reduces entity detection errors by eliminating false negative entity mentions. Unlike [BID19], the two-step string matching process allows for the possibility of a lowercase phrase being an entity mention (e.g., lactic acid, apple juice, bronchoconstriction, etc.) and a word with an uppercase first character being a non-entity (e.g., Jazz, RBI, etc.). FIG2 shows an example of the input and output of this stage. In this stage, the phrases Tristan, Lahti, Finland and Jeskanen get linked.

Stage III: The objective of this stage is to further reduce entity detection errors. This stage is motivated by the incomplete nature of practical knowledge bases. KBs do not capture all entities present in a linked text corpus and do not provide all the referential names for an entity mention. Thus, after Stage II there is still a possibility of both types of entity detection errors, false positives and false negatives. To reduce such errors in the induced corpus, we select sentences where it is most likely that all entity mentions are annotated correctly.
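A simplified sketch of the Stage-II re-linking is given below, assuming the per-document tries have been flattened into a name-to-link dictionary (run once for capitalized spans and once for the predominantly-lowercase entities); a real implementation would match against the tries directly.

```python
def longest_match_link(tokens, name_to_link, max_span=8):
    """Greedily link the longest matching candidate name at each position."""
    i, links = 0, []
    while i < len(tokens):
        for j in range(min(len(tokens), i + max_span), i, -1):  # longest first
            phrase = " ".join(tokens[i:j]).lower()              # case-insensitive
            if phrase in name_to_link:
                links.append((i, j, name_to_link[phrase]))
                i = j
                break
        else:
            i += 1
    return links
```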
The resultant corpus of selected sentences will be our final dataset. To select these sentences, we exploit sentence-level context by using POS tags and a list of frequent sentence-starting words. We only select sentences where all unlinked tokens are most likely to be non-entity mentions. If an unlinked token has capitalized characters, it is likely to be an entity mention. We do not select such sentences, except in the following cases. In the first case, the token is a sentence starter and is either in a list of frequent sentence-starter words (the 150 most frequent words were used) or its POS tag is among the list of permissible tags (tags such as DT, IN, PRP, CC, WDT, etc. that are least likely to be candidates for entity mentions). In the second case, the token is an adjective, or belongs to occupational titles, or is the name of a day or month. FIG2 shows an example of the input and output of this stage. Here only the first sentence of the document is selected, because in the other sentence the name Sami is not linked. The sentence selection stage ensures that the selected sentences have high-quality annotations. We observe that only around 40% of sentences are selected by Stage III in our experimental setup (an analysis of several characteristics of the discarded and retained sentences is available in the supplementary material at: https://github.com/abhipec/HAnDS). Our extrinsic analysis in Section 5.2 shows that this stage helps models to have significantly better recall. In the next section, we describe the datasets generated using the HAnDS framework along with their evaluations.

Using the HAnDS framework we generated two datasets, as described below.

WikiFbF: A dataset generated using Wikipedia, Freebase and the FIGER hierarchy as inputs to the HAnDS framework. This dataset contains around 38 million entity mentions annotated with 118 different types.

WikiFbT: A dataset generated using Wikipedia, Freebase and the TypeNet hierarchy as inputs to the HAnDS framework. This dataset contains around 46 million entity mentions annotated with 1115 different types.

In our experiments, we use the September 2016 Wikipedia dump. Table 2 lists various statistics of these datasets. (Table 2: Statistics of the different datasets generated or used in this work.) In the next subsections, we estimate the quality of the generated datasets, both intrinsically and extrinsically. Our intrinsic evaluation is focused on quantitative analysis, and the extrinsic evaluation is used as a proxy to estimate the precision and recall of annotations.

In the intrinsic evaluation, we perform a quantitative analysis comparing the annotations generated by the HAnDS framework with the NDS approach. The results of this analysis are presented in Table 3. (Table 3: Quantitative analysis of the dataset generated using the HAnDS framework versus the NDS approach. Here H denotes the set of entity mentions in Table 3a and the set of entities in Table 3b generated by the HAnDS framework, and N denotes the set of entity mentions in Table 3a and the set of entities in Table 3b generated by the NDS approach.) We can observe that on the same sentences, the HAnDS framework is able to generate about 1.9 times more entity mention annotations and about 1.6 times more entities for the WikiFbT corpus compared with the NDS approach. Similarly, there are around 1.8 times more entity mentions and about 1.6 times more entities in the WikiFbF corpus.
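The intrinsic comparison above amounts to simple set arithmetic over the two annotation sets; a sketch, where H and N are hypothetical iterables of (span, link) pairs produced on the same sentences:

```python
def compare_annotations(H, N):
    """Compare HAnDS (H) and NDS (N) annotation sets on the same sentences."""
    H, N = set(H), set(N)
    return {
        "gain_over_nds": len(H) / len(N),    # e.g. ~1.8-1.9x more mentions
        "nds_overlap": len(H & N) / len(N),  # ~95% of NDS annotations kept
    }
```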
In Section 5.2.4, we will observe that despite containing around 1.6 to 1.9 times more new annotations, these annotations have very high linking precision. Also, there is a large overlap between the annotations generated using the HAnDS framework and the NDS approach. Over 95% of the entity mention (and entity) annotations generated using the NDS approach are present in the HAnDS-induced corpora. This indicates that the existing links present in Wikipedia are of high quality. The remaining 5% of links were removed by the HAnDS framework as false positive entity mentions.

In the extrinsic evaluation, we evaluate the performance of learning models trained on datasets generated using the HAnDS framework. Due to resource constraints, we perform this evaluation only for the WikiFbF dataset and its variants. Following BID12, we divide the FgER task into two subtasks: Fine-ED, a sequence labeling problem, and Fine-ET, a multi-label classification problem. We use existing state-of-the-art models for the respective subtasks. The FgER model is a simple pipeline combination of a Fine-ED model followed by a Fine-ET model.

Fine-ED model: For the Fine-ED task we use a state-of-the-art sequence-labeling-based LSTM-CNN-CRF model as proposed in [BID13].

Fine-ET model: For the Fine-ET task we use a state-of-the-art LSTM-based model as proposed in [BID0].

Please refer to the respective papers for model details. (Note that there are several other models with competitive or better performance, such as [BID5, BID11, BID13] for the sequence labeling problem and [BID23, BID27, BID0, BID31, BID32] for the multi-label classification problem; our criterion for model selection was simple: an easy-to-use, publicly available, efficient implementation.) The values of the various hyperparameters used in the models, along with the training procedure, are mentioned in the supplementary material available at: https://github.com/abhipec/HAnDS.

The two learning models are trained on the following datasets:

Wiki-FbF: Dataset created by the HAnDS framework.

Wiki-FbF-w/o-III: Dataset created by the HAnDS framework without using Stage III of the pipeline.

Wiki-NDS: Dataset created using the naive distant supervision approach with the same Wikipedia version used in our work.

FIGER: Dataset created using the NDS approach, shared by BID12.

Except for the FIGER dataset, for the other datasets we randomly sampled two million sentences for model training due to computational constraints. However, during model training, as described in the supplementary material, we ensured that every model, irrespective of the dataset, is trained on approximately the same number of examples, to reduce any bias introduced by differences in the number of entity mentions present in each dataset. All extrinsic evaluation experiments subsequently reported in this section are performed on these randomly sampled datasets. Also, the same dataset is used to train the Fine-ED and Fine-ET learning models. This setting is different from BID12, where the entity detection model is trained on the CoNLL dataset; hence, the results reported in their work are not directly comparable.

We evaluated the learning models on the following two datasets:

FIGER: A manually annotated evaluation corpus created by BID12. It contains 563 entity mentions and overall 43 different entity types. The type distribution in this corpus is skewed, as only 11 entity types are mentioned more than 10 times.

1k-WFB-g: A new manually annotated evaluation corpus developed specifically to cover a large type set. It contains 2420 entity mentions and overall 117 different entity types. In this corpus, 84 entity types are mentioned more than 10 times. The sentences for this dataset were sampled from Wikipedia text. The statistics of these datasets are available in Table 2.
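For clarity, the two-step FgER pipeline used in this evaluation can be sketched as below; fine_ed and fine_et are placeholders for the trained LSTM-CNN-CRF detector and the LSTM typing model, respectively.

```python
def fger_pipeline(tokens, fine_ed, fine_et):
    """Detect mention spans, then assign one or more fine-grained types to each."""
    spans = fine_ed(tokens)  # [(start, end), ...] from the sequence labeler
    return [(start, end, fine_et(tokens, (start, end)))  # set of predicted types
            for start, end in spans]
```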
For the Fine-ED task, we evaluated model performance using the precision, recall and F1 metrics as computed by the standard CoNLL evaluation script. For the Fine-ET and FgER tasks, we use the strict, loose-macro-average and loose-micro-average evaluation metrics described in BID12.

(Table 4: Performance of the entity detection models on the FIGER and 1k-WFB-g datasets.)

The results of the entity detection models on the two evaluation datasets are presented in Table 4. From these results we perform two analyses: first, the effect of the training dataset on model performance, and second, a performance comparison across the two manually annotated datasets.

In the first analysis, we observe that the LSTM-CNN-CRF model trained on the Wiki-FbF dataset has the highest F1 score on both evaluation corpora. Moreover, the average difference between precision and recall for this model is the lowest, which indicates balanced performance across both evaluation corpora. When compared with the models trained on the NDS-generated datasets (Wiki-NDS and FIGER), we observe that these models have the best precision across both corpora but the lowest recall. This indicates that a large number of false negative entity mentions are present in the NDS-induced datasets. In the case of the model trained on the Wiki-FbF-w/o-III dataset, the performance lies between that of the models trained on the Wiki-NDS and Wiki-FbF datasets. However, it has significantly lower recall, on average around 28% lower than the model trained on Wiki-FbF. This highlights the role of Stage III: by selecting only quality-annotated sentences, erroneous annotations are removed, resulting in learning models trained on Wiki-FbF having better and more balanced performance.

In the second analysis, we observe that models trained on datasets generated using Wikipedia as the sentence source (the FIGER training corpus, Wiki-FbF, Wiki-NDS and Wiki-FbF-w/o-III) perform better on the 1k-WFB-g evaluation corpus than on the FIGER evaluation corpus. The primary reason for the better performance is that the sentences constituting the 1k-WFB-g dataset were sampled from Wikipedia; thus, this evaluation is a same-domain evaluation. On the other hand, the FIGER evaluation corpus is based on sentences sampled from news and specialized magazines (photography and veterinary domains). It has been observed in the literature that in a cross-domain evaluation setting, learning model performance is reduced compared to same-domain evaluation [BID20]. Moreover, this also conveys that, to some extent, a learning model trained on the large Wikipedia text corpus is able to generalize to an evaluation dataset consisting of sentences from news and specialized magazines. Our analysis in this section, as well as in Section 3.1, indicates that although the type coverage of the FIGER evaluation corpus is low (43 types), it helps to better measure a model's generalizability in a cross-domain evaluation, whereas 1k-WFB-g helps to measure performance across a large spectrum of entity types (117 types). Learning models trained on Wiki-FbF perform best on both of the evaluation corpora.
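For reference, the strict, loose-macro and loose-micro scores from BID12 can be computed as sketched below over per-mention (predicted, gold) type sets; this mirrors the standard definitions rather than the exact evaluation script.

```python
def fine_et_scores(pairs):
    """pairs: list of (predicted_types, gold_types) sets, one per mention."""
    n = len(pairs)
    f1 = lambda p, r: 2 * p * r / (p + r) if p + r else 0.0
    strict = sum(p == g for p, g in pairs) / n              # exact set match
    macro_p = sum(len(p & g) / len(p) for p, g in pairs if p) / n
    macro_r = sum(len(p & g) / len(g) for p, g in pairs if g) / n
    micro_p = sum(len(p & g) for p, g in pairs) / max(1, sum(len(p) for p, _ in pairs))
    micro_r = sum(len(p & g) for p, g in pairs) / max(1, sum(len(g) for _, g in pairs))
    return strict, f1(macro_p, macro_r), f1(micro_p, micro_r)
```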
This warrants the usability of the generated corpus as well as the framework used to generate it. We observe that for the Fine-ET task, there is no significant difference between the performance of learning models trained on the Wiki-NDS dataset and models trained on the Wiki-FbF dataset. The latter model performs approximately 1% better in the micro-F1 metric computed on the 1k-WFB-g corpus. This indicates that Stage II of the HAnDS framework, where false negative entity mentions were reduced by re-linking them to Freebase, has very high linking precision, similar to NDS, which is estimated to be about 97-98% [BID30].

The results for the complete FgER system, i.e., Fine-ED followed by Fine-ET, are available in Table 5. These results support our claim in Section 3.1 that the current bottleneck for the FgER task is Fine-ED, specifically the lack of resources with quality entity boundary annotations covering a large spectrum of entity types. Our work directly addresses this issue. In the FgER task performance measures, the learning model trained on Wiki-FbF has an average absolute performance improvement of at least 18% on all three evaluation metrics.

In this work, we initiate a push towards moving from CgER systems to FgER systems, i.e., from recognizing entities from a handful of types to thousands of types. We propose the HAnDS framework to automatically construct quality training datasets for different variants of the FgER task. The two datasets constructed in our work, along with the evaluation resource, are currently the largest available training and testing datasets for the entity recognition problem. They are backed with empirical experimentation to warrant the quality of the constructed corpora. The datasets generated in our work open up two new research directions related to the entity recognition problem. The first direction is the exploration of sequence labeling approaches in the FgER setting, where each entity mention can have more than one type. The existing state-of-the-art sequence labeling models for the CgER task cannot be directly applied in the FgER setting due to state-space explosion in the multi-label setting. The second direction is towards noise-robust sequence labeling models, where some of the entity boundaries are incorrect. For example, in our induced datasets there are still entity detection errors, which are inevitable in any heuristic approach. Some work in this direction has been explored in [BID8], assuming that it is known a priori which tokens are noisy; this information is not available in our generated datasets. Additionally, the generated datasets are much richer in entity types than any existing entity recognition datasets. For example, the generated datasets contain entities from several domains, such as biomedical, finance, sports, products and entertainment. In several downstream applications where NER is used on text with a writing style different from Wikipedia, the generated datasets are good candidates as source datasets for transfer learning to improve domain-specific performance.
We initiate a push towards building ER systems to recognize thousands of types by providing a method to automatically construct suitable datasets based on the type hierarchy.
Implementing correct method invocations is an important task for software developers. However, this is challenging work, since the structure of a method invocation can be complicated. In this paper, we propose InvocMap, a code completion tool that allows developers to obtain an implementation of multiple method invocations from a list of method names inside a code context. InvocMap is able to predict nested method invocations whose names do not appear in the list of input method names given by developers. To achieve this, we analyze method invocations at four levels of abstraction. We build a machine translation engine to learn the mapping from the first level to the third level of abstraction of multiple method invocations, which only requires developers to manually add local variables to the generated expression to get the final code. We evaluate our proposed approach on six popular libraries: JDK, Android, GWT, Joda-Time, Hibernate, and Xstream. With a training corpus of 2.86 million method invocations extracted from 1000 Java Github projects and a testing corpus extracted from 120 online forum code snippets, InvocMap achieves an accuracy rate of up to 84% in F1-score depending on how much context information is provided along with the method names, which shows its potential for automatic code completion.

Writing code is a challenge for non-experienced software developers. To write code that implements a specific task in a programming language, developers need to remember the syntax of that language and be familiar with how to implement method invocations. While the syntax of the language is easier to learn, since it contains a fixed set of words in the vocabulary, implementing Method Invocations (MIs) is more challenging for the following reasons. First of all, developers need to remember the structure and combination of invocations depending on their purpose. Secondly, the implementation of a method invocation also depends on the surrounding context of the code. Thus, code developed by non-experienced developers may be at risk of semantic errors.

To help developers interact with and analyze a given Java source code snippet, the Java Development Tools (JDT) library defines a list of Abstract Syntax Tree (AST) node types. With this list of AST node types, JDT is able to interact with the structure of each element inside the source code. The MI, which is defined as a subtype of Expression, is one of the fundamental AST nodes that developers need to implement. MIs are used to make Application Programming Interface (API) calls to other libraries or to other methods inside a Java project. The structure of a syntactically correct MI contains a method name, a receiver and a list of arguments, which could be empty. Since receivers and arguments are types of expressions, the structure of an MI can be as complicated as a deep AST. The reason for this is that an expression can be composed of different types of AST nodes, including MIs. An example of a complicated MI is shown in Listing 1. In this listing, the outer MI contains four nested MIs in its implementation. Additionally, there are five positions that require local variables inside the expression. Type casting to integer is embedded in this MI to make it semantically correct. This MI is used along with other calculated MIs inside the method body, providing a specific surrounding context for this MI.
Without doubt, the outer method name, set, is just one word, while the respective MI is a deep AST. The representation of an MI also relies on code context. Consider examples 2A and 2B in Listing 2 and Listing 3. These listings show implementations of the API android.content.Intent.getBooleanExtra. Although the two MIs share the same context information, using the same local variable intent and the false boolean literal, they differ in the structure of their ASTs. The MI in Listing 2 is associated with the action of adding or removing an application package on an Android device, while the MI in Listing 3 is associated with network status checking. The difference in contexts yields two MIs with two different static field accesses: Intent.EXTRA_REPLACING and ConnectivityManager.EXTRA_NO_CONNECTIVITY.

Listing 1: Example in Android (2019a)
    public void setOffsets(int newHorizontalOffset, int newVerticalOffset) {
        ...
        invalidateRectf.offset(-xoffset, -yoffset);
        invalidateRect.set((int) Math.floor(invalidateRectf.left),
                           (int) Math.floor(invalidateRectf.top),
                           (int) Math.ceil(invalidateRectf.right),
                           (int) Math.ceil(invalidateRectf.bottom));
        ...
    }

Listing 2: Example 2A in Android (2019b)
    public void onReceive(Context context, Intent intent) {
        ...
        if ((Intent.ACTION_PACKAGE_REMOVED.equals(action) ||
             Intent.ACTION_PACKAGE_ADDED.equals(action))
            && !intent.getBooleanExtra(Intent.EXTRA_REPLACING, false)) {
            ...

Listing 3: Example 2B in Android (2019c)
    public void onReceive(Context context, Intent intent) {
        ...
        if (activeNetwork == null) {
            ...
        } else if (activeNetwork.getType() == networkType) {
            mNetworkUnmetered = false;
            mNetworkConnected = !intent.getBooleanExtra(
                    ConnectivityManager.EXTRA_NO_CONNECTIVITY, false);
            ...

From the examples above, we recognize that implementing an effective method invocation requires strong knowledge and experience from developers. Even two MIs that belong to the same API and share the same context of local variables and literals can still be ambiguous in their implementation, as in Listing 2 and Listing 3. These challenges hinder the ability to write an appropriate MI, and developers need to spend time remembering or identifying the correct AST structure of MIs during software development. With this work, we want to tackle this problem by providing InvocMap, a code completion tool that helps developers obtain implementations of method invocations efficiently. InvocMap accepts as input a sequence of method names inside the code environment of a method declaration, then produces as output a list of ASTs as translations of the input method names. The generated ASTs only require developers to fill in information about local variables and literals in order to obtain the complete code. For instance, in Listing 2, a developer can write a list of method names including the name getBooleanExtra. The output for the suggestion will be
#.getBooleanExtra(Intent.EXTRA_REPLACING, #), which can be completed manually with a variable of type android.content.Intent in the first "#" and a boolean literal in the second "#".

Statistical Machine Translation (SMT) is a well-known approach in Natural Language Processing (NLP) for translating between languages. To take advantage of SMT, we propose a statistical approach to code completion for method invocations, which learns the translation from the abstract information of MIs to their detailed information, represented as ASTs with complicated structure. First and foremost, we analyze the information inside a typical MI. We divide the MI into four levels of abstraction. We also define the context information for each MI that can help predict the AST structure. Next, we build an SMT engine tailored to our task to infer from the most abstract level of an MI, i.e., its method name, to the third level of the MI, which is an AST that needs to be filled with local variables and literals. To evaluate our approach, we run experiments checking the accuracy of our code completion technique on two data sets collected from Github and from online forums. Resources of this paper are available online. This research makes the following contributions:

2. Designing rules for extracting code tokens representing the abstract level and the detailed level for various types of AST nodes.

3. Proposing an algorithm for visiting a method invocation inside the code environment to abstract and encode its AST structure as an object for statistical learning.

4. Building an SMT system for learning from the context of the code environment, including MIs from large-scale, high-quality Github projects. This SMT system is able to predict sequences of AST structures given sequences of method names and context.

We summarize the engines inside InvocMap in Figure 1. From the perspective of developers, InvocMap provides a plugin inside the Java code editor that allows them to write a single method name or multiple method names inside the code environment. Starting with this input, InvocMap translates each method name to a respective AST. These ASTs reflect the complex structure of method invocations, which might be inconvenient for developers to remember. They are abstracted at level 3 in our definition, meaning that they only require developers to add local variables, local methods or literals to obtain the final code. We discuss MIs at level 3 of abstraction in the next section. The ability to infer ASTs for code completion relies on the Statistical Translation module. The training process is done by the Statistical Learning module, which learns information from data extracted from a large-scale Github code corpus. In general, our statistical approach takes advantage of the knowledge of experienced developers in implementing MIs, representing it in a machine learning model to help non-experienced developers retrieve effective implementations of MIs. Both the source code at the developer's side and the code corpus are analyzed to extract sequences of tokens by the Train AST Visitor and Test AST Visitor modules we developed. Inside these visitors, we handle each AST node type with functions of the Context Extractor and MI Abstractor modules, which we discuss in the next sections.

Definition 1: Level 1 of abstraction of a method invocation is the information about the method name of that method invocation.

Definition 2: Level 2 of abstraction of a method invocation is the information about the type (or signature) of that method invocation.
Definition 3: Level 3 of abstraction of a method invocation is the Abstract Syntax Tree of that method invocation with abstracted placeholders for local variables, local methods and literals.

Definition 4: Level 4 of abstraction of a method invocation is the complete Abstract Syntax Tree of that method invocation.

Along with the 4 levels of abstraction of an MI, we define the local context provided for each MI. An example of the 4 levels is shown in Figure 2(a). In this code snippet, level 1 is the method name println. Level 2 of abstraction brings us information about the type, which is java.io.PrintStream.println. Level 4 is the final source code, which is compilable. Level 3 is the AST in which the places of local entities are abstracted by their type information. In the implementation, we represent this level-3 AST by 4 fields: the code with abstracted places for local entities, the list of types of the arguments required to reach level 4, the list of imported APIs, and the type of the MI. These 4 fields make a unique identification for the expression, which serves as a representative token for the AST. Therefore, when developers receive an AST at level 3 of abstraction, they know which types of local variables are needed to obtain the final code, along with the set of imported APIs. In our work, we focus on the inference from level 1 to level 3 by translation. We use information from the local context, supplied by developers who already remember which variables should appear inside the MI and some words inside the MI, to better retrieve the AST of the implementation. In Figure 2(a), we see 2 local entities: the string literal "index" and the integer variable i. The suggested terms can be "System" and the "+" sign.

Definition 6: Level 1 of abstraction of other AST nodes is the information about the Partial Qualified Name (PQN) of the type of those nodes.

Definition 7: Level 2 of abstraction of other AST nodes is the information about the Fully Qualified Name (FQN) of the type of those nodes.

In the context of this work, we refer to all kinds of AST nodes other than MIs as other AST nodes. For example, consider the API java.io.File: File is the PQN, while java.io.File is the FQN.

Other AST node tokens: We extract information about other AST nodes to provide useful context for MI prediction. In the source language, we extract tokens at level 1 of abstraction for each AST node, and we extract tokens at level 2 of that AST node to put into the target language. The extraction is implemented in the Context Extractor module, which is called inside the Train AST Visitor and Test AST Visitor.

MI tokens: There are two types of information we want to embed for an MI: the mapping between the method name and the AST, and the information related to the local context. For the first type, the source language stores the token at level 1 of abstraction of the MI, while the target language stores the token at level 3 of abstraction of the MI. Besides, information about the local context is stored at level 1 of abstraction in the source and level 2 of abstraction in the target language. A sequence of tokens for the MI in Figure 2(a) is shown in Figure 2.

Listing 4 (abridged):
    ...
    } else {
        result.setImportedAPIs.add(getType(node));
    }
    ...

We obtain information about level 3 of abstraction of an MI with the algorithm proposed in Listings 4 and 5.
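To illustrate the four-field level-3 representation described above, here is a hypothetical Python rendering (the field names are illustrative, not InvocMap's actual classes); making it hashable lets the four fields serve as the unique key for the training dictionary discussed next.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # hashable, so the four fields can key a dictionary
class ASTLevel3:
    code: str             # e.g. '#.getBooleanExtra(Intent.EXTRA_REPLACING, #)'
    arg_types: tuple      # types filling each '#': ('android.content.Intent', 'boolean')
    imported_apis: tuple  # e.g. ('android.content.Intent',)
    mi_type: str          # type of the whole expression, e.g. 'boolean'
```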
The abstractMethodInvocation function is invoked when the Train AST Visitor or Test AST Visitor visits an MI, and it returns the level-3 abstraction as an instance of the AST_Level3 class. This function uses a child class of ASTVisitor called InvocAbstractVisitor, defined in Listing 5 (line #12). This visitor visits each element inside the MI, checking and abstracting each element that is a local entity. The visitor also stores other information: the code of the AST, the list of required types for the local entities, and the set of imported APIs. The handling strategy for each type of AST node inside the MI is implemented in the visitStructure function in Listing 5 (#23). After visiting and abstracting an MI into an AST_Level3, this object is checked against the first four fields defined in Listing 5 (#1-#10) to see if it exists in the dictionary. If yes, it takes the id of the existing object in the dictionary; otherwise, a new unique id is generated and the object is added to the dictionary. The dictionary stores information about the level-3 abstractions of MIs in the training step. An example of an AST_Level3 object is shown in Figure 2(a).

To learn the mapping between the source and target languages, we apply SMT. SMT is built from two models: the language model (LM) and the translation model.

Language Model: The LM is used to predict the next token given a sequence of previous tokens. The more comprehensive the corpus of the target language, the higher the quality of prediction the LM achieves. LMs have been used widely in Software Engineering (SE) research with promising results. The most basic LM in NLP is the unigram LM, which calculates the probability of each word based on the number of appearances of that word in the corpus. This LM has the drawback that it does not take into account the history of how a word was used in a sequence in the training data. Here we use the n-gram language model. Assuming we have m tokens AST_1, ..., AST_m in the target language, the probability provided by the n-gram LM is:

P(AST_1, \ldots, AST_m) = \prod_{i=1}^{m} P(AST_i \mid AST_{i-n+1}, \ldots, AST_{i-1})

Translation Model: This model calculates the probability that a phrase from the source language translates to a phrase in the target language. If D is a candidate translation of a sentence S in the source language, the best candidate D is selected as:

D^* = \arg\max_D P(D \mid S) = \arg\max_D P(S \mid D) \, P(D)

Since we infer from method names to MIs, which are consistent in order, we do not apply a reordering probability in the translation model.

Data Preparation: For the evaluation, we select corpora covering six well-known libraries: Java Development Kit (JDK), Android, GWT, Joda-Time, Hibernate, and XStream. These libraries have also been selected to generate corpora in other research works. To generate our corpus, we select the 1000 highest-starred Java projects from Github whose files most heavily use APIs from the libraries in Table 1a. For each Java project, InvocMap parses each Java source file with the Train AST Visitor module in Figure 1. The number of pairs collected with respect to each method body is shown in Table 1a.

Training and Testing Configuration: To train the SMT model, we use a high-end computer with an Intel Core i7 processor and 32 GB of memory. We apply our solution using Phrasal, with phrase length equal to 7. The total training time is about 6 hours. For testing, we evaluate the ability to translate a sequence of method names to ASTs at level 3 of abstraction.
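A toy rendering of how these two components combine to score a candidate translation is sketched below; the probability tables tm and lm are hypothetical, and Phrasal's actual decoder is far more involved.

```python
import math

def ngram_logprob(tokens, lm, n=3):
    """Sum of n-gram log-probabilities; unseen n-grams get a small floor."""
    lp = 0.0
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - n + 1):i])
        lp += math.log(lm.get((context, tok), 1e-9))
    return lp

def score_candidate(phrase_pairs, target_tokens, tm, lm):
    """log P(S|D) + log P(D): translation model plus language model."""
    tm_lp = sum(math.log(tm.get(pair, 1e-9)) for pair in phrase_pairs)
    return tm_lp + ngram_logprob(target_tokens, lm)
```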
We simulate 3 configurations of method name sequences with respect to their local context, defined in Table 1b. The local context provided with the method names increases from configuration level 1 to level 3. At level 1, the input for translation contains only method names with the code context in the source language. This simulates the case where developers write a list of method names inside the code environment. At level 2, information about the partial class names of the types of local entities is attached to each method name. This simulates the case where developers write the method name and the local variables they remember as part of the MI, but do not remember the AST structure. At level 3, each method name in the source language is additionally attached with information about local entities and half of the words appearing inside the MI. This simulates the case where developers remember some words inside the MI along with the local entities.

Metrics: Information about method name and MI tokens can be recognized by the annotation #identifier in the source, and the expected result can be recognized by the prefix "E-Total" on tokens in the target. We use Precision and Recall as the two metrics for the evaluation. Out of Vocabulary (OOV) is the case where the method name token does not appear in the source corpus (Out of Source, OOS) or the expected level-3 AST does not appear in the target corpus (Out of Target, OOT). We split the pairs of our parallel corpus into training and testing sets, with 10% of the data for testing and the rest for training, and perform ten-fold cross-validation to test the prediction ability on our full data set. In total, there are 2.86 million MIs collected from 1000 Github projects.

The evaluation on the intrinsic data is shown in Table 2. From configuration 1 to configuration 3, the F1 score increases from 73.06% to 84.4%. This seems reasonable, since providing more local context information along with method names improves the translation model's ability to correctly predict the level-3 AST. One observation is that the percentage of Out of Vocabulary expressions is higher, causing a decrease in recall compared to research that applied Machine Translation to infer Fully Qualified Names from incomplete code. This is reasonable, since our work must infer the MI at level 3 of abstraction, which contains detailed structure, whereas that output only infers the type information of the MI.

We study an example from the intrinsic evaluation in Figure 2(b). This example is a function collected from our corpus. The test simulates the case where a developer inputs only println inside the code environment; the output in this case is the implementation of the java.io.PrintWriter.println function. We can see that the surrounding code is useful for inferring the correct expression. Without the context information, i.e., if the developer inputs println in an empty method, the translation returns the most popular MI, System.out.println.

For the extrinsic experiment, we collect data as code snippets from online forums. A software engineer with 5 years of experience in Java programming was hired to collect code snippets from 120 posts in online forums, with 20 posts for each library in Table 1a. The results for the extrinsic evaluation are shown in Table 2.
We see that at level 1, where only method names are provided in the source language, our approach still predicts correctly with 68.5% F1-score. With the configuration levels where developers add more information, the F1-score increases to 84%. Across libraries, we achieved the highest accuracy on GWT and the lowest on Hibernate with detailed input information as in configuration 3. This seems reasonable, since Hibernate is a bigger library than GWT but is not as popular as JDK, which causes a wide variety of ASTs for the APIs in this library.

In this evaluation, we analyze how expression prediction relates to the number of mappings of each method name in the parallel corpus. We use the data collected for the intrinsic evaluation with configuration 3. The results, shown in Table 1c, reveal that method names with more than 100 mappings in the parallel corpus account for about 72% of the total data. This demonstrates the variety of implementations behind each method name. The total precision tends to decrease from 96.47% to 87.68% as the number of mappings grows, which means that the prediction is still acceptable even when method names are highly ambiguous.

Machine learning has been applied widely in Software Engineering applications. Generating code by machine learning is interesting but also confronts challenges. Prior research shows that inferring code from documentation by machine translation achieves very low accuracy with both SMT and Neural Machine Translation (NMT) models learned from practical large-scale code corpora. There are two reasons for this challenge. First, large-scale code corpora contain noisy data. Second, the structure of AST nodes is complicated, making it difficult for a machine translation system to learn the syntactic correctness of generated code. Another line of work proposes an approach to obtain an implementation from a natural language description; however, the output consists of only a sequence of APIs, which corresponds to level 2 of our abstraction for MIs. In our work, we target the inference of MIs at level 3, handling the complex AST structure of MIs. There are several other inputs used to obtain complete code in other research: deriving C# code from Java code by machine translation, or generating code from natural language descriptions. These works consider the textual description as the full information for the inference. We consider our code generation problem from a different angle: we take advantage of the surrounding context along with the textual description of the method name. Other work proposes a graph-based code completion tool that suggests a full code snippet while developers are writing incomplete code, focusing on completing the code from a part of the code. In contrast, we propose inference from the skeleton of method invocations, in the form of a sequence of method names, to the implementation of those method invocations.

In this work, we proposed InvocMap, an SMT engine for inferring the ASTs of method invocations from a list of method names and the code context. Through evaluation on corpora collected from Github projects and online forums, we demonstrated the potential of our approach for automatic code completion. A major advantage of InvocMap is that it is built on the idea of abstracting method invocations at four different levels. We provided an algorithm for obtaining the ASTs of method invocations for MI inference.
As future work, we will extend the SMT model to support inputs from natural language descriptions of multiple method invocations, and investigate machine learning techniques to improve accuracy.
This paper proposes a theory for classifying method invocations by different abstraction levels and a statistical approach for code completion from method names to method invocations.
1,401
scitldr
Adversaries in neural networks have drawn much attention since their first debut. While most existing methods aim at deceiving image classification models into misclassification or crafting attacks for specific object instances in object detection tasks, we focus on creating universal adversaries to fool object detectors and hide objects from the detectors. The adversaries we examine are universal in three ways: They are not specific to particular object instances; They are image-independent; They can further transfer to different unknown models. To achieve this, we propose two novel techniques to improve the transferability of the adversaries: \textit{piling-up} and \textit{monochromatization}. Both techniques prove to simplify the patterns of generated adversaries, and ultimately result in higher transferability. Despite the success of machine learning and deep learning models, recently it has been shown that these models are susceptible and sensitive to what is termed adversarial examples, a.k.a. adversaries BID32 BID10. Adversaries are usually derived from ordinary data and retain the same semantic content, but can result in wrong predictions. Previous studies have shown that adversarial examples can be crafted efficiently and successfully under some conditions, which poses significant security threats BID14. Formally speaking, given a model y = F(x), input X and original or ground-truth output Y = F(X), adversaries are modified versions of the original data, denoted as X + ∆X, such that F(X + ∆X) ≠ Y. Generally, ∆X is constrained by its norm value (e.g. L∞) or other metrics to preserve the original semantic meaning of input X. Existing studies on adversarial examples focus on designing effective and efficient methods to craft ∆X, e.g. L-BFGS BID32, FGSM BID10, and iterative methods BID13; defense methods including defensive distillation BID24, random transformation BID35, JPEG-compression, etc.; and how to improve the transferability of attacks crafted on one model to deceive another model, both for differently initialized and trained models and for models of different architectures BID19 BID23 BID33 BID34. Up till now, these efforts have mainly focused on image classification models. More recent work has studied the robustness of object detectors and tried to fool these models BID21 BID3 BID6 BID16 a; BID28. However, most of these works only attack specific object instances. Few proposed methods have attempted to attack multiple objects and images or to verify the capacity to transfer to another model. In this work, we aim to craft universal and transferable adversaries to fool object detectors and conceal objects. As far as we know, we are the first to carry out such large-scale attacks on object detectors. Our target is three-fold: The adversary should work for different objects, regardless of their types, positions, sizes, etc. The adversary is not limited to one image only, i.e. it achieves image-independence. The adversary should be able to attack detectors that it was not crafted on, i.e. it achieves black-box attack. Specifically, we craft an adversarial mask of the same size as the input image, denoted as ∆X ∈ R^(H_image×W_image×3), and impose a norm-value constraint, ||∆X||∞ ≤ ε. Such an adversarial mask is in fact similar to what the community has used to fool image classification models. However, optimizing over it is a non-trivial task. A full-sized mask would introduce a total of 0.5M parameters, putting our method at risk of overfitting.
Further, using the concept of Effective Receptive Field BID22, we found that gradients obtained through back-propagation are spatially sparse, making optimization difficult. To achieve our objective, we propose to use the following techniques: Optimizing ∆X over a set of images; Using identical small patches that are piled up to form the full-sized mask ∆X; Crafting monochromatic masks instead of colorful ones as done in previous work. Our motivation is that piling up identical small patches in a grid can incorporate translation invariance in a similar way to Convolutional Neural Networks (CNNs), which is also connected with the intuition that any part of the mask should perform equally well in attacking an object at any position. Constraining the adversarial mask to be monochrome further forces the mask to learn coarse-grained patterns that may be universal. In experiments, we compare with decent baseline methods and find that our methods consistently surpass them. While our adversarial mask can conceal as many as 80% of objects from YOLO V3 BID25, on which it is crafted, it can also hide more than 40% of objects from the eyes of Faster-RCNN BID27 in a black-box setting. Further, we compare the patterns generated by different methods and carry out detailed analysis. We found that our techniques did help in crafting more coarse-grained patterns. These patterns have a generic appearance, which we attribute as the key to good transferability. In summary, we make the following contributions in this work: We successfully craft universal adversarial masks that can fool object detectors and are independent at the object level, image level, and model level. We show that, with the proposed techniques, we can learn and generate masks that have generic and coarse-grained patterns. The patterns we generate differ from those in previous works by a large margin, which may be the key to better transferability. Norm-Ball Attack. BID29 first demonstrated how deep learning models can be fooled by images, denoted as X ∈ R^(H×W×3), that are mixed with imperceptible perturbations, denoted as ∆X ∈ R^(H×W×3). Later, various methods for crafting such perturbations were proposed BID32 BID10 BID19 BID13 BID0 BID2 BID33 BID5. A major common characteristic of these methods is that the crafted perturbations satisfy the following constraint: ||∆X||∞ ≤ ε, where ε measures how much the images are perturbed. These efforts mainly focus on image classification models; few shed light on object detectors. We also refer readers to comprehensive surveys for a more detailed introduction BID11 BID7 BID14. In real-world applications, attackers usually have no knowledge about the target models, including their architecture, hyper-parameters, and learned parameters. Such a situation is termed black-box attack. Transferability between different models is thus a proxy for black-box methods, and several methods have been proposed. Ensemble attack BID33 is based on the assumption that if an adversary can fool a set of N models, it is more likely to generalize well and fool an (N+1)-th one. BID34 analyze the cosine similarity between gradients obtained from different models and propose to smooth the loss landscape BID31 to improve the generalization capacity across models. Specifically, they optimize over a set of data points sampled from the norm-ball around the target image. Another similar work BID1 demonstrates how to generate image-independent adversaries for image classification.
They optimize an adversarial patch that has no norm-value constraint but can only modify a small region of the target images. By optimizing over a set of images, the trained patch can transfer to new images successfully. Attack on Object Detectors. Methods to attack object detectors can be categorized into two classes: stickers that are glued onto target objects to interfere with classification, or onto the background as counterfeit objects BID3 BID6; and perturbation masks that are aligned to and trained for one specific object BID21 or one image only BID15 BID1. In a nutshell, these methods are specific to designated object instances, which means that to successfully fool detectors, one needs to craft adversaries and attack the target objects one by one. BID20 is the first to explore the possibility of transferability; however, the success rate is not very promising. Recently, BID28 demonstrated the effects of feature interference, where randomly transplanted generic objects prove to have non-local adversarial effects, distorting detection even far from the original transplantation position. More concretely, features attained from areas that do not belong to the object of interest have an impact on the detector's behavior. This holds true both for pixels inside the region-of-interest (ROI) of the object and for those outside of it. The wide-ranging existence of such a phenomenon is proof that object detectors are fragile and sensitive. Note that the probing approaches used in BID28 are not practical attack strategies, as the authors' method is a type of random search and the results are random. Such a method may also be dependent on the architecture of the target models, as we implemented this method on YOLO V3 but did not observe similar results. Extending from BID28, we use a learned mask to probe how to hide an object by modifying its surroundings in a systematic way. Object detection aims to localize objects of interest and recognize their categories. There are mainly two branches: region-proposal based methods, including RCNN by BID9, Fast-RCNN by BID8 and Faster-RCNN by BID27, and unified methods including SSD by BID18 and YOLO by BID26. In our research, we experiment with YOLOv3 as it runs the fastest and also performs at a state-of-the-art level. We run experiments to see how well the adversaries crafted on YOLOv3, a representative of unified methods, can transfer to Faster-RCNN, which is a representative region-proposal based method. We briefly introduce the core concepts of YOLOv3 and Faster-RCNN. YOLO V3 performs two functions: spotting the existence of objects of interest, i.e. those in a pre-defined list, and classifying spotted objects into the correct categories. Input images are first fed into the backbone network, producing a sequence of H × W × C feature maps. Each 1 × 1 × C vector represents the potential object at the corresponding position. Classifiers, which are 1 × 1 convolutional layers in practice, predict the existence of objects, their types, and their positions. Non-Maximal Suppression (NMS) is performed to deliver the final results. YOLO V3 has a set of 3 classifiers, each deployed at different layers and aimed at objects of different sizes. In total, there are N_P = 10647 such prediction points, also termed anchors. In essence, YOLO V3 can be viewed as a multi-head image classifier.
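Since YOLO V3 is treated here as a multi-head classifier over N_P = 10647 anchors, the quantity attacked in the next section is the per-anchor objectness probability. The sketch below shows, under our own naming assumptions, how the top-scoring anchors used by the hard-positive-mining objective could be selected; it is an illustration, not the authors' code.

import torch

# A sketch (names are ours): `objectness` holds the sigmoid objectness
# probabilities of all N_P = 10647 anchors for one image, concatenated
# across YOLO V3's three classifier heads.
def ohpm_loss(objectness: torch.Tensor, n: int = 200) -> torch.Tensor:
    """Average log-likelihood of the n highest-scored anchors (to be minimized)."""
    top_scores, _ = torch.topk(objectness.flatten(), k=n)
    return torch.log(top_scores.clamp_min(1e-12)).mean()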
Faster-RCNN incorporates a Region-Proposal Network (RPN) to make detection proposals, which are bounding boxes indicating the existence of objects. Sub-regions are cropped from a shared feature map to perform classification. However, the detection and localization of objects in Faster-RCNN is solely dependent on the RPN, which works in the same way as YOLO V3. In this section, we introduce how to obtain adversarial masks ∆X ∈ R^(H×W×3), and further introduce the two techniques we propose to generate adversarial masks that transfer better to other settings. Note that the attack is performed on YOLO V3, and therefore H = W = 416. The simplest way is to follow the tradition of adversaries for image classification, model the mask as a 416 × 416 × 3 parameter, and optimize over some metric. We denote it as ∆X_full ∈ R^(416×416×3). To conceal objects, we minimize the objectness score produced by the model. We set the minimization target as the average log-likelihood of the top-200 highest-scoring anchors in YOLO V3. This is an adaptation of Online Hard Example Mining (OHEM) BID30 to balance the number of different categories. In our case, such Online Hard Positive Mining (OHPM) can avoid the overwhelming effect of the large number of negative anchors. Optimization is done over a set of training images in the manner of BID1. Data augmentation is used to improve the robustness of the trained mask. Specifically, we minimize the following target: min_∆X E_(x,A) [ (1/N) Σ_(i=1..N) log p_top-i( Clip_[0,1](A(x) + ∆X) ) ], where the x's are images sampled from the training set, A is a randomly composed data augmentation scheme (rotation, translation, scaling), p_top-i is the probability value of the i-th highest-scored anchor, N = 200, Clip is a per-pixel clipping to ensure the attacked image is still in the valid range, and other symbols are as defined above. In practice, the norm-value constraint is enforced by applying an element-wise tanh function to the parametrized mask and then multiplying it by a designated distortion rate ε, as proposed in BID2. Training is continued until performance on a held-out test set no longer improves. As there are no other baselines, the basic full-mask setting will in practice serve as a baseline for the two newly proposed techniques. The baseline parametrization results in a total of 519K parameters. Although it allows for fine-grained patterns and thus stronger capacity, such exploitation of details may make it difficult to transfer to other models BID13. Besides, an ideal adversarial mask should be translation-invariant, as it should be able to attack objects at any position. To explicitly encode such intuitions, we propose a pile-up configuration to obtain adversarial masks. As shown in FIG0, we parametrize a much smaller mask, denoted as m_pile ∈ R^(416r × 416r × 3), where r ∈ (0, 1] measures the size of the mask. To obtain a full-sized mask, we duplicate these small masks and pile them up in a grid-aligned way. Specifically, we stack them into a (1/r) × (1/r) grid. We denote the aforementioned pile-up process as a function y = pile(x). In the pile-up configuration, the adversarial mask is obtained by ∆X = pile(m_pile), with the same tanh-based norm-value constraint as above. During training, the mask is applied to input images by addition followed by clipping: x* = Clip_[0,1]{x + ∆X}. Other training details are the same as in Section 4.1. Gradients are averaged over the grid cells. The motivation for a monochromatic adversarial mask is two-fold. On the one hand, such adversaries require far fewer parameters.
On the other hand, monochromatic patterns are less conspicuous and may blend better into objects, which may show potential for physical-world application. [Figure caption: Green bounding boxes represent objects that are detected both before and after attack; red ones stand for those detected before the attack but concealed after the attack.] A monochromatic mask can further be interpreted as a change of brightness. We implement monochromatic adversarial masks by setting the values of the three color channels to be the same. Training is the same as described above. Note that technique 1 and technique 2 can be combined. In that case, there are only 10816 parameters, i.e. only 2.1% of the baseline full-mask setting. Later we show that such constraints result in simplified and even stylish adversarial patterns. We design experiments to answer the following questions: (Q1) How effective are our trained adversarial masks on YOLO V3 and Faster-RCNN respectively? How successful is the transfer to Faster-RCNN? (Q2) How do the techniques we employ help in generating effective attacks? Empirically, we show that: All three methods achieve decent performance on the task of hiding objects from detectors. The two proposed techniques improve transferability significantly. Adversaries generated with the two proposed techniques demonstrate repetitive and coarse-grained patterns, which seem more robust than finer ones. Samples of detection results are shown in FIG1. All experiments are based on off-the-shelf PyTorch implementations of YOLO V3 and Faster-RCNN. Models are pretrained on the COCO Detection dataset BID17, with an mAP value of 33.0 for YOLO V3 and 37.0 for Faster-RCNN on the test set. For the pile-up configuration, we set r = 0.25. We randomly sample images from the validation set of the COCO Detection dataset as training and test sets for our methods, 512 for each. The adversaries are constructed by applying mini-batch SGD with a batch size of 16, a learning rate of 1e+2, and momentum of 5e−1, until convergence. As we aim at hiding objects from detectors, we propose to use the average number of detected objects per image as our main evaluation metric. Previous work on interfering with object detectors at a large scale BID28 uses a more detailed evaluation method, taking into account cases like misclassification of detected objects. We argue that our metric is suitable for our task, as it directly measures our main goal of making objects disappear. To make comparisons easier, we use a derived metric in practice: we compute the ratio between the attacked image and the clean image, which we term the Detection Rate, measuring the proportion of objects that are still detected under attack. Here, the lower the measure, the better. For a more comprehensive comparison, we train the adversaries with different values of ε and plot a curve to characterize the dynamics. Samples at different levels of distortion are shown in Appendix B. Overall performance evaluations are shown in FIG2. Further, we perform experiments in the black-box setting to truly evaluate the effectiveness of our methods: we transfer the adversaries trained on YOLO V3 to Faster-RCNN and compute the detection rate for each sample. Similarly, we evaluate over different levels of distortion. Results are shown in FIG2 (Right). The second observation is that, while the full-mask attack has many more parameters than the other two methods, it achieves a slightly lower success rate at concealing objects.
This may seem unreasonable at first glance, as more parameters mean stronger capacity. However, as we show in the next section, this may be due to the fact that the full-mask attack is much harder to train. We also notice that the colorful pile-up setting performs better than the other two methods by a significant margin. This can be explained by the fact that it has more parameters but not too many, therefore containing enough capacity while still being easy to train. From FIG2, one basic observation is that, even with mild distortion (ε ≤ 16/255), the best-performing method can still conceal 40% of the objects. The proposed methods perform better than the white-noise baseline, demonstrating some inherent potential for transfer. The most important observation is that the two proposed techniques are significantly better than the full-mask baseline. At the same time, the monochromatic result is better than the pile-up configuration. The comparison of the results supports our arguments that pile-up and monochromatization are both effective techniques for improving transferability, while monochromatization can further push the envelope. Specifically, the pile-up setting can reduce the detection rate by a large margin over the full-mask baseline, ranging from 5% to 30%, depending on the value of ε. Monochromatization can further reduce it by 5%. We also notice that the detection rate bounces back for the pile-up setting when ε ≥ 24/255. We manually checked the attacked images and detection results, and found that the repetitive circular patterns in the trained mask are sometimes mistaken for round objects, e.g. apples and oranges, resulting in a higher detection rate. We consider this a defect in the evaluation metric we use. We manually checked all the attacked images again and found that this phenomenon only occurs with the colorful pile-up masks when ε ≥ 24/255. In this section, we visually evaluate how the two techniques play their roles in improving transferability. In particular, we discuss how pile-up helps significantly improve transferability, as shown in the experiments. Further, we study a strong method for comparison to provide further insight into adversaries for object detectors. To observe what impact the two proposed techniques actually have, we visualize the trained adversaries in FIG4 for full-mask, pile-up, and monochromatization respectively. We notice that when trained with our techniques, the generated adversaries differ greatly from naively trained ones. Specifically, when we use the pile-up configuration, the adversaries are repetitive, as expected by design, and smoother. We zoom in to compare the pixel-level landscape and find that while the full-mask consists of tiny color lumps that are nearly as complex as white noise, the pile-up mask is much smoother, containing fewer fine-grained details. When we monochromatize the mask, it becomes highly repetitive and stylish. The zoomed-in view shows that the patterns are even more coarse-grained than with pile-up alone. We attribute the success in improving transferability to such highly simplified patterns. In this section, we try to give a theoretical explanation as to why the less-parametrized pile-up configuration can perform better even in the white-box setting. It would not be surprising if it performed worse on Faster-RCNN, which could be attributed to overfitting due to the large number of parameters.
Here we introduce our hypothesis that the full-mask adversary is harder to train, and that the reason may lie in the sparsity of gradients. For crafting adversaries, most existing methods, including ours, are based on gradients propagated from the last layer, and thus the quality of the gradients is important. We found that for each image, the gradients propagated from the classifier layer only cover a small region of the adversarial mask. The large majority of the parameters in the full-mask setting receive no gradients at all. Under such conditions, using adaptive training methods, e.g. Adam, would result in erroneous estimates of recent update magnitudes; using momentum-based methods would result in over-updating; and using vanilla SGD would only update a small fraction of the mask. There is a related concept named Effective Receptive Field (ERF) BID22, which essentially computes the gradient of a certain neuron over the input image. In fact, the objectness score is the activation value of that classifier neuron. Therefore, we can compute the ERF for each anchor to analyze how the full-mask attack is updated. Some examples are shown in Appendix A. We notice that the gradients basically only cover the object region (not even the bounding box!). Significant variance exists for each pixel across different samples. As objects usually take up small fractions of an image, the updates of such a large mask may thus be inefficient and difficult. Different from the full-mask setting, piled-up small and identical patches can, in turn, gather the gradients, making the updates more efficient and accurate. We assume this is the main reason why the pile-up configuration beats the full-mask setting by a large margin. As far as we know, there are no other methods that perform tasks similar to our target setting. Therefore, we use white noise at the same distortion level as the baseline for our three methods. We also contend that our main contribution rests in proposing and exploring two techniques to improve transferability, and that therefore the full-mask method itself is a strong and appropriate baseline, such that improvements over it provide appropriate experimental analysis. However, we also provide experimentation below with a method adapted from adversarial approaches for image classification, to establish another benchmark for universal attacks on object detectors. Following the Adversarial Patch approach in BID1, we train a patch to conceal neighboring objects. One may argue that placing the patch directly onto objects to conceal them could also serve as a baseline. While there is existing work that attacks specific objects by applying stickers, such stickers are specially designed for each object instance and still change the object a lot. One could argue that altering a scene with such large distortion does not make sense in the real world, as it is too conspicuous, or that one could simply cover the object with a cloth. Therefore, we consider it interesting to design an object that can conceal neighboring objects contactlessly. Basically, we follow the original training method BID1. We found that the patch is indeed able to conceal other objects contactlessly. The training settings and more detailed results are given in Appendix C. We notice that the success rate depends on the distance to the objects. For objects that are close enough, the success rate can be as high as 50%. An interesting observation is that the trained patch contains circular patterns similar to those in the pile-up and monochromatization settings.
Overall, it is a well-performing baseline, but it is essentially another type of attack; we include it here to provide a better understanding of the task of concealing objects. This paper provides effective methods to fool object detectors. We admit one major limitation: object detectors themselves are not robust enough yet. Current image classification models can attain a top-1 accuracy higher than 80%, with an even higher top-5 accuracy, surpassing the human level. Therefore, the wide-ranging existence of adversaries is intriguing: how are these intricate models fooled? In contrast, the performance of object detectors is still far from the human level. Though the experiments presented here show that our methods can beat baselines directly adapted from methods for attacking image classification models, the mechanism behind errors in object detectors still remains unknown. We randomly picked 100 samples to analyze how gradients are propagated. Among the samples we analyzed, we randomly pick 4 as representatives and show them in Fig. 5. For each image sample, we randomly select one detected object, compute the gradients of its objectness score, and visualize the gradients propagated to the adversarial mask. We manually checked all the samples and found that only the object area obtains gradients that are not negligible; we specify what we mean by negligible as follows. For the object area, i.e. pixels belonging to the object we study, the average gradient norm has a magnitude from 1e1 to 1e2. For areas outside the object, the magnitude drops to 1e−6. Although some current optimization methods suggest that different parameters can have gradients at different levels, e.g. Adam BID12, this is not the case for us. In our case, parameters are not receiving gradients of different magnitudes; rather, parameters receive gradients of unstable magnitudes, while intuitively there should be some level of symmetry or repetition in the pattern of the mask. We argue that such unstable gradient flow may make the mask hard to train, finally resulting in lower performance despite a larger potential capacity. For better illustration of how much images are distorted, we randomly select some samples and show them in FIG5. (Zoom in for a better view.) APPENDIX C: ADVERSARIAL PATCH THAT HIDES NEIGHBORING OBJECTS. C.1 TRAINING. We parametrize the artificial object as a tensor p ∈ R^(h×w×3) in a round shape, and train the patch with gradient descent on a training set of images. Standard data augmentations are performed to improve robustness, including scaling and rotation. However, we need to adapt the training details for better suitability. Specifically, the artificial object is randomly placed around objects, but not overlapping with them. For each image, we first randomly select one object, around which we place the artificial object. Then we rescale the artificial object to a proper size: r = max(min(0.25, w_object, h_object), 0.1) × U(0.9, 1.1), where U(a, b) is a uniform random variable used as a scaling factor for data augmentation, w_object and h_object ∈ (0, 1) are the dimensions of the selected object as proportions of the input image size, and r is the ratio of the proper size to the image size. This size ratio makes sure that the artificial object is neither too big nor too small. Then we randomly rotate the object, in a range from −π/8 to π/8. The last step is to pick a point at which to place the artificial object.
We enlarge the bounding box of the selected object in place by 10%, and uniformly sample one point from the periphery. We place the artificial object such that its center is located at the sampled point. We denote the aforementioned transformation as the function A. For training, we minimize the expected log probability of objectness over all predictions of YOLOv3, over transformations and images in the training set: min_p E_(x,A) [ (1/N) Σ_(i=1..N) log p_i ], where p_i denotes the probability of being an object for prediction point i in YOLOv3, and N denotes the number of prediction points in the model (N = 10647 in the case of YOLOv3). In practice, most prediction points (around 99.9%) in YOLOv3 are negative, which would obfuscate the positives that are in the minority and would be disastrous. We instead optimize over the top 12 positive prediction points. We explore the effect of different sizes and distances to target objects. We measure the size of the object as the proportion of its diameter to the side length of the input image. The distance is computed as the absolute distance between the center of the trained adversarial object and the center of the bounding box of the detected object, divided by the length of the bounding box's diagonal. For better visualization, we take the logarithm of the distance. [Figure 5: Left: detection results for the original image; red bounding box: the target object from which we back-propagate gradients; right: the gradient flowing from the object bounded by the red bounding box. Pixel values are normalized using the min-max rule.] [Figure 10: Overall performance of adversarial patches of different sizes and distances from target objects.] We use two baselines: black-hole, which replaces all pixels in a sub-region with 0, and white-noise, which replaces all pixels in a sub-region with random Gaussian noise. We evaluate the performance using the held-out test set. Quantitative results are shown in FIG0. One important result is that, although the baseline methods similarly change the image significantly by replacing a sub-region completely, they can barely affect the detection results. We also include some samples from the test set in FIG0. The trained object can indeed conceal other objects in a contactless way. When we look at the effect of size, we notice that the trained object, at a reasonable size (0.28), can conceal more than 50% of other objects when simply placed around them. We did not consider patches of size larger than 0.30, as we consider them impractical in a real-world setting. Distance also plays an important role. As we randomly place the object during training, with the actual size ranging from 0.10 to 0.25, we notice that more than half of neighboring objects can be concealed. As the distance to the target objects grows, the success rate drops. [Figure 11: Demonstration of a successful attack. Top: original image and its detection results; bottom: attacked image and its detection results.]
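To summarize the method in code, the following sketch combines the pile-up and monochromatization parametrizations from Section 4 with the tanh-based norm constraint; the module and parameter names are ours, and this is a minimal illustration under those assumptions, not the authors' implementation.

import torch

# A minimal sketch of the pile-up + monochromatization mask (names are ours).
# A small single-channel patch is tiled into a full-sized mask, squashed by
# tanh, and scaled by the distortion budget eps so that ||delta||_inf <= eps.
class PiledMask(torch.nn.Module):
    def __init__(self, image_size: int = 416, r: float = 0.25, eps: float = 16 / 255):
        super().__init__()
        patch = int(image_size * r)
        self.m_pile = torch.nn.Parameter(torch.zeros(1, 1, patch, patch))
        self.grid = int(1 / r)  # number of repetitions per spatial dimension
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Tile the patch into a (1/r) x (1/r) grid; identical copies share
        # gradients, and the single channel is broadcast to all 3 color
        # channels (monochromatization).
        mask = self.m_pile.repeat(1, 3, self.grid, self.grid)
        delta = self.eps * torch.tanh(mask)
        return torch.clamp(x + delta, 0.0, 1.0)

Training would then minimize the average log-objectness of the top-N anchors of the detector applied to the masked images, as described in Section 4.1.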
We focus on creating universal adversaries to fool object detectors and hide objects from the detectors.
1,402
scitldr
This work presents the Poincaré Wasserstein Autoencoder, a reformulation of the recently proposed Wasserstein autoencoder framework on a non-Euclidean manifold, the Poincaré ball model of the hyperbolic space H^n. By assuming the latent space to be hyperbolic, we can use its intrinsic hierarchy to impose structure on the learned latent space representations. We show that for datasets with latent hierarchies, we can recover the structure in a low-dimensional latent space. We also demonstrate the model in the visual domain to analyze some of its properties and show competitive results on a graph link prediction task. Variational Autoencoders (VAE) (17; 28) are an established class of unsupervised machine learning models, which make use of amortized approximate inference to parametrize the otherwise intractable posterior distribution. They provide an elegant, theoretically sound generative model used in various data domains. Typically, the latent variables are assumed to follow a standard Gaussian prior, a formulation which allows for a closed-form evidence lower bound and is easy to sample from. However, this constraint on the generative process can be limiting. Real-world datasets often possess a notion of structure, such as object hierarchies within images or implicit graphs. This notion is often reflected in the interdependence of latent generative factors or the multimodality of the latent code distribution. The standard VAE posterior parametrizes a unimodal distribution which does not allow structural assumptions. Attempts at resolving this limitation have been made by either "upgrading" the posterior to be more expressive or imposing structure via various structured priors. Furthermore, the explicit treatment of the latent space as a Riemannian manifold has been considered. For instance, it has been shown that the standard VAE framework fails to model data with a latent spherical structure, and a hyperspherical latent space has been proposed to alleviate this problem. Similarly, we believe that for datasets with a latent tree-like structure, using a hyperbolic latent space, which imbues the latent codes with a notion of hierarchy, is beneficial. There has recently been a number of works which explicitly make use of properties of non-Euclidean geometry in order to perform machine learning tasks. The use of hyperbolic spaces in particular has been shown to yield improved results on datasets which either present a hierarchical tree-like structure, such as word ontologies, or feature some form of partial ordering. However, most of these approaches have solely considered deterministic hyperbolic embeddings. In this work, we propose the Poincaré Wasserstein Autoencoder (PWA), a Wasserstein autoencoder model which parametrizes a Gaussian distribution in the Poincaré ball model of the hyperbolic space H^n. By treating the latent space as a Riemannian manifold with constant negative curvature, we can use the norm ranking property of hyperbolic spaces to impose a notion of hierarchy on the latent space representation, which is better suited for applications where the dataset is hypothesized to possess a latent hierarchy. We demonstrate this aspect on a synthetic dataset and evaluate it using a distortion measure for Euclidean and hyperbolic spaces. We derive a closed-form definition of a Gaussian distribution in hyperbolic space H^n and sampling procedures for the prior and posterior distributions, which are matched using the Maximum Mean Discrepancy (MMD) objective.
We also compare the PWA to the Euclidean VAE visually on an MNIST digit generation task, as well as quantitatively on a semi-supervised link prediction task. The rest of this paper is structured as follows: we review related work in Section 2, and give an overview of the mathematical tools required to work with Riemannian manifolds, as well as define the notion of probability distributions on Riemannian manifolds, in Section 3. Section 4 describes the model architecture as well as the intuition behind the Wasserstein autoencoder approach; furthermore, we derive a method to obtain samples from the prior and posterior distributions in order to estimate the PWA objective. We present the performed experiments and discuss the observed results in Section 5, and give a summary of our findings in Section 6. Amortized variational inference. There have been a number of extensions to the original VAE framework. These extensions address various problematic aspects of the original model. The first type aims at improving the approximation of the posterior by selecting a richer family of distributions. Some prominent examples include the Normalizing Flow model as well as its derivatives. A second direction aims at imposing structure on the latent space by selecting structured priors, such as the mixture prior or learned autoregressive priors, or by imposing informational constraints on the objective. The use of discrete latent variables has been explored in a number of works. The approach conceptually most similar to ours, but with a hyperspherical latent space and a von Mises variational distribution, has been presented previously. Hyperbolic geometry. The idea of graph generation in hyperbolic space and the analysis of complex network properties has been studied before. Both the Poincaré model and the Lorentz model of the hyperbolic space have recently been used to develop word ontology embeddings which carry hierarchical information encoded by the embedding norm. The general idea of treating the latent space as a Riemannian manifold has been explored as well, and a model for Bayesian inference on Riemannian manifolds relying on particle approximations has been proposed. Finally, the natural gradient method is a prime example of using the underlying information geometry imposed by the Fisher information metric to enhance learning performance. Three concurrent works have explored an idea similar to ours. The first proposes to train a VAE with a hyperbolic latent space using the traditional evidence lower bound (ELBO) formulation; it approximates the ELBO using MCMC samples, as opposed to our approach, which uses a Wasserstein formulation of the problem. The second proposes to use a wrapped Gaussian distribution to obtain samples on the Lorentz model of hyperbolic latent space; the samples are generated in Euclidean space using classical methods and then projected onto the manifold via a concatenation of a parallel transport and the exponential map at the mean. The third also proposes a similar approach but uses an adversarial autoencoder model instead. In this section, we briefly outline some of the concepts from differential geometry which are necessary to formally define our model. A Riemannian manifold is defined as the tuple (M, g), where for every point x belonging to the manifold M a tangent space T_xM is defined, which corresponds to a first-order local linear approximation of M at point x. The Riemannian metric g is a collection of inner products ⟨·,·⟩_x : T_xM × T_xM → R on the tangent spaces T_xM.
We denote by α(t) ∈ M smooth curves on the manifold. By computing the speed vector α̇(t) at every point of the curve, the Riemannian metric allows the computation of the curve length L(α) = ∫_a^b ||α̇(t)|| dt. Given a smooth curve α: [a, b] → M, the distance between its endpoints is defined by the infimum over all such curves: d(x, y) = inf_α L(α). The smooth curves of shortest distance between two points on a manifold are called geodesics. Given a point x ∈ M, the exponential map exp_x(v): T_xM → M gives a way to map a vector v in the tangent space T_xM at point x to the corresponding point on the manifold M. For the Poincaré ball model of the hyperbolic space, which is geodesically complete, this map is well defined on the whole tangent space T_xM. The logarithm map log_x(v) is the inverse mapping from the manifold to the tangent space. The parallel transport P_(x0→x): T_(x0)M → T_xM defines a linear isometry between two tangent spaces of the manifold and allows tangent vectors to be moved along geodesics. Hyperbolic spaces are one of the three existing types of isotropic spaces: the Euclidean spaces with zero curvature, the spherical spaces with constant positive curvature, and the hyperbolic spaces which feature constant negative curvature. The Poincaré ball is one of the five isometric models of the hyperbolic space. The model is defined by the tuple (B^n, g^H), where B^n is the open ball of radius 1 (this can be generalized to an arbitrary radius for curvature c; throughout this paper, we assume the Poincaré ball radius to be c = 1 and omit it from the notation), g^H is the hyperbolic metric, and g^E = I_n is the Riemannian metric on the flat Euclidean manifold. The geodesic distance on the Poincaré ball is given by d(x, y) = cosh^(-1)( 1 + 2 ||x − y||^2 / ((1 − ||x||^2)(1 − ||y||^2)) ). In order to perform arithmetic operations on the Poincaré ball model, we rely on the concept of gyrovector spaces, a generalization of Euclidean vector spaces to models of hyperbolic space based on Möbius transformations. First proposed in earlier work, they have recently been used to describe typical neural network operations in the Poincaré ball model of hyperbolic space. In order to perform the reparametrization in hyperbolic space, we use the gyrovector addition and a Hadamard product defined as a diagonal matrix-gyrovector multiplication. Furthermore, we make use of the exponential map exp_µ and logarithm map log_µ operators in order to map points onto the manifold and perform the inverse mapping back to the tangent space. The Gaussian decoder network is symmetric to the encoder network. The Gaussian distribution is a common choice of prior for VAE-style models. Similarly to the VAE, we can select a generalization of the Gaussian distribution in hyperbolic space as the prior for our model. In particular, we choose the maximum entropy generalization of the Gaussian distribution on the Poincaré ball model. The Gaussian probability density function in hyperbolic space is defined via the Fréchet mean µ and dispersion parameter σ > 0, analogously to the density in Euclidean space: p(x | µ, σ) = (1/Z(σ)) exp(−d(x, µ)^2 / (2σ^2)). The main difference compared to Euclidean space is the use of the geodesic distance d(x, µ) in the exponent and a different, dispersion-dependent normalization constant Z(σ) which accounts for the underlying geometry. In order to compute the normalization constant, we use hyperbolic polar coordinates, where r = d(x, µ) is the geodesic distance between x and µ. This allows the decomposition of Z(σ) into radial and angular components. We derive the closed form of the normalization constant in Appendix A.
For a two-dimensional space, the normalization constant has a closed form in terms of the error function (the full expression is given in Appendix A). Dispersion representation. The closed form of the hyperbolic Gaussian distribution is only defined for a scalar dispersion value. This can limit the expressivity of the learned representations. However, the variational family which is implicitly given by the hyperbolic reparametrization allows for vectorial or even full covariance matrix representations, which can be more expressive. Since the maximum mean discrepancy can be estimated via samples, we do not require a closed-form definition of the posterior density, as is the case when training with the evidence lower bound. This allows the model to learn richer latent space representations. Our model mimics the general architecture of a variational autoencoder. The encoder parametrizes the posterior variational distribution q_φ(z|x) and the decoder parametrizes the unit-variance Gaussian likelihood p_θ(x|z). In order to accommodate the change in the underlying geometry of the latent space, we introduce the maps into hyperbolic space and back to the tangent space. Both the encoder and decoder networks consist of three fully-connected layers with ReLU activations. We use the recently proposed hyperbolic feedforward layer for the encoding of the variational family parameters (µ^H, σ). For the decoder f_θ(x|z), we use the logarithm map at the origin log_0(z) to map the posterior sample z back into the tangent space. Mean and variance parametrization. In order to obtain posterior samples in hyperbolic space, the parametrization of the mean uses a hyperbolic feedforward layer (W, b^H) as the last layer of the encoder network. The weight matrix parameters are Euclidean and are subject to standard Euclidean optimization procedures (we use Adam), while the bias parameters are hyperbolic, requiring the use of Riemannian stochastic gradient descent (RSGD). The outputs h of the underlying Euclidean network are projected using the exponential map at the origin and transformed by the hyperbolic feedforward layer map h ↦ ϕ_h(W ⊗ exp_0(h) ⊕ b^H), where ϕ_h is the hyperbolic nonlinearity. The reparametrization trick is a common method to make the sampling operation differentiable by using a differentiable function g(ε, θ) to obtain a reparametrization gradient for backpropagation through the stochastic layer of the network. For the location-scale family of distributions, the reparametrization function g(ε, θ) can be written as z = µ + σ ⊙ ε in Euclidean space, where ε ∼ N(0, I). We adapt the reparametrization trick for the Gaussian distribution in hyperbolic space by using the framework of gyrovector operators. We obtain the posterior samples for the parametrized mean µ^H(x) and dispersion σ(x) using the following relation: z = µ^H ⊕ exp_0(σ ⊙ log_0(ε)), where ε ∼ N_H(0, I). We can motivate the reparametrization with the help of Fig. 1, which depicts it graphically. In a first step, we sample ε from the hyperbolic standard prior N_H(0, I) using a rejection sampling procedure described in Algorithm 1. The samples are projected to the tangent space using the logarithm map log_0 at the origin, where they are scaled using the dispersion parameter. The scaled samples are then projected back to the manifold using the exponential map exp_0 and translated using µ. We choose the hyperbolic standard prior N_H(0, I) as the prior p(z).
In order to generate samples from the standard prior, we use an approach based on the volume ratio of spheres in H^d to obtain quasi-uniform samples on the Poincaré disk, and subsequently use a rejection sampling procedure to obtain radius samples. We use the quasi-uniform distribution as the proposal distribution for the radius. Using the decomposition into radial and angular components, we can sample a direction from the unit sphere uniformly and simply scale it by the sampled radius to obtain the samples from the prior. An alternative choice of prior is the wrapped Gaussian distribution. Evidence Lower Bound. The variational autoencoder relies on the evidence lower bound (ELBO) reformulation in order to perform tractable optimization of the Kullback-Leibler divergence (KLD) between the true and approximate posteriors. In the Euclidean VAE formulation, the KLD integral has a closed-form expression, which simplifies the optimization procedure considerably. The definition of the evidence lower bound can be extended to non-Euclidean spaces by integrating with respect to the volume element dvol_(g^H) of the manifold, induced by the Riemannian metric g^H. Substituting the hyperbolic Gaussian into the ELBO, we obtain the following expression: E_(q_φ)[log q_φ(z|x)] = −log Z(σ) − E_(q_φ)[d(z, µ)^2 / (2σ^2)]. Due to the nonlinearity of the geodesic distance in the exponent, we cannot derive a closed-form solution of the expectation term E_(q_φ(z))[log q_φ(z)]. One possibility is to use a Taylor expansion of the first two moments of the expectation of the squared logarithm. This is, however, problematic from a numerical standpoint due to the small convergence radius of the Taylor expansion. The ELBO can also be approximated using Monte-Carlo samples, as is done in concurrent work. We consider this approach suboptimal due to the large variance associated with one-sample MC approximations of the integral. Wasserstein metric. In order to circumvent the high variance associated with the MC approximation, we propose to use a Wasserstein Autoencoder (WAE) formulation of the variational inference problem. The authors of the WAE framework propose to solve the optimal transport problem for matching distributions in the latent space instead of the more difficult problem of matching the data distribution p(x) to the distribution p_y generated by the model, as is done in the generative adversarial network (GAN) literature. Kantorovich's formulation of the optimal transport problem is given by W_c(p_x, p_y) = inf_(Γ ∈ P(x∼p_x, y∼p_y)) E_((x,y)∼Γ)[c(x, y)], where c(x, y) is the cost function and P(x ∼ p_x, y ∼ p_y) is the set of joint distributions of the variables x ∼ p_x and y ∼ p_y. Solving this problem requires a search over all possible couplings Γ of the two distributions, which is very difficult from an optimization perspective. This issue is circumvented in a WAE model as follows. The generative model of a variational autoencoder is defined by two steps. First, we sample a latent variable z from the latent space distribution p(z). In a second step, we map it to the output space using a deterministic parametric decoder f_θ(x|z). The resulting density is given by p_θ(x) = ∫ p_θ(x|z) p(z) dvol(z). Under this model, the optimal transport cost takes a simpler form due to the fact that the transportation plan factors through the map f_θ: the optimization procedure is over the encoders q_φ(z|x) instead of over the couplings between p_x and p_y. The WAE objective is derived from the optimal transport cost by relaxing the constraint on the posterior q.
The constraint is relaxed by using a Lagrange multiplier and an appropriate divergence measure. The Maximum Mean Discrepancy (MMD) metric with an appropriate positive-definite RKHS kernel is an example of such a divergence measure. MMD is known to perform well when matching high-dimensional standard normal distributions. MMD is a metric on the space of probability distributions under the condition that the selected RKHS kernel is characteristic. Geodesic kernels are generally not positive definite; however, it has been shown that the Laplacian kernel k(x, y) = exp(−λ d_H(x, y)) is positive definite if the metric of the underlying space is conditionally negative definite. In particular, this holds for hyperbolic spaces. In practice, there is a high probability that a geodesic RBF kernel is also positive definite, depending on the dataset topology. We choose the Laplacian kernel as it also features heavier tails than the Gaussian RBF kernel, which has a positive effect on outlier gradients. The MMD loss function is defined over two probability measures p and q via the RKHS unit ball F as follows: D_MMD(p, q) = sup_(f ∈ F, ||f|| ≤ 1) | E_p[f(x)] − E_q[f(z)] |. There exists an unbiased estimator for D_MMD(p, q_φ): a finite-sample estimate can be computed based on minibatch samples from the prior z ∼ p(z), obtained via the rejection sampling procedure described in Algorithm 1. Parameter updates. The hyperbolic geometry of the latent space requires us to perform Riemannian stochastic gradient descent (RSGD) updates for a subset of the model parameters, specifically the bias parameters of µ. We perform full exponential map updates using gyrovector arithmetic for the gradients with respect to the hyperbolic parameters, instead of using a retraction approximation. In order to avoid numerical problems at the origin and far away from the origin of the Poincaré ball, we perturb the operands if the norm is close to 0 or 1, respectively. The Euclidean parameters are updated in parallel using the Adam optimization procedure. To determine the capability of the model to retrieve an underlying hierarchy, we set up two experiments in which we measure the average distortion of the respective latent space embeddings. We measure the distortion between the input and latent spaces using the following distortion metric, where the subscript U denotes distances in the input space and V distances in the latent space: D = (1 / (n(n−1))) Σ_(i≠j) |d_V(i, j) − d_U(i, j)| / d_U(i, j). Noisy trees. The first dataset is a set of synthetically generated noisy binary trees. The vertices of the main tree are generated from a normal distribution where the mean of a child node corresponds to the parent sample, x_i ∼ N(x_(p(i)), σ_i), where p(i) denotes the index of the parent node. In addition to the main tree, we add noisy samples around the tree vertices. To encourage a good embedding in hyperbolic space, we enforce the norms of the tree vertices to grow monotonically with the depth of the tree by rejecting samples whose norms are smaller than the norms of the parent vertices. We trained our model on 100 generated trees for 100 epochs. The tree vertex variance was set to σ_i = 1 and the noise variance to σ_j = 0.1. We also normalized the generated vertices to zero mean and unit variance. Table 1 compares the distortion values of the test set latent space embeddings obtained using the Euclidean VAE model and the PWA model. We can see that the PWA model shows less distortion when embedding trees into a latent space of dimension d = 2, which confirms our hypothesis that a hyperbolic latent space is better suited to data with latent hierarchies.
As a reference, we provide the distortion scores obtained by the classical t-SNE dimensionality reduction technique. In the next experiment, we apply our model to the task of generating MNIST digits in order to gain intuition about the properties of the latent hyperbolic geometry. In particular, we are interested in the visual distribution of the latent codes in the Poincaré disk latent space. While the MNIST latent space is not inherently hierarchically structured - there is no obvious norm ranking that can be imposed - we can use it to compare our model to the Euclidean VAE approach. We train the models on dynamically binarized MNIST digits and evaluate the generated samples qualitatively, as well as quantitatively via the reconstruction error scores. We observe in Appendix B that the samples deteriorate in quality as the dimensionality increases, despite the lower reconstruction error. This can be explained by the issue of mismatch between the selected latent space dimensionality d_z and the intrinsic latent space dimensionality d_I documented in prior work, and can be alleviated by an additional p-norm penalty on the variance. We have not observed a significant improvement from applying the L2 penalty at higher dimensions. We have also performed an experiment using a two-dimensional latent space. We observe that the structure imposed by the Poincaré disk pushes the samples towards the outside of the disk. This observation can be explained by the fact that hyperbolic spaces grow exponentially. In order to generate quality samples using the prior, some overlap with the approximate posterior is required in the latent space. The issue is somewhat alleviated in higher dimensions, as the distribution shifts towards the ball surface. In the final experiment, we aim to explore the advantages of using a hyperbolic latent space on the task of predicting links in a graph. We train our model on three different citation network datasets: Cora, Citeseer and Pubmed. We use the Variational Graph Auto-Encoder (VGAE) framework and train the model in an unsupervised fashion using a subset of the links. The performance is measured in terms of average precision (AP) and area under curve (AUC) on a test set of links that were masked during training. Table 1 shows a comparison to the baseline with a Euclidean latent space (N-VGAE), showing improvements on the Cora and Citeseer datasets. We also compare our results to those obtained using a hyperspherical autoencoder (S-VGAE). It should be noted that we used a smaller dimensionality for the hyperbolic latent space (16 vs. 64 and 32 for the Euclidean and hyperspherical cases respectively), which could be attributed to the fact that a dataset with a hierarchical latent manifold requires latent space embeddings of smaller dimensionality to efficiently encode the information (analogously to earlier findings). We can observe that the PWA outperforms the Euclidean VAE on two of the three datasets. The hyperspherical graph autoencoder (S-VGAE) outperforms our model. One hypothesis explaining this is that the structure of the citation networks tends towards a positive rather than a negative curvature. It is also worth noting that it is not entirely transparent whether the use of Graph Convolutional Networks, which present a very simple local approximation of the convolution operator on graphs, preserves the curvature of the input data.
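For concreteness, the average-distortion measure used in both experiments of this section can be sketched as follows; the pairwise-relative form is our reading of the metric, and the function and variable names are hypothetical.

import numpy as np

# A sketch of the average-distortion measure, assuming precomputed pairwise
# distance matrices D_U (input space) and D_V (latent space); names are ours.
def average_distortion(D_U: np.ndarray, D_V: np.ndarray) -> float:
    n = D_U.shape[0]
    iu = np.triu_indices(n, k=1)               # unordered pairs i < j
    rel = np.abs(D_V[iu] - D_U[iu]) / D_U[iu]  # relative deviation per pair
    return float(rel.mean())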
We have presented an algorithm to perform amortized variational inference on the Poincaré ball model of the hyperbolic space. The underlying geometry of the hyperbolic space allows for improved performance on tasks which exhibit a partially hierarchical structure. We have discovered certain issues related to the use of the MMD metric in hyperbolic space. Future work will aim to circumvent these issues as well as extend the current results. In particular, we hope to demonstrate the capabilities of our model on more tasks hypothesized to have a latent hyperbolic manifold, and to explore this technique in mixed-curvature settings. A PRIOR REJECTION SAMPLING. Algorithm 1: Rejection sampling from the hyperbolic standard prior N_H(0, 1). Result: n samples from the prior p(z). While i < n do: sample φ̃ ∼ N(0, I_d); compute the direction on the unit sphere, φ = φ̃ / ||φ̃||; sample u ∼ U; get uniform radius samples r_i ∈ [0, r_max] via the ratio of hyperspheres. The acceptance step involves erfc, the complementary error function. B GYROVECTOR OPERATIONS. In this list of gyrovector operations, and throughout this paper, we assume the Poincaré ball radius to be c = 1 and omit it from the notation. Gyrovector addition: x ⊕ y = ((1 + 2⟨x, y⟩ + ||y||^2) x + (1 − ||x||^2) y) / (1 + 2⟨x, y⟩ + ||x||^2 ||y||^2). Matrix-gyrovector product: M ⊗ x = tanh( (||Mx|| / ||x||) tanh^(-1)(||x||) ) Mx / ||Mx||.
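The gyrovector operations listed above, together with the hyperbolic reparametrization of Section 4, can be sketched in a few lines; this is an illustrative implementation under our own naming, not the released code.

import torch

def mobius_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Gyrovector (Mobius) addition on the Poincare ball, c = 1.
    xy = (x * y).sum(-1, keepdim=True)
    xx = (x * x).sum(-1, keepdim=True)
    yy = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * xy + yy) * x + (1 - xx) * y
    return num / (1 + 2 * xy + xx * yy).clamp_min(1e-15)

def exp0(v: torch.Tensor) -> torch.Tensor:
    # Exponential map at the origin: tangent space -> Poincare ball.
    n = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return torch.tanh(n) * v / n

def log0(y: torch.Tensor) -> torch.Tensor:
    # Logarithm map at the origin: Poincare ball -> tangent space.
    # Clamping mirrors the numerical perturbation near the ball boundary
    # mentioned in the parameter-update paragraph.
    n = y.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return torch.atanh(n.clamp_max(1 - 1e-6)) * y / n

A posterior sample would then be obtained as mobius_add(mu, exp0(sigma * log0(eps))), with eps drawn from the hyperbolic standard prior via the rejection sampling procedure of Algorithm 1.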
Wasserstein Autoencoder with hyperbolic latent space
1,403
scitldr
In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest finite field, GF(2), followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN. Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.

Recent achievements of deep neural networks (DNNs) make them an attractive choice in many computer vision applications including image classification BID6 and object detection BID9. The memory and computations required for DNNs can be excessive for low-power deployments. In this paper, we explore the task of minimizing the memory footprint of DNN feature maps during inference and, more specifically, finding a network architecture that uses minimal storage without introducing a considerable amount of additional computations or on-the-fly heuristic encoding-decoding schemes. In general, the task of feature map compression is tightly connected to input sparsity. The degree of input sparsity determines several different usage scenarios, which may lead to a substantial decrease in memory requirements and overall inference complexity. First, pen sketches are spatially sparse and can be processed efficiently by recently introduced submanifold sparse CNNs BID4. Second, surveillance cameras with mostly static input contain temporal sparsity that can be addressed by Sigma-Delta networks BID15. A more general scenario presumes a dense input, e.g. video frames from a high-resolution camera mounted on a moving autonomous car. In this work, we address the latter scenario and concentrate on feature map compression in order to minimize the memory footprint and bandwidth during DNN inference, which might be prohibitive for high-resolution cameras. We propose a method to convert intermediate fixed-point feature map activations into vectors over the smallest finite field, called the Galois field of two elements (GF(2)) or, simply, binary vectors, followed by compression convolutional layers using a nonlinear dimensionality reduction (NDR) technique embedded into the DNN architecture. The compressed feature maps can then be projected back to a higher-cardinality representation over a fixed-point (integer) field using decompression convolutional layers. Using a layer fusion method, only the compressed feature maps need to be kept for inference while adding only computationally inexpensive bitwise operations. Compression and decompression layers over GF(2) can be repeated within the proposed network architecture and trained in an end-to-end fashion. In brief, the proposed method resembles autoencoder-type BID7 structures embedded into a base network that operate over GF(2). Binary conversion and compression-decompression layers are implemented in the Caffe BID12 framework and publicly available 1.

The rest of the paper is organized as follows. Section 2 reviews related work.
Section 3 gives notation for convolutional layers, describes conventional fusion and NDR methods, and explains the proposed method, including details about network training and the derived architecture. Section 4 presents experimental results on ImageNet classification and PASCAL VOC object detection using SSD BID13, memory requirements, and obtained compression rates.

Feature Map Compression using Quantization. Unlike weight compression, surprisingly few papers consider feature map compression. This can most likely be explained by the fact that feature maps have to be compressed for every network input, as opposed to offline weight compression. Previous feature map compression methods are primarily developed around the idea of representation approximation using a certain quantization scheme: fixed-point quantization BID2 BID5, binary quantization BID10 BID17 BID20 BID19, and power-of-two quantization BID14. The base floating-point network is converted to the approximate quantized representation and, then, the quantized network is retrained to restore accuracy. Such methods are inherently limited in finding more compact representations since the base architecture remains unchanged. For example, the dynamic fixed-point scheme typically requires around 8 bits of resolution to achieve baseline accuracy for state-of-the-art network architectures. At the same time, binary networks experience significant accuracy drops on large-scale datasets or for compact (not over-parametrized) network architectures. Instead, our method can be considered, in a narrow sense, as a learned quantization using a binary representation.

Embedded NDR Layers. Another interesting approach is implicitly proposed by BID11. Although the authors emphasized weight compression rather than feature map compression, they introduced NDR-type layers into the network architecture that made it possible to decrease not only the number of weights but also feature map sizes by a factor of 8, if one keeps only the outputs of so-called "squeeze" layers. The latter is possible because such a network architecture does not introduce any additional convolution recomputations, since "squeeze" layer computations with a 1×1 kernel can be fused with the preceding "expand" layers. Our work goes beyond this approach by extending NDR-type layers to work over GF(2) to find a more compact feature map representation.

Hardware Accelerator Architectures. BID8 estimated that off-chip DRAM access requires approximately 100× more power than local on-chip cache access. Therefore, recently proposed DNN accelerator architectures employ various schemes to decrease memory footprint and bandwidth. One obvious solution is to keep only a subset of intermediate feature maps at the expense of recomputing convolutions BID0. The presented fusion approach seems oversimplified but is effective due to the high memory access cost. Our approach is complementary to this work but proposes to keep only compressed feature maps with minimal additional computations. Another recent work BID16 exploits weight and feature map sparsity using a more efficient encoding for zeros. While this approach targets similar goals, it requires high sparsity, which is often unavailable in the first and largest feature maps. In addition, special control and encoding-decoding logic decrease the benefits of this approach. In our work, compressed feature maps are stored in a dense form without the need for special control and encoding-decoding logic.
The input feature map of the l-th convolutional layer in commonly used DNNs can be represented by a tensor X^{l−1} ∈ R^{Ć×H×W}, where Ć, H and W are the number of input channels, the height and the width, respectively. The input X^{l−1} is convolved with a weight tensor W^l ∈ R^{C×Ć×H_f×W_f}, where C is the number of output channels, and H_f and W_f are the height and the width of the filter kernel, respectively. A bias vector b ∈ R^C is added to the result of the convolution operation. Once all C channels are computed, an element-wise nonlinear function is applied to the result of the convolution operations. Then, the c-th channel of the output tensor X^l ∈ R^{C×H×W} can be computed as

X^l_c = g(X^{l−1} * W^l_c + b_c),  (1)

where * denotes the convolution operation and g is some nonlinear function. In this paper, we assume g is the most commonly used rectified linear unit (ReLU), defined as g(x) = max(0, x), such that all activations are non-negative.

We formally describe the previously proposed methods briefly reviewed in Section 2 using the unified model illustrated in FIG0. To simplify notation, biases are not shown. Consider a network built using multiple convolutional layers and processed according to (1). Similar to BID0, the calculation of N sequential layers can be fused together without storing the intermediate feature maps X^l, ..., X^{l+N−1}. For example, fusion can be done in a channel-wise fashion using memory buffers which are much smaller than the whole feature map. Then, the feature map X^l over the reals can be quantized into X̂^l over Q using a nonlinear quantization function q, where Q is a finite field over the integers. The quantization step may introduce a drop in accuracy due to imperfect approximation. The network can be further fine-tuned to restore some of the original accuracy BID2 BID5. The network architecture is not changed by quantization, and feature maps can be compressed only up to a certain suboptimal bitwidth resolution.

The next step, implicitly introduced by Iandola et al., is to perform NDR using an additional convolutional layer. A mapping X̂^l ∈ Q^{C×H×W} → Ŷ^l ∈ Q^{C̃×H×W} can be performed using projection weights P^l ∈ R^{C̃×C×H_f×W_f}, where the output channel dimension C̃ < C. Then, only the compressed feature map Ŷ^l needs to be stored in the memory buffer. During the inverse steps, the compressed feature map can be projected back onto the higher-dimensional tensor X̂^{l+1} ∈ Q^{C×H×W} using weights R^l ∈ R^{C×C̃×H_f×W_f} and, lastly, converted back to X^{l+1} ∈ R using an inverse quantization function q^{−1}. In the case of a fully quantized network, the inverse quantization can be omitted.

In practice, the number of bits for the feature map quantization step depends on the dataset, network architecture and desired accuracy. For example, an over-parameterized architecture like AlexNet may require only 1 or 2 bits for small-scale datasets (CIFAR-10, MNIST, SVHN), but experiences significant accuracy drops on large-scale datasets like ImageNet. In particular, the modified AlexNet (with the first and last layers kept in full precision) top-1 accuracy is degraded by 12.4% and 6.8% for the 1-bit XNOR-Net BID17 and the 2-bit DoReFa-Net BID20, respectively. At the same time, efficient network architectures, e.g. BID11, using NDR layers require 6-8 bits for the fixed-point quantization scheme on ImageNet and fail to work with lower-precision activations. In this paper, we follow the path of selecting an efficient base network architecture and then introducing additional compression layers to obtain smaller feature maps, as opposed to initially selecting an over-parametrized network architecture for quantization.
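As an illustration of the quantization step q and its inverse q^{−1}, consider the following NumPy sketch of a generic uniform fixed-point scheme (our own choice of formula and clipping range; the paper does not prescribe this exact scheme):

import numpy as np

def quantize(x, bits=8, x_max=6.0):
    # Uniform fixed-point quantization of non-negative (post-ReLU) activations.
    levels = 2 ** bits - 1
    scale = levels / x_max
    return np.clip(np.round(x * scale), 0, levels).astype(np.int64)

def dequantize(x_hat, bits=8, x_max=6.0):
    # Inverse mapping q^{-1} back to the reals.
    levels = 2 ** bits - 1
    return x_hat.astype(np.float64) * (x_max / levels)

x = np.maximum(0.0, np.random.randn(4, 8, 8))  # a small post-ReLU feature map
x_hat = quantize(x)                            # integers representable in GF(2^8)
print(np.abs(x - dequantize(x_hat)).max())     # bounded quantization error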
Consider a scalar x from X^l ∈ R. A conventional feature map quantization step can be represented as a scalar-to-scalar mapping, i.e. a nonlinear function

x̂ = q(x), x̂ ∈ Q,  (4)

where x̂ is the quantized scalar, Q is the GF(2^B) finite field for fixed-point representation and B is the number of bits. We can introduce a new x̂ representation by a linear binarization function b defined by

x̃ = b(x̂) = x̂ ⊗ b,  (5)

where ⊗ is a bitwise AND operation applied against the vector of bit masks b = [2^0, 2^1, ..., 2^{B−1}]^T, with the nonzero entries of the result interpreted as ones, so that x̃ ∈ B^B is the vector of binary digits of x̂. An inverse linear function b^{−1} can be written as

x̂ = b^{−1}(x̃) = b^T x̃.  (6)

Equations (4)-(6) show that a scalar over a higher-cardinality finite field can be linearly converted to and from a vector over a finite field with two elements. Based on these derivations, we propose the feature map compression method shown in Figure 2. Similar to BID2, we quantize activations to obtain X̂^l and, then, apply transformation (5). The resulting feature map can be represented as X̃^l ∈ B^{B×C×H×W}. For implementation convenience, the new bit dimension can be concatenated along the channel dimension, resulting in the feature map X̃^l ∈ B^{BC×H×W}. Next, a single convolutional layer using weights P^l, or a sequence of layers with weights P^l_i, can be applied to obtain a compressed representation over GF(2). Using the fusion technique, only the compressed feature maps Ỹ^l ∈ B^{C̃×H×W} need to be stored in memory during inference. Non-compressed feature maps can be processed using small buffers, e.g. in a sequential channel-wise fashion. Lastly, the inverse function b^{−1} from (6), implemented using convolutional layers R^l_i, and the inverse of quantization q^{−1} undo the compression and quantization steps. Figure 2: Scheme of the proposed method.

The graph model shown in FIG1 explains details about the inference (forward pass) and backpropagation (backward pass) phases of the newly introduced functions. The inference phase implements (4)-(6) as explained above. Clearly, the backpropagation phase may seem not obvious at first glance. One difficulty related to the quantized network is that the quantization function itself is not differentiable. But many studies, e.g. BID2, show that a mini-batch-averaged floating-point gradient works well in practice. The functions b and b^{−1} can be represented as gates that make hard decisions similar to ReLU. The gradient of b^{−1} can then be calculated using the results of BID1 as a hard gating of the incoming gradient. Lastly, the gradient of b is just a scaled sum of the gradient vector, calculated by

∇x̂ = (1 / ||x̂||_0) Σ_i ∇x̃_i,

where ||x̂||_0 is a gradient scaling factor that represents the number of nonzero elements in x̂. Practically, the scaling factor can be calculated based on statistical information only once and used as a static hyperparameter for gradient normalization. Since the purpose of the network is to learn and keep only the smallest Ỹ^l, the choice of P^l and R^l initialization is important. Therefore, we can initialize these weight tensors by an identity function that maps the non-compressed feature map to a truncated compressed feature map and vice versa. That provides a good starting point for training. At the same time, other initializations are possible, e.g. noise sampled from some distribution studied by BID1 can be added as well.

To highlight the benefits of the proposed approach, we select a base network architecture with the compressed feature maps according to Section 3.2. The selected architecture is based on a quantized SqueezeNet network. It consists of a sequence of "fire" modules, where each module contains two concatenated "expand" layers and a "squeeze" layer, illustrated in FIG2.
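The bit-level conversion in equations (4)-(6) can be sketched as follows (our NumPy illustration; the actual implementation in the paper is a Caffe layer):

import numpy as np

def binarize(x_hat, bits=8):
    # b: expand each fixed-point scalar into its B binary digits over GF(2).
    # Input:  integer tensor of shape (C, H, W).
    # Output: {0,1} tensor of shape (B*C, H, W), bit dim concatenated with channels.
    masks = (1 << np.arange(bits)).reshape(bits, 1, 1, 1)      # powers of two
    bits_tensor = ((x_hat[None, ...] & masks) > 0).astype(np.uint8)
    B, C, H, W = bits_tensor.shape
    return bits_tensor.reshape(B * C, H, W)

def debinarize(x_bits, bits=8):
    # b^{-1}: a weighted sum of the bits recovers the fixed-point scalar.
    BC, H, W = x_bits.shape
    bits_tensor = x_bits.reshape(bits, BC // bits, H, W).astype(np.int64)
    weights = (1 << np.arange(bits)).reshape(bits, 1, 1, 1)
    return (bits_tensor * weights).sum(axis=0)

x_hat = np.random.randint(0, 256, size=(4, 8, 8))
assert np.array_equal(debinarize(binarize(x_hat)), x_hat)  # lossless round trip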
The "squeeze" layers perform NDR over the field of real or, in case of the quantized model, integer numbers. Specifically, the size of concatenated "expand 1×1" and "expand 3×3" layers is compressed by a factor of 8 along channel dimension by "squeeze 1×1" layer. Activations of only the former one can be stored during inference according to the fusion method. According to the analysis presented in BID5, activations quantized to 8-bit integers do not experience significant accuracy drop. The given base architecture is extended by the proposed method. The quantized "squeeze" layer feature map is converted to its binary representation following Figure 2. Then, the additional compression rate is defined by selecting parameters of P l i. In the simplest case, only a single NDR layer can be introduced with the weights P l. In general, a number of NDR layers can be added with 1×1, 3×3 and other kernels with or without pooling at the expense of increased computational cost. For example, 1×1 kernels allow to learn optimal quantization and to compensate redundancies along channel dimension only. But 3×3 kernels can address spatial redundancies and, while being implemented with stride 2 using convolutional-deconvolutional layers, decrease feature map size along spatial dimensions. We implemented the new binarization layers from Section 3 as well as quantization layers using modified BID5 code in the Caffe BID12 framework. The latter code is modified to accurately support binary quantization during inference and training. SqueezeNet v1.1 is selected as a base floating-point network architecture, and its pretrained weights were downloaded from the publicly available source 2. In the experiments, we compress the "fire2/squeeze" and "fire3/squeeze" layers which are the largest of the "squeeze" layers due to high spatial dimensions. The input to the network has a resolution of 227×227, and the weights are all floating-point. The quantized and compressed models are retrained for 100,000 iterations with a mini-batch size of 1024 on the ImageNet BID18 (ILSVRC2012) training dataset, and SGD solver with a step-policy learning rate starting from 1e-3 divided by 10 every 20,000 iterations. Although this large mini-batch size was used by the original model, it helps the quantized and compressed models to estimate gradients as well. The compressed models were derived and retrained iteratively from the 8-bit quantized model. TAB0 reports top-1 and top-5 inference accuracies of 50,000 images from ImageNet validation dataset. The fourth column in TAB0 represents speed in terms of frames per second (FPS) of network forward pass with mini-batch size of 1 using NVIDIA Tesla P100 GPU. All quantized computations are emulated on GPU using fp32. Hence, it shows a measure of emulation speed for additional layers rather than speed of the optimized layers. According to TAB0, the quantized models after retraining experience -0.2%, 0.6% and 3.5% top-1 accuracy drops for 8-bit, 6-bit and 4-bit quantization, respectively. For comparison, the quantized models without retraining are shown in parentheses. The proposed compression method using 1 × 1 kernels allows us to restore corresponding top-1 accuracy by 1.0% and 2.4% for 6-bit and 4-bit versions at the expense of a small increase in the number of weights and bitwise convolutions. Moreover, we evaluated a model with a convolutional layer followed by a deconvolutional layer both with a 3 × 3 stride 2 kernel at the expense of a 47% increase in weight size for 6-bit activations. 
That allowed us to decrease the feature maps in the spatial dimensions by exploiting local spatial quantization redundancies. The feature map size is then further reduced by a factor of 4, while top-1 accuracy dropped by 4.3% and 4.6% for 8-bit and 6-bit activations, respectively. A comprehensive comparison with state-of-the-art networks is given in Appendix A.

Accuracy experiments. We evaluate object detection using the PASCAL VOC BID3 dataset, which is a more realistic application for autonomous cars, where high-resolution cameras emphasize the benefits of feature map compression. The VOC2007 test dataset contains 4,952 images, and the training dataset of 16,551 images is the union of VOC2007 and VOC2012. We adopted the SSD512 model BID13 for the proposed architecture. SqueezeNet pretrained on ImageNet is used as a feature extractor instead of the original VGG-16 network. This reduces the number of parameters and the overall inference time by a factor of 4 and 3, respectively. The original VOC images are rescaled to 512×512 resolution. As with the ImageNet experiments, we generated several models for comparison: a base floating-point model, quantized models, and compressed models. We apply quantization and compression to the "fire2/squeeze" and "fire3/squeeze" layers which represent, if the fusion technique is applied, more than 80% of total feature map memory due to their large spatial dimensions. Typically, spatial dimensions decrease quadratically because of max pooling layers, compared to linear growth in the depth dimension. The compressed models are derived from the 8-bit quantized model, and both are retrained for 10,000 mini-batch-256 iterations using an SGD solver with a step-policy learning rate starting from 1e-3 and divided by 10 every 2,500 iterations. TAB1 presents mean average precision (mAP) for the original VGG-16 and SqueezeNet-based models as well as the size of the weights, the feature maps to compress, and the speed metric under the assumptions of Section 4.1. The 8-bit quantized model with retraining drops accuracy by less than 0.04%, while the 6-bit, 4-bit and 2-bit models decrease accuracy by 0.5%, 2.2% and 12.3%, respectively. For reference, mAPs for the quantized models without retraining are shown in parentheses. Using the proposed compression-decompression layers with a 1×1 kernel, mAP for the 6-bit model is increased by 0.5% and mAP for the 4-bit model is decreased by 0.5%. We conclude that compression along the channel dimension is not beneficial for SSD, unlike ImageNet classification, either due to low quantization redundancy in that dimension or the choice of hyper-parameters, e.g. mini-batch size. Then, we evaluate the models with spatial-dimension compression, which is intuitively appealing for high-resolution images. Empirically, we found that a 2×2 kernel with stride 2 performs better than a corresponding 3×3 kernel while requiring fewer parameters and computations. According to TAB1, an 8-bit model with a 2×2 kernel and downsampling-upsampling layers achieves 1% higher mAP than the model with a 3×3 kernel and is only 3.7% lower than the base floating-point model.

Memory Requirements. TAB2 summarizes the memory footprint benefits for the evaluated SSD models. As in the previous section, we consider only the largest feature maps, which represent more than 80% of total activation memory. Assuming that the input frame is stored separately, the fusion technique allows feature maps to be compressed by a factor of 19. Note that no additional recomputations are needed.
Second, the conventional 8-bit and 4-bit fixed-point models decrease the size of the feature maps by a factor of 4 and 8, respectively. Third, the proposed model with a 2×2 stride-2 kernel gains another factor of 2 compression compared to the 4-bit fixed-point model, with only 1.5% degradation in mAP. This is similar to the ImageNet experiments, which showed relatively limited compression gain along the channel dimension only. At the same time, learned quantization along combined channel and spatial dimensions pushes the compression gain further. In total, the memory footprint for this feature extractor is reduced by two orders of magnitude.

We introduced a method to decrease memory storage and bandwidth requirements for DNNs. Complementary to conventional approaches that use fused layer computation and quantization, we presented an end-to-end method for learning feature map representations over GF(2) within DNNs. Such a binary representation allowed us to compress network feature maps in a higher-dimensional space using autoencoder-inspired layers embedded into a DNN along channel and spatial dimensions. These compression-decompression layers can be implemented using conventional convolutional layers with bitwise operations. To be more precise, the proposed representation trades the cardinality of the finite field for the dimensionality of the vector space, which makes it possible to learn features at the binary level. The evaluated compression strategy for inference can be adopted for GPUs, CPUs or custom accelerators. Alternatively, existing binary networks can be extended to achieve higher accuracy for emerging applications such as object detection and others.

We compare recently reported ImageNet results for networks that compress feature maps, as well as several configurations of the proposed approach. Most of the prior works use the over-parametrized AlexNet architecture, while ours is based on SqueezeNet. TAB3 shows accuracy for the base networks as well as their quantized versions. Binary XNOR-Net BID17 reports results based on AlexNet as well as ResNet-18. DoReFa-Net BID20 is more flexible and can adjust the number of bits for weights and activations. Since its accuracy is limited by the number of activation bits, we present three cases with 1-bit, 2-bit, and 4-bit activations. The most recent work BID19 solves the problem of binarizing the last-layer weights, but the weights of the first layer are full-precision. Overall, AlexNet-based low-precision networks achieve 43.6%, 49.8% and 53.0% top-1 accuracy for 1-bit, 2-bit and 4-bit activations, respectively. Around 70% of the memory footprint is defined by the first two layers of AlexNet. The fusion technique is difficult in such architectures due to large kernel sizes (11×11 and 5×5 for AlexNet), which can cause extra recomputations. Thus, activations require 95.4KB, 190.0KB and 381.7KB of memory for the 1-bit, 2-bit and 4-bit models, respectively. The NIN-based network from BID19 with 2-bit activations achieves 51.4% top-1 accuracy, but its activation memory is larger than AlexNet's due to late pooling layers. 2 BID17 does not specify whether the activations were binarized or not for the first and the last layer. 3 Weights are not binarized for the first layer. 4 Number of bits for the compressed "fire2,3/squeeze" layers and, in parentheses, for the quantized-only layers.
5 For comparison, accuracy in parentheses represents results for the corresponding model in TAB0.

The SqueezeNet-based models in TAB3 are fine-tuned from the corresponding models in TAB0 for 40,000 iterations with a mini-batch size of 1024, using an SGD solver with a step-policy learning rate starting from 1e-4 and divided by 10 every 10,000 iterations. The model with fusion and 8-bit quantized weights and activations, while having an accuracy similar to the floating-point model, outperforms the state-of-the-art networks in terms of weight and activation memory. The proposed four models from TAB0 further decrease activation memory by adding compression-decompression layers to the "fire2,3/squeeze" modules. This step allowed us to shrink memory from 189.9KB to 165.4KB, 140.9KB, 116.4KB and 110.3KB, depending on the compression configuration. More compression is possible if we apply the proposed approach to the other "squeeze" layers.
Feature map compression method that converts quantized activations into binary vectors followed by nonlinear dimensionality reduction layers embedded into a DNN
1,404
scitldr
Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization. The expense of producing these examples during training often precludes adversarial training from use on complex image datasets. In this study, we explore the mechanisms by which adversarial training improves classifier robustness, and show that these mechanisms can be effectively mimicked using simple regularization methods, including label smoothing and logit squeezing. Remarkably, using these simple regularization methods in combination with Gaussian noise injection, we are able to achieve strong adversarial robustness -- often exceeding that of adversarial training -- using no adversarial examples.

Deep Neural Networks (DNNs) have enjoyed great success in many areas of computer vision, such as classification BID7, object detection BID4, and face recognition BID11. However, the existence of adversarial examples has raised concerns about the security of computer vision systems BID16 BID1. For example, an attacker may cause a system to mistake a stop sign for another object BID3 or mistake one person for another BID14. To address security concerns for high-stakes applications, researchers are searching for ways to make models more robust to attacks. Many defenses have been proposed to combat adversarial examples. Approaches such as feature squeezing, denoising, and encoding BID19 BID13 BID15 BID10 have had some success at pre-processing images to remove adversarial perturbations. Other approaches focus on hardening neural classifiers to reduce adversarial susceptibility. This includes specialized non-linearities BID20, modified training processes BID12, and gradient obfuscation BID0. Despite all of these innovations, adversarial training BID5, one of the earliest defenses, still remains among the most effective and popular strategies. In its simplest form, adversarial training minimizes a loss function that measures performance of the model on both clean and adversarial data as follows:

min_θ Σ_i [ L(x_i, y_i; θ) + κ L(x_{i,adv}, y_i; θ) ],  (1)

where L is a standard (cross entropy) loss function, (x_i, y_i) is an input image/label pair, θ contains the classifier's trainable parameters, κ is a hyper-parameter, and x_{i,adv} is an adversarial example for image x_i. BID9 pose adversarial training as a game between two players that similarly requires computing adversarial examples on each iteration. A key drawback of adversarial training methods is their computational cost; after every mini-batch of training data is formed, a batch of adversarial examples must be produced. To train a network that resists strong attacks, one needs to train with the strongest adversarial examples possible. For example, networks hardened against the inexpensive Fast Gradient Sign Method can be broken by a simple two-stage attack BID17. Current state-of-the-art adversarial training on MNIST and CIFAR-10 uses expensive iterative adversaries BID9, such as the Projected Gradient Descent (PGD) method or the closely related Basic Iterative Method (BIM) BID8. Adversarial training using strong attacks may be 10-100 times more time consuming than standard training methods. This prohibitive cost makes it difficult to scale adversarial training to larger datasets and higher resolutions.
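To make the cost structure of equation (1) concrete, here is a minimal NumPy sketch of adversarial training for a softmax-regression model with a one-step FGSM adversary; the model, names and hyper-parameters are our own illustrative choices, not the paper's setup:

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent_grads(W, x, y):
    # Cross-entropy loss and its gradients for logits z = xW.
    p = softmax(x @ W)                       # (n, classes)
    onehot = np.eye(W.shape[1])[y]
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    dz = (p - onehot) / len(y)
    return loss, x.T @ dz, dz @ W.T          # loss, dL/dW, dL/dx

def adv_train_step(W, x, y, eps=0.3, kappa=1.0, lr=0.1):
    # Inner step: craft FGSM adversarial examples (the expensive part).
    _, _, dx = xent_grads(W, x, y)
    x_adv = np.clip(x + eps * np.sign(dx), 0.0, 1.0)
    # Outer step: descend on clean + adversarial loss, as in equation (1).
    _, gW_clean, _ = xent_grads(W, x, y)
    _, gW_adv, _ = xent_grads(W, x_adv, y)
    return W - lr * (gW_clean + kappa * gW_adv)

W = np.zeros((784, 10))
x = np.random.rand(32, 784)          # stand-in mini-batch
y = np.random.randint(0, 10, 32)
W = adv_train_step(W, x, y)

A multi-step PGD adversary would repeat the inner step several times per mini-batch, which is where the 10-100x training overhead comes from.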
In this study, we show that it is possible to achieve strong robustness -- comparable to or greater than the robustness of adversarial training with a strong iterative attack -- using fast optimization without adversarial examples. We achieve this using standard regularization methods, such as label smoothing BID18 and the more recently proposed logit squeezing BID6. While it has been known for some time that these tricks can improve the robustness of models, we observe that an aggressive application of these inexpensive tricks, combined with random Gaussian noise, is enough to match or even surpass the performance of adversarial training on some datasets. For example, using only label smoothing and augmentation with random Gaussian noise, we produce a CIFAR-10 classifier that achieves over 73% accuracy against black-box iterative attacks, compared to 64% for a state-of-the-art adversarially trained classifier BID9. In the white-box case, classifiers trained with logit squeezing and label smoothing get ≈50% accuracy on iterative attacks, in comparison to ≈47% for adversarial training. Regularized networks without adversarial training are also more robust against non-iterative attacks, and more accurate on non-adversarial examples. Our goal is not just to demonstrate these defenses, but also to dig deep into what adversarial training does, and how it compares to less expensive regularization-based defenses. We begin by dissecting adversarial training, and examining ways in which it achieves robustness. We then discuss label smoothing and logit squeezing regularizers, and how their effects compare to those of adversarial training. We then turn our attention to random Gaussian data augmentation, and explore the importance of this technique for adversarial robustness. Finally, we combine the regularization methods with random Gaussian augmentation, and experimentally compare the robustness achievable using these simple methods to that achievable using adversarial training.

Adversarial training injects adversarial examples into the training data as SGD runs. During training, adversarial perturbations are applied to each training image to decrease the logit corresponding to its correct class. The network must learn to produce logit representations that preserve the correct labeling even when faced with such an attack. At first glance, it seems that adversarial training might work by producing a large "logit gap," i.e., by producing a logit for the true class that is much larger than the logit of other classes. Surprisingly, adversarial training has the opposite effect -- we will see below that it decreases the logit gap. To better understand what adversarial training does, and how we can replicate it, we now break down the different strategies for achieving robustness.

This section presents a simple metric for adversarial robustness that will help us understand adversarial training. Consider an image x, and its logit representation z (i.e. pre-softmax activation) produced by a neural network. Let z_y denote the logit corresponding to the correct class y. If we add a small perturbation δ to x, then the corresponding change in logits is approximately δ^T ∇_x z_y under a linear approximation, where ∇_x z_y is the gradient of z_y with respect to x. Under a linearity assumption, we can calculate the step size ε_L needed to move an example from class y to another class ȳ.
A classifier is susceptible to adversarial perturbation δ if the perturbed logit of the true class is smaller than the perturbed logit of any other class:

z_y + δ^T ∇_x z_y < z_ȳ + δ^T ∇_x z_ȳ.  (2)

Assuming a one-step ℓ∞ attack such as FGSM, the perturbation δ can be expressed as

δ = ε_L sign(∇_x z_ȳ − ∇_x z_y),  (3)

where ε_L is the ℓ∞-norm of the perturbation. Using this choice of δ, equation (2) becomes

z_y − z_ȳ < ε_L ||∇_x z_ȳ − ∇_x z_y||_1,

where ||·||_1 denotes the ℓ1-norm of a vector. Therefore the smallest ℓ∞-norm of the perturbation required is the ratio of the "logit gap" to the "gradient gap", i.e.,

ε_L = (z_y − z_ȳ) / ||∇_x z_ȳ − ∇_x z_y||_1.  (4)

Table 1: Experimental and predicted accuracy of classifiers for MNIST and CIFAR-10. The predicted accuracy is the percentage of images for which ε < ε_L. The empirical accuracy is the percent of images that survive a perturbation of size ε. Attacks on both the cross-entropy (X-ent) and logits as in BID2 (CW) are presented.

Equation (4) measures robustness by predicting the smallest perturbation size needed to switch the class of an image. While the formula for ε_L makes linearity assumptions, the approximation ε_L fairly accurately predicts the robustness of classifiers on the CIFAR-10 dataset (where perturbations are small and linearity assumptions cause little distortion). It is also a good ballpark approximation on MNIST, even after adversarial training (see Table 1). Maximal robustness occurs when ε_L is as large as possible. From equation (4), we observe three different strategies for hardening a classifier:

• Increase the logit gap: Maximize the numerator of equation (4) by producing a classifier whose correct-class logit z_y is relatively large.
• Squash the adversarial gradients: Train a classifier that has small adversarial gradients ∇_x z_ȳ for any class ȳ. In this case a large perturbation is needed to significantly change z_ȳ.
• Maximize gradient coherence: Produce adversarial gradients ∇_x z_ȳ that are highly correlated with the gradient for the correct class ∇_x z_y. This will shrink the denominator of equation (4), and produce robustness even if adversarial gradients are large. In this case, one cannot decrease z_y without also decreasing z_ȳ, and so large perturbations are needed to change the class label.

The most obvious strategy for achieving robustness is to increase the numerator in equation (4) while fixing the denominator. Remarkably, our experimental investigation reveals that adversarial training does not rely on this strategy at all, but rather decreases the logit gap and gradient gap simultaneously. This can be observed in FIG0, which shows distributions of logit gaps for naturally and adversarially trained models on MNIST. Note that the cross entropy loss actually prevents adversarial training from increasing logit gaps. The accuracy of the classifier goes down in the presence of adversarial examples, and so the cross entropy loss is minimized by smaller logit gaps that reflect the lower level of certainty in the adversarial training environment. Adversarial training succeeds by minimizing the denominator in equation (4); it simultaneously squeezes the logits and crushes the adversarial gradients. FIG0 shows that the adversarial gradients shrink dramatically more than the logit gaps, and so the net effect is an increase in robustness. If we closely examine the phenomenon of shrinking logit gaps, we find that this shrink is due in part to an overall shrink in the size of the logits themselves (i.e., |z_i| for any class i).
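For a linear model the gradients in equation (4) are exact, so ε_L can be computed in closed form; the sketch below (our illustration, with hypothetical names) does this for a softmax-regression classifier with logits z = Wx:

import numpy as np

def eps_L(W, x, y):
    # Smallest predicted l-inf perturbation (equation 4) for logits z = Wx.
    # For a linear model, grad_x z_c is exactly the row W[c], so the linear
    # approximation is exact: take the minimum ratio over wrong classes.
    z = W @ x
    ratios = [
        (z[y] - z[c]) / np.abs(W[c] - W[y]).sum()
        for c in range(W.shape[0]) if c != y
    ]
    return min(ratios)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))
x = rng.random(784)
y = int(np.argmax(W @ x))          # treat the predicted class as correct
print(eps_L(W, x, y))              # predicted robustness radius for this x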
To see this shrink in the logits, we plot histograms of the logits when classifiers are adversarially trained with strong adversaries 1, weak adversaries 2, and with no adversarial examples. FIG1 shows that adversarial training does indeed squash the logits, although not enough to fully explain the decrease in |z_y − z_ȳ| in FIG0. 3 We have seen that adversarial training works by squashing adversarial gradients and slightly increasing gradient coherence. But the fact that it cannot do this without decreasing the logit gap leads us to suspect that these quantities are inherently linked. This leads us to ask an important question: if we directly decrease the logit gap, or the logits themselves, using an explicit regularization term, will this have the desired effect of crushing the adversarial gradients?

There are two approaches to replicating the effect on the logits produced by adversarial training. The first is to replicate the decrease in logit gap seen in FIG0, which can be achieved by label smoothing. The second is to directly crush all logit values and mimic the behavior in FIG1, which is known as "logit squeezing."

Label smoothing. Label smoothing converts "one-hot" label vectors into "one-warm" vectors that represent a low-confidence classification. Because large logit gaps produce high-confidence classifications, label-smoothed training data forces the classifier to produce small logit gaps. Label smoothing is a commonly used trick to prevent over-fitting on general classification problems, and it was first observed to boost adversarial robustness by BID18, where it was used as an inexpensive replacement for the network distillation defense BID12. A one-hot label vector y_hot is smoothed using the formula

ŷ = (1 − α) y_hot + (α / N_c) 1,  (5)

where α ∈ [0, 1] is the smoothing parameter and N_c is the number of classes. If we pick α = 0 we get a hard decision vector with no smoothing, while α = 1 creates an ambiguous decision by assigning equal certainty to all classes.

Logit squeezing. A second approach to replicating adversarial training is to directly crush all logit values and mimic the behavior in FIG1. This approach, known as "logit squeezing," works by adding a regularization term to the training objective that explicitly penalizes large logits. BID6 were the first to introduce logit squeezing as an alternative to a "logit pairing" defense. Logit squeezing relies on the loss function

L(θ) = L_xent(θ) + β ||z||_F,  (6)

where β is the squeezing parameter (i.e., the coefficient of the logit-squeezing term) and ||·||_F is the Frobenius norm of the logits for the mini-batch.

Can such simple regularizers really replicate adversarial training? Our experimental results suggest that the simple regularizers alone can hurt adversarial robustness, which agrees with the findings in BID20. However, these strategies become highly effective when combined with a simple trick from the adversarial training literature -- data augmentation with Gaussian noise. Adding Gaussian noise to images during training (i.e., Gaussian noise augmentation) can be used to improve the adversarial robustness of classifiers BID6 BID20. However, the effect of Gaussian noise is not well understood BID6. Here, we take a closer look at the behavior of Gaussian augmentation through systematic experimental investigations, and see that its effects are more complex than one might think.
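The two regularizers are a few lines each; the following NumPy sketch (ours, not reference code from the paper) implements the smoothed cross-entropy of equation (5) and the squeezed loss of equation (6):

import numpy as np

def smooth_labels(y, n_classes, alpha=0.1):
    # Equation (5): one-hot -> one-warm labels.
    onehot = np.eye(n_classes)[y]
    return (1.0 - alpha) * onehot + alpha / n_classes

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def label_smoothing_loss(z, y, alpha=0.1):
    # Cross-entropy against smoothed targets.
    targets = smooth_labels(y, z.shape[1], alpha)
    return -(targets * log_softmax(z)).sum(axis=1).mean()

def logit_squeezing_loss(z, y, beta=0.5):
    # Equation (6): cross-entropy plus a penalty on the mini-batch logit norm.
    onehot = np.eye(z.shape[1])[y]
    xent = -(onehot * log_softmax(z)).sum(axis=1).mean()
    return xent + beta * np.linalg.norm(z)  # Frobenius norm of the logits

z = np.random.randn(32, 10)               # a mini-batch of logits
y = np.random.randint(0, 10, 32)
print(label_smoothing_loss(z, y), logit_squeezing_loss(z, y))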
Label smoothing and logit squeezing become shockingly effective at hardening networks when they are combined with Gaussian noise augmentation. From the robustness plots in FIG2, we can see that training with Gaussian noise alone produces a noticeable change in robustness, which seems mostly attributable to a widening of the logit gap and a slight decrease in the gradient gap (||∇_x z_y − ∇_x z_ȳ||_1). The small increase in robustness from random Gaussian augmentation was also reported by BID6. We also see that label smoothing alone causes a very slight drop in robustness; the shrink in the gradient gap is completely offset by a collapse in the logit gap. Surprisingly, Gaussian noise and label smoothing have a powerful synergistic effect. When used together they cause a dramatic drop in the gradient gap, leading to a surge in robustness. A similar effect occurs for logit squeezing, as shown in Appendix B (Figure 8).

Regularization methods have the potential to squash or align the adversarial gradients, but these properties are only imposed during training on images from the manifold that the "true" data lies on. At test time, the classifier sees adversarial images that do not "look like" training data because they lie off of, but adjacent to, the image manifold. By training the classifier on images with random perturbations, we teach the classifier to enforce the desired properties for input images that lie off the manifold. The generalization property of Gaussian augmentation seems to be independent from, and sometimes in conflict with, the synergistic properties discussed above. In our experiments below, we find that smaller noise levels lead to a stronger synergistic effect, and yield larger ε_L and better robustness to FGSM attacks. However, larger noise levels enable the regularization properties to generalize further off the manifold, resulting in better robustness to iterative attacks or attacks that escape the flattened region by adding an initial random perturbation. See the results on MNIST in Table 2 and the results on CIFAR-10 in TAB3 for various values of σ (the standard deviation of the Gaussian noise). For more comprehensive experiments on the different parameters that contribute to the regularizers, see Table 6 for MNIST and Tables 7 & 8 for CIFAR-10.

Label smoothing (i.e. reducing the variance of the logits) is helpful because it causes the gradient gap to decrease. The decreased gradient gap may be due to smaller element-wise gradient magnitudes, the alignment of the adversarial gradients, or both. To investigate the causes, we plot the ℓ1-norm of the gradients of the logits with respect to the input image 4 and the cosine of the angle between the gradients (FIG3). We see that with label smoothing (and Gaussian augmentation), both the gradient magnitudes decrease and the gradients become more aligned. Larger smoothing parameters α make the gradients both smaller and more aligned. When logit squeezing is used with Gaussian augmentation, the magnitudes of the gradients decrease. The distribution of the cosines between gradients widens, but does not increase as it did for label smoothing. These effects are very similar to the behavior of adversarial training in FIG0. Interestingly, in the case of logit squeezing with Gaussian noise, unlike label smoothing, the numerator of equation (4) increases as well. This increase in the logit gap disappears once we take away Gaussian augmentation (see Appendix B, Figure 8).
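A complete training step for the combined recipe is then just noise injection plus one of the losses above. A hedged sketch for a linear model, reusing smooth_labels and log_softmax from the previous snippet (σ, α and the update rule are our illustrative choices):

import numpy as np

def regularized_step(W, x, y, sigma=0.5, alpha=0.1, lr=0.1):
    # One SGD step of label smoothing + Gaussian noise augmentation.
    # No adversarial examples are generated: the only extra cost over
    # standard training is sampling the noise.
    x_noisy = x + sigma * np.random.randn(*x.shape)   # Gaussian augmentation
    n = len(y)
    z = x_noisy @ W                                   # logits of a linear model
    targets = smooth_labels(y, W.shape[1], alpha)     # equation (5)
    p = np.exp(log_softmax(z))
    dW = x_noisy.T @ (p - targets) / n                # grad of smoothed xent
    return W - lr * dW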
As noted above, simultaneously increasing the numerator and decreasing the denominator of equation (4) potentially gives a slight advantage to logit squeezing.

Multiple factors can affect the robustness of the MNIST classifier 5. While the regularizers do not yield more robustness than adversarial training on MNIST, the results are promising given that these relatively high values of robustness come cheaply in comparison to adversarial training. In Table 2 we notice that increasing the number of training iterations k yields more robust models for both logit squeezing and label smoothing 6. We get more robust models when we use larger smoothing (α) and squeezing (β) parameters, and when Gaussian augmentation is used with a standard deviation σ that is greater than the desired ε (the maximum perturbation size).

Table 2: Accuracy of different MNIST classifiers against PGD and FGSM attacks on X-ent and CW losses under the white-box and black-box threat models. Attacks have maximum ℓ∞ perturbation ε = 0.3. The iterative white-box attacks have an initial random step. The naturally trained model is used for generating black-box attacks. We use the CW loss for the black-box attack.

We trained Wide-ResNet CIFAR-10 classifiers (depth = 28 and k = 10) using aggressive values for the smoothing and squeezing parameters on the CIFAR-10 dataset. Similar to BID9, we use the standard data-augmentation techniques and weight decay. We compare our results to those of BID9. Note that the adversarially trained model from BID9 has been trained for 80,000 iterations on adversaries generated using 7-step PGD. Keeping in mind that each step requires a forward and a backward pass, the running time of training for 80,000 iterations on 7-step PGD examples is equivalent to 640,000 iterations of training with label smoothing or logit squeezing. A short version of our results on white-box attacks is summarized in TAB3. The results of some of our black-box experiments are summarized in Table 4 7. While logit squeezing seems to outperform label smoothing in the white-box setting, label smoothing is slightly better under the black-box threat. We see that aggressive logit squeezing with squeezing parameter β = 10 and σ = 20 results in a model that is more robust than the adversarially trained model from BID9.

BID9: 63.39%* 64.38%* 63.39%* 64.38%* 67.00% 67.25%

Table 4: Black-box attacks on CIFAR-10 models. Attacks are ℓ∞ with ε = 8. Similar to BID9, we build 7-step PGD attacks and FGSM attacks for a public adversarially trained model. *Values taken from the original paper by BID9.

Some defenses work by obfuscating or masking the gradients. BID0 suggest that such models can be identified by performing "sanity checks," such as attacking them with unbounded strong adversaries (i.e., unbounded ε with many iterations). By attacking our robust models using these unbounded attacks, we verify that the unbounded adversary can degrade the accuracy to 0.00%, which implies that the adversarial example generation optimization attack (PGD) is working properly.

Figure 5: The cross-entropy landscape of the first eight images of the validation set for the model trained for 160k iterations with hyper-parameters β = 10 and σ = 30. To plot the loss landscape, we take two random directions r_1 and r_2 (i.e. r_1, r_2 ∼ Rademacher(0.5)). We plot the cross-entropy (i.e. xent) loss at the points x = x_i + ε_1·r_1 + ε_2·r_2, where x_i is the clean image and −10 ≤ ε_1, ε_2 ≤ 10.
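The landscape plot in Figure 5 is straightforward to reproduce in outline; below is a hedged NumPy sketch of the sampling loop (the model and loss function are left as stand-ins, not the paper's network):

import numpy as np

def loss_surface(loss_fn, x_clean, resolution=21, radius=10.0):
    # Evaluate the loss on the plane x = x_clean + e1*r1 + e2*r2 (cf. Figure 5).
    rng = np.random.default_rng(0)
    # Two Rademacher directions: entries are +1 or -1 with equal probability.
    r1 = rng.choice([-1.0, 1.0], size=x_clean.shape)
    r2 = rng.choice([-1.0, 1.0], size=x_clean.shape)
    eps = np.linspace(-radius, radius, resolution)
    surface = np.empty((resolution, resolution))
    for i, e1 in enumerate(eps):
        for j, e2 in enumerate(eps):
            surface[i, j] = loss_fn(x_clean + e1 * r1 + e2 * r2)
    return surface

# Example with a stand-in loss; in practice loss_fn would be the network's
# cross-entropy on the (perturbed) image and its true label.
surface = loss_surface(lambda x: float(np.sum(x ** 2)), np.zeros((32, 32, 3)))
print(surface.shape)  # (21, 21) grid ready for a contour plot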
Also, it is known that models which do not completely break the PGD attack (such as ours) can create a false sense of security by shaping a loss landscape that prevents an ε-bounded but weak adversary from finding strong adversarial examples. This can be done by convoluting the loss landscape so that many locally optimal points exist. Such a false sense of security can be identified by increasing the strength of the adversary, using methods such as increasing the number of steps and random restarts of the PGD attack. We run these stronger PGD attacks on a sample model from TAB3 with hyper-parameters k = 160k, β = 10, and σ = 30. We notice that performing 9 random restarts of the PGD attack on the cross-entropy loss only drops the accuracy slightly, to 49.86%. Increasing the number of PGD steps to 1000 decreases the accuracy slightly more, to 40.94% 8. While our sample model is less robust than the adversarially trained model against such strong iterative white-box attacks, there are other areas where this hardened model is superior to the adversarially trained model: very high robustness in the black-box setting (roughly 3% higher than that for adversarial training according to Table 4) and against white-box non-iterative (or weakly iterative) attacks (roughly 15%); high test accuracy on clean data (roughly 3%); and very fast training compared to adversarial training. To further verify that our robust model does not create a false sense of robustness by "breaking" the optimization problem that generates adversarial examples -- by either masking the gradients or making the loss landscape convoluted -- we visualize the loss landscape of our sample model from TAB3. We plot the classification (cross-entropy) loss for points surrounding the first eight validation images, in the subspace spanned by two random directions 9, in Figure 5. The loss landscape does not appear to be convoluted. This observation is backed up by the fact that increasing the number of PGD attack steps does not substantially affect accuracy.

To check the performance of our proposed regularizers on more complicated datasets with larger numbers of classes, we perform aggressive logit squeezing on the CIFAR-100 dataset, which contains 100 categories. We use the same architecture and settings used for training the CIFAR-10 classifiers. The white-box performance of two hardened models trained with logit squeezing and of a PGD adversarially trained model with the same architecture is summarized in Table 5.

Table 5: White-box iterative attacks on CIFAR-100 models. We use ℓ∞ attacks with ε = 8. For brevity, we only report the results of attacking the cross-entropy loss. We attack the models with adversaries of different strengths by varying the number of PGD steps.

Similar to CIFAR-10, aggressive logit squeezing can result in models as robust as those that are adversarially trained, at a fraction of the cost. The logit-squeezed model that is trained for only 80k iterations achieves high classification accuracy on natural/clean examples. It is also more robust against white-box PGD attacks compared to the adversarially trained model, which requires much more training time. The logit-squeezed model with k = 160k improves robustness and clean accuracy even further and still trains faster than the adversarially trained model (28.8 hours vs 34.3 hours).
We studied the robustness of adversarial training, label smoothing, and logit squeezing through a linear approximation ε_L that relates the magnitude of adversarial perturbations to the logit gap and the difference between the adversarial directions for different labels. Using this simple model, we observe how adversarial training achieves robustness and try to imitate this robustness using label smoothing and logit squeezing. The resulting methods perform well on MNIST, and can achieve results on CIFAR-10 and CIFAR-100 that excel over adversarial training in both robustness and accuracy on clean examples. By demonstrating the effectiveness of these simple regularization methods, we hope this work can help make robust training easier and more accessible to practitioners.

Similarly to what we observed about adversarial training on MNIST, adversarial training on CIFAR-10 works by greatly shrinking the adversarial gradients and also shrinking the logit gaps. The shrink in the gradients is much more dramatic than the shrink in the logit gap, and overwhelms the decrease in the numerator of equation (4). See FIG4.

As shown empirically in Table 6, and analytically using the linear approximation in equation (4) evaluated in Figure 8, logit squeezing worsens robustness when Gaussian augmentation is not used. However, when fused with Gaussian augmentation, logit squeezing achieves good levels of robustness. The addition of Gaussian augmentation has three observed effects: the gradients get squashed, the logit gap increases, and the gradients get slightly more aligned. The increase in the logit gaps increases the numerator in equation (4). This gives a slight edge to logit squeezing in comparison to label smoothing, which mostly works by decreasing the denominator in equation (4).

The results for all of our experiments on MNIST are summarized in Table 6. As can be seen, Gaussian random augmentation by itself (β = α = 0) is effective in increasing robustness against black-box attacks. It does not, however, significantly increase robustness against white-box attacks. Models that are trained with either logit squeezing or label smoothing but without random Gaussian augmentation (σ = 0) can be fooled by an adversary that has knowledge of the model parameters (white-box) with 100% success. They are also not robust in the black-box setting. Increasing the magnitude of the noise (σ) generally increases robustness but degrades performance on clean examples, keeping the number of iterations k constant, for extremely large σ.

Table 6: Accuracy of different models trained on MNIST with a 40-step PGD attack on the cross-entropy (X-ent) loss and the Carlini-Wagner (CW) loss under the white-box and black-box threat models. Attacks are ℓ∞ attacks with a maximum perturbation of ε = 0.3. The iterative white-box attacks have an initial random step. The naturally trained model was used for generating the attacks for the black-box threat model. We use the CW loss for the FGSM attack in the black-box case. k is the number of training iterations.

Here we take a deeper look at regularized training for CIFAR-10. The conclusions that can be drawn in this case parallel those of MNIST discussed in Appendix C. It is worth noting that while the results of logit squeezing outperform those from label smoothing in the white-box setting, training with a large squeezing coefficient β often fails and results in low accuracy on test data. This breakdown of training rarely happens for label smoothing (even for very large smoothing parameters α).
BID9: 100.00% 87.25% 45.84% 46.90% 56.22% 55.57%

Table 7: White-box attacks on the CIFAR-10 models. All attacks are ℓ∞ attacks with ε = 8. For the 20-step PGD, similar to BID9, we use an initial random perturbation.

While it seems that logit squeezing, label smoothing, and adversarial training have a lot in common when we look at the quantities affecting the linear approximation ε_L, we wonder whether they still look similar with respect to other metrics. Here, we look at the sum of the activations in the logit layer for every logit (FIG0) and the sum of activations for every neuron of the penultimate layer (FIG0). The penultimate activations are often seen as the "feature representation" that the neural network learns. By summing the absolute values of the activations over all test examples for every neuron in the penultimate layer, we can identify how many of the neurons are effectively inactive. When we perform natural training, all neurons become active for at least some images. After adversarial training, this is no longer the case. Adversarial training causes the effective dimensionality of the deep feature representation layer to decrease. One can interpret this as adversarial training learning to ignore features that the adversary can exploit (400 out of the 1024 neurons of the penultimate layer are deactivated). Shockingly, both label smoothing and logit squeezing do the same -- they deactivate roughly 400 neurons from the deep feature representation layer.

BID9: 63.39%* 64.38%* 63.39%* 64.38%* 67.00% 67.25%

Table 8: Black-box attacks on the CIFAR-10 models. All attacks are ℓ∞ attacks with ε = 8. Similar to BID9, we build 7-step PGD attacks and FGSM attacks for the public adversarially trained model of MadryLab. We then use the built attacks for attacking the different models. *: Since we do not have the Madry model, we cannot evaluate it under the PGD attack with and without random initialization, and therefore we use the same value reported by them for both.

As a sanity check, and to verify that the robustness of our models is not due to degrading the functionality of PGD attacks, we verify that our models indeed have zero accuracy when the adversary is allowed to make huge perturbations. In Table 9, by performing an unbounded PGD attack on a sample logit-squeezed model for CIFAR-10 (β = 10, σ = 30, and k = 160k), we verify that our attack is not benefiting from obfuscated gradients.

Table 9: The effect of unbounded ε on the accuracy. The decline in accuracy, as a sanity check, shows that the sample model is at least not completely breaking the PGD attack and is not obfuscating the gradients.

We attack our sample hardened CIFAR-10 model (β = 10, Gaussian augmentation σ = 30, and k = 160k iterations) by performing many random restarts. Random restarts can potentially increase the strength of a PGD attack that has a random initialization. As shown in Table 10, increasing the number of random restarts does not significantly degrade robustness. It should be noted that attacking with more iterations and more random restarts hurts adversarially trained models as well (see the leaderboard in Madry's CIFAR-10 GitHub repository).

Table 10: The effect of the number of random restarts, while generating the adversarial examples, on the accuracy of a model trained with logit squeezing. It shows that the accuracy plateaus at 9 random restarts.

Another way to increase the strength of an adversary is to increase the number of PGD steps in the attack.
We notice that increasing the number of steps does not greatly affect the robustness of our sample logit-squeezed model (see Table 11).

Table 11: The effect of the number of steps of the white-box PGD attack on the CW loss (the worst case based on TAB3) for the model trained in 160k steps with logit squeezing parameters β = 10 and σ = 30 on the CIFAR-10 dataset. The model remains resistant against ℓ∞ attacks with ε = 8.

Figure 9: The cross-entropy landscape of the first eight images of the validation set for the model trained for 160k iterations with hyper-parameters β = 10 and σ = 30. To plot the loss landscape, we take walks in one random direction r_1 ∼ Rademacher(0.5) and the adversarial direction a_2 = sign(∇_x xent), where xent is the cross-entropy loss. We plot the cross-entropy (i.e. xent) loss at the points x = x_i + ε_1·r_1 + ε_2·a_2, where x_i is the clean image and −10 ≤ ε_1, ε_2 ≤ 10. As can be seen, moving along the adversarial direction changes the loss value substantially, while moving along the random direction does not cause any significant changes.

Similar to Figure 5, we plot the classification loss landscape surrounding the input images for the first eight images of the validation set. Unlike Figure 5, in which we perturbed the clean image along two random directions, in Figure 9 we explore the neighborhood of the clean image by moving in the adversarial direction and one random direction. From Figure 9 we observe that the direction along which the classification loss truly changes is the adversarial direction, which illustrates that the logit-squeezed model is not masking the gradients.

FIG0: For MNIST, we plot the cumulative sum of activation magnitudes for all neurons in the penultimate layer of a network produced by natural training, adversarial training, natural training with random noise, label smoothing (LS = 0.2) with random noise, and logit squeezing (β = 0.5) with random noise. In all cases, the noise is Gaussian with σ = 0.5. Interestingly, the combination of Gaussian noise and label smoothing, similar to the combination of Gaussian noise and logit squeezing, deactivates roughly 400 neurons. This is similar to adversarial training. In some sense, it seems that all three methods cause the "effective" dimensionality of the deep feature representation layer to shrink.

FIG0: For MNIST, we plot the cumulative sum of activation magnitudes for all neurons in the logit layer of a network produced by natural training, adversarial training, natural training with random noise, label smoothing (LS = 0.2) with random noise, and logit squeezing (β = 0.5) with random noise. In all cases, the noise is Gaussian with σ = 0.5.
Achieving strong adversarial robustness comparable to adversarial training without training on adversarial examples
1,405
scitldr
A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task. Supervised learning has proven extremely effective for many problems where large amounts of labeled training data are available. There is a common hope that unsupervised learning will prove similarly powerful in situations where labels are expensive, impractical to collect, or where the prediction target is unknown during training. Unsupervised learning, however, has yet to fulfill this promise. One explanation for this failure is that unsupervised representation learning algorithms are typically mismatched to the target task. Ideally, learned representations should linearly expose high level attributes of data (e.g. object identity) and perform well in semi-supervised settings. Many current unsupervised objectives, however, optimize for objectives such as log-likelihood of a generative model or reconstruction error, producing useful representations only as a side effect. Unsupervised representation learning seems uniquely suited for meta-learning (BID0). Unlike most tasks where meta-learning is applied, unsupervised learning does not define an explicit objective, which makes it impossible to phrase the task as a standard optimization problem. It is possible, however, to directly express a meta-objective that captures the quality of representations produced by an unsupervised update rule by evaluating the usefulness of the representation for candidate tasks. In this work, we propose to meta-learn an unsupervised update rule by meta-training on a meta-objective that directly optimizes the utility of the unsupervised representation. Unlike hand-designed unsupervised learning rules, this meta-objective directly targets the usefulness of a representation generated from unlabeled data for later supervised tasks. By recasting unsupervised representation learning as meta-learning, we treat the creation of the unsupervised update rule as a transfer learning problem. Instead of learning transferable features, we learn a transferable learning rule which does not require access to labels and generalizes across both data domains and neural network architectures.
Although we focus on the meta-objective of semi-supervised classification here, in principle a learning rule could be optimized to generate representations for any subsequent task. Unsupervised learning is a topic of broad and diverse interest. Here we briefly review several techniques that can lead to a useful latent representation of a dataset. In contrast to our work, each method imposes a manually defined training algorithm or loss function, whereas we learn the algorithm that creates useful representations as determined by a meta-objective. Autoencoders work by compressing the input and optimizing a reconstruction loss. Extensions have been made to de-noise data, as well as to compress information in an information-theoretic way. Later work further explored scaling up these unsupervised methods to large image datasets. Generative adversarial networks take another approach to unsupervised feature learning. Instead of a loss function, an explicit min-max optimization is defined to learn a generative model of a data distribution. Recent work has shown that this training procedure can learn unsupervised features useful for few-shot learning. Other techniques rely on self-supervision, where labels are easily generated to create a non-trivial 'supervised' loss. Domain knowledge of the input is often necessary to define these losses. Some methods use unscrambling of jigsaw-like crops of an image; others rely on temporal ordering in videos. Another approach to unsupervised learning relies on feature space design such as clustering. Prior work has shown that k-means can be used for feature learning, has jointly learned features and cluster assignments, and has developed scalable techniques to cluster by predicting noise. Other techniques define various desirable properties of the latent representation of the input, such as predictability, complexity of the encoding mapping, independence, or sparsity, and optimize to achieve these properties. Most meta-learning algorithms consist of two levels of learning, or 'loops' of computation: an inner loop, where some form of learning occurs (e.g. an optimization process), and an outer loop or meta-training loop, which optimizes some aspect of the inner loop, parameterized by meta-parameters. The performance of the inner loop computation for a given set of meta-parameters is quantified by a meta-objective. Meta-training is then the process of adjusting the meta-parameters so that the inner loop performs well on this meta-objective. Meta-learning approaches differ by the computation performed in the inner loop, the domain, the choice of meta-parameters, and the method of optimizing the outer loop. Some of the earliest work in meta-learning explores a variety of meta-learning and self-referential algorithms. Similarly to our algorithm, Bengio et al. (1990; 1992) propose to learn a neuron-local learning rule, though their approach differs in task and problem formulation. Related work meta-learns supervised learning rules which mix local and global network information. A number of papers propose meta-learning for few-shot learning, though these do not take advantage of unlabeled data; others make use of both labeled and unlabeled data, or use a task created with no supervision to then train few-shot detectors. Meta-learning has also been used for unsupervised learning, primarily in the context of clustering and with a small number of meta-parameters. Figure 1: Left: Schematic for meta-learning an unsupervised learning algorithm.
The inner loop computation consists of iteratively applying the UnsupervisedUpdate to a base model. During meta-training the UnsupervisedUpdate (parameterized by θ) is itself updated by gradient descent on the MetaObjective. Right: Schematic of the base model and UnsupervisedUpdate. Unlabeled input data, x0, is passed through the base model, which is parameterized by W and colored green. The goal of the UnsupervisedUpdate is to modify W to achieve a top layer representation xL which performs well at few-shot learning. In order to train the base model, information is propagated backwards by the UnsupervisedUpdate in a manner analogous to backprop. Unlike in backprop, however, the backward weights V are decoupled from the forward weights W. Additionally, unlike backprop, there is no explicit error signal as there is no loss. Instead at each layer, and for each neuron, a learning signal is injected by a meta-learned MLP parameterized by θ, with hidden state h. Weight updates are again analogous to those in backprop, and depend on the hidden state of the pre- and post-synaptic neurons for each weight. To allow easy comparison against other existing approaches, we present a more extensive survey of previous work in meta-learning in table form in TAB1, highlighting differences in choice of task, structure of the meta-learning problem, choice of meta-architecture, and choice of domain. To our knowledge, we are the first meta-learning approach to tackle the problem of unsupervised representation learning, where the inner loop consists of unsupervised learning. This contrasts with transfer learning, where a neural network is instead trained on a similar dataset, and then fine-tuned or otherwise post-processed on the target dataset. We additionally believe we are the first representation meta-learning approach to generalize across input data modalities as well as datasets, the first to generalize across permutation of the input dimensions, and the first to generalize across neural network architectures (e.g. layer width, network depth, activation function). We consider a multilayer perceptron (MLP) with parameters φt as the base model. The inner loop of our meta-learning process trains this base model via iterative application of our learned update rule. See Figure 1 for a schematic illustration and Appendix A for a more detailed diagram. In standard supervised learning, the 'learned' optimizer is stochastic gradient descent (SGD). A supervised loss l(x, y) is associated with this model, where x is a minibatch of inputs, and y are the corresponding labels. The parameters φt of the base model are then updated iteratively by performing SGD using the gradient ∂l(x, y)/∂φt. This supervised update rule can be written as φt+1 = SupervisedUpdate(φt, xt, yt; θ), where t denotes the inner-loop iteration or step. Here θ are the meta-parameters of the optimizer, which consist of hyper-parameters such as learning rate and momentum. In this work, our learned update is a parametric function which does not depend on label information, φt+1 = UnsupervisedUpdate(φt, xt; θ). This form of the update rule is general; it encompasses many unsupervised learning algorithms and all methods in Section 2.1. In traditional learning algorithms, expert knowledge or a simple hyper-parameter search determines θ, which consists of a handful of meta-parameters such as learning rate and regularization constants.
In contrast, our update rule will have orders of magnitude more meta-parameters, including the weights of a neural network. We train these meta-parameters by performing SGD on the sum of the MetaObjective over the course of (inner loop) training in order to find optimal parameters θ* = argmin_θ E_task[Σt MetaObjective(φt)] that minimize the meta-objective over a distribution of training tasks. Note that φt is a function of θ since θ affects the optimization trajectory. In the following sections, we briefly review the main components of this model: the base model, the UnsupervisedUpdate, and the MetaObjective. See the Appendix for a complete specification. Additionally, code and meta-trained parameters θ for our meta-learned UnsupervisedUpdate are available online. Our base model consists of a standard fully connected multi-layer perceptron (MLP), with batch normalization BID6, and ReLU nonlinearities. We chose this as opposed to a convolutional model to limit the inductive bias of convolutions in favor of learned behavior from the UnsupervisedUpdate. We call the pre-nonlinearity activations z^1, ..., z^L, and the post-nonlinearity activations x^1, ..., x^L, where L is the total number of layers, and x^0 ≡ x is the network input (raw data). The parameters are φ = {W^1, b^1, V^1, ..., W^L, b^L, V^L}, where W^l and b^l are the weights and biases (applied after batch norm) for layer l, and V^l are the corresponding weights used in the backward pass. We wish for our update rule to generalize across architectures with different widths, depths, or even network topologies. To achieve this, we design our update rule to be neuron-local, so that updates are a function of pre- and post-synaptic neurons in the base model, and are defined for any base model architecture. This has the added benefit that it makes the weight updates more similar to synaptic updates in biological neurons, which depend almost exclusively on local pre- and post-synaptic neuronal activity BID7. In practice, we relax this constraint and incorporate some cross-neuron information to decorrelate neurons (see Appendix G.5 for more information). To build these updates, each neuron i in every layer l in the base model has an MLP, referred to as an update network, associated with it, with output h^l_bi = MLP(x^l_bi, z^l_bi, V^l, δ^l_bi; θ), where b indexes the training minibatch. The inputs to the MLP are the feedforward activations (x^l and z^l) defined above, and feedback weights and an error signal (V^l and δ^l, respectively), which are defined below. All update networks share meta-parameters θ. Evaluating the statistics of unit activation over a batch of data has proven helpful in supervised learning BID6. It has similarly proven helpful in hand-designed unsupervised learning rules, such as sparse coding and clustering. We therefore allow h^l_bi to accumulate statistics across examples in each training minibatch. During an unsupervised training step, the base model is first run in a standard feed-forward fashion, populating x^l_bi, z^l_bi. As in supervised learning, an error signal δ^l_bi is then propagated backwards through the network. Unlike in supervised backprop, however, this error signal is generated by the corresponding update network for each unit. It is read out by linear projection of the per-neuron hidden state h, δ^l_bi = lin(h^l_bi), and propagated backward using a set of learned 'backward weights' (V^l)^T, rather than the transpose of the forward weights (W^l)^T as would be the case in backprop (diagrammed in Figure 1).
This is done to be more biologically plausible. Again as in supervised learning, the weight updates (∆W^l) are a product of pre- and post-synaptic signals. Unlike in supervised learning, however, these signals are generated using the per-neuron update networks: the update for weight W^l_ij is read out from the hidden states of the post-synaptic neuron (h^l_bi) and pre-synaptic neuron (h^{l-1}_bj), together with the current weight W^l_ij. The full weight update (which involves normalization and decorrelation across neurons) is defined in Appendix G.5. The meta-objective determines the quality of the unsupervised representations. In order to meta-train via SGD, this loss must be differentiable. The meta-objective we use in this work is based on fitting a linear regression to labeled examples with a small number of data points. In order to encourage the learning of features that generalize well, we estimate the linear regression weights on one minibatch {x_a, y_a} of K data points, and evaluate the classification performance on a second minibatch {x_b, y_b}, also with K data points: the MetaObjective is the loss of the regression fit on {x^L_a, y_a} evaluated on {x^L_b, y_b}, where x^L_a, x^L_b are features extracted from the base model on data x_a, x_b, respectively. The target labels y_a, y_b consist of one-hot encoded labels and potentially also regression targets from data augmentation (e.g. rotation angle, see Section 4.2). We found that using a cosine distance, CosDist, rather than unnormalized squared error improved stability. Note this meta-objective is only used during meta-training and not used when applying the learned update rule. The inner loop computation is performed without labels via the UnsupervisedUpdate. We choose to meta-optimize via SGD as opposed to reinforcement learning or other black-box methods, due to the superior convergence properties of SGD in high dimensions, and the high-dimensional nature of θ. Training and computing derivatives through long recurrent computation of this form is notoriously difficult BID9. To improve stability and reduce the computational cost we approximate the gradients ∂MetaObjective/∂θ via truncated backprop through time BID10. Many additional design choices were also crucial to achieving stability and convergence in meta-learning, including the use of batch norm, and restricting the norm of the UnsupervisedUpdate update step (a full discussion of these and other choices is in Appendix B). Generalization in our learned optimizer comes from both the form of the UnsupervisedUpdate (Section 3.2), and from the meta-training distribution. Our meta-training distribution is composed of both datasets and base model architectures. We construct a set of training tasks consisting of CIFAR10 BID11 and multi-class classification from subsets of classes from Imagenet BID12, as well as from a dataset consisting of rendered fonts (Appendix H.1.1). We find that increased training dataset variation actually improves the meta-optimization process. To reduce computation we restrict the input data to 16x16 pixels or less during meta-training, and resize all datasets accordingly. For evaluation, we use MNIST BID13, Fashion MNIST BID14, IMDB BID15, and a hold-out set of Imagenet classes. We additionally sample the base model architecture. We sample the number of layers uniformly between 2 and 5 and the number of units per layer logarithmically between 64 and 512. As part of preprocessing, we permute all inputs along the feature dimension, so that the UnsupervisedUpdate must learn a permutation-invariant learning rule. Unlike other work, we focus explicitly on learning a learning algorithm as opposed to the discovery of fixed feature extractors that generalize across similar tasks.
This makes the learning task much harder, as the UnsupervisedUpdate has to discover the relationship between pixels based solely on their joint statistics, and cannot "cheat" and memorize pixel identity. To provide further dataset variation, we additionally augment the data with shifts, rotations, and noise. We add these augmentation coefficients as additional regression targets for the meta-objective (e.g. rotate the image and predict the rotation angle as well as the image class). For additional details, see Appendix H.1.1. We implement the above models in distributed TensorFlow BID16. Training uses 512 workers, each of which performs a sequence of partial unrolls of the inner loop UnsupervisedUpdate, and computes gradients of the meta-objective asynchronously. Training takes ∼8 days, and consists of ∼200 thousand updates to θ with minibatch size 256. Additional details are in Appendix C. First, we examine limitations of existing unsupervised and meta-learning methods. Then, we show meta-training and generalization properties of our learned optimizer, and finally we conclude by visualizing how our learned update rule works. For details of the experimental setup, see Appendix H. To illustrate the negative consequences of objective function mismatch in unsupervised learning algorithms, we train a variational autoencoder on 16x16 CIFAR10. Over the course of training we evaluate few-shot classification performance using the learned latent representations. Training curves can be seen in FIG0. Despite continuing to improve the VAE objective throughout training (not shown here), the classification accuracy decreases sharply later in training. To demonstrate the reduced generalization that results from learning transferable features rather than an update algorithm, we train a prototypical network with and without the input shuffling described in Section 4.2. As the prototypical network primarily learns transferable features, performance is significantly hampered by input shuffling. Results are in FIG0 (right). FIG0 caption: Left: Continuing to optimize a variational auto-encoder (VAE) hurts few-shot accuracy after some number of steps (dashed line). Right: Prototypical networks transfer features rather than a learning algorithm, and perform poorly if tasks don't have consistent data structure. Training a prototypical network with a fully connected architecture (same as our base model) on a MiniImagenet 10-way classification task with either intact inputs (light purple) or by permuting the pixels before every training and testing task (dark purple). Performance with permuted inputs is greatly reduced (gray line). Our performance is invariant to pixel permutation. While training, we monitor a rolling average of the meta-objective averaged across all datasets, model architectures, and the number of unrolling steps performed. In FIG2 the training loss is continuing to decrease after 200 hours of training, which suggests that the approximate training techniques still produce effective learning. In addition to this global number, we measure performance obtained by rolling out the UnsupervisedUpdate on various meta-training and meta-testing datasets. We see that on held-out image datasets, such as MNIST and Fashion MNIST, the evaluation loss is still decreasing. However, for datasets in a different domain, such as IMDB sentiment prediction BID15, we start to see meta-overfitting. For all remaining experimental results, unless otherwise stated, we use meta-parameters, θ, for the UnsupervisedUpdate resulting from 200 hours of meta-training.
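The objective-mismatch probe in this section is straightforward to replicate. The sketch below assumes hypothetical `train_vae_step` and `encode` functions and a small labeled probe split (`xp_tr`, `yp_tr`, `xp_te`, `yp_te`); none of these names come from the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_accuracy(encode, x_train, y_train, x_test, y_test):
    """Few-shot readout: fit a linear classifier on frozen VAE latents."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(encode(x_train), y_train)
    return clf.score(encode(x_test), y_test)

# Periodically evaluate while the VAE objective keeps improving.
history = []
for step in range(100_000):
    train_vae_step()                # one gradient step on the ELBO (hypothetical)
    if step % 5_000 == 0:
        history.append((step, probe_accuracy(encode, xp_tr, yp_tr, xp_te, yp_te)))
# A drop in probe accuracy late in training, despite a still-improving ELBO,
# is the objective-function mismatch described above.
```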
The goal of this work is to learn a general-purpose unsupervised representation learning algorithm. As such, this algorithm must be able to generalize across a wide range of scenarios, including tasks that are not sampled i.i.d. from the meta-training distribution. In the following sections, we explore a subset of the factors we seek to generalize over. In FIG3, we compare performance on few-shot classification with 10 examples per class. We evaluate test performance on holdout datasets of MNIST and Fashion MNIST at 2 resolutions: 14×14 and 28×28 (larger than any dataset experienced in meta-training). On the same base model architecture, our learned UnsupervisedUpdate leads to performance better than a variational autoencoder, supervised learning on the labeled examples, and random initialization with a trained readout layer. FIG3 caption: Left: The learned UnsupervisedUpdate generalizes to unseen datasets. Our learned update rule produces representations more suitable for few-shot classification than those from random initialization or a variational autoencoder, and outperforms fully supervised learning on the same labeled examples. Error bars show standard error. Right: Early in meta-training (purple), the UnsupervisedUpdate is able to learn useful features on a 2-way text classification dataset, IMDB, despite being meta-trained only on image datasets. Later in meta-training (red) performance drops due to the domain mismatch. We show inner-loop training, consisting of 5k applications of the UnsupervisedUpdate, evaluating the MetaObjective each iteration. Error bars show standard error across 10 runs. To further explore generalization limits, we test our learned optimizer on data from a vastly different domain. We train on a binary text classification dataset: IMDB movie reviews BID15, encoded by computing a bag of words with 1K words. We evaluate using a model 30 hours and 200 hours into meta-training (see FIG3). Despite being trained exclusively on image datasets, the 30-hour learned optimizer improves upon the random initialization by almost 10%. When meta-training for longer, however, the learned optimizer "meta-overfits" to the image domain, resulting in poor performance. This performance is quite low in an absolute sense for this task. Nevertheless, we find this very exciting as we are unaware of any work showing this kind of transfer of learned rules from images to text. We train models of varying depths and unit counts with our learned optimizer and compare at different points in training (FIG4). We find that despite only training on networks with 2 to 5 layers and 64 to 512 units per layer, the learned rule generalizes to 11 layers and 10,000 units per layer. FIG4 caption (legend: ReLU, Leaky ReLU, ELU, Softplus, Swish, Tanh): Left: The learned UnsupervisedUpdate is capable of optimizing base models with hidden sizes and depths outside the meta-training regime. As we increase the number of units per layer, the learned model can make use of this additional capacity despite never having experienced it during meta-training. Right: The learned UnsupervisedUpdate generalizes across many different activation functions not seen in training. We show accuracy over the course of training on 14x14 MNIST. Next we look at generalization over different activation functions. We apply our learned optimizer on base models with a variety of different activation functions. Performance is evaluated at different points in training (FIG4). Despite training only on ReLU activations, our learned optimizer is able to improve on random initializations in all cases.
For certain activations, such as leaky ReLU BID17 and Swish BID18, there is little to no decrease in performance. Another interesting case is the step activation function. These activations are traditionally challenging to train as there is no useful gradient signal. Despite this, our learned UnsupervisedUpdate is capable of optimizing, as it does not use base model gradients, and achieves performance double that of random initialization. To analyze how our learned optimizer functions, we analyze the first layer filters over the course of meta-training. Despite the permutation-invariant nature of our data (enforced by shuffling input image pixels before each unsupervised training run), the base model learns features such as those shown in Figure 6, which appear template-like for MNIST, and local-feature-like for CIFAR10. Early in training, there are coarse features, and a lot of noise. As the meta-training progresses, more interesting and local features emerge. In an effort to understand what our algorithm learns to do, we fed it data from the two moons dataset. We find that despite being a 2D dataset, dissimilar from the image datasets used in meta-training, the learned model is still capable of manipulating and partially separating the data manifold in a purely unsupervised manner (Figure 6). We also find that almost all the variance in the embedding space is dominated by a few dimensions. As a comparison, we do the same analysis on MNIST. In this setting, the explained variance is spread out over more of the principal components. This makes sense as the generative process contains many more latent dimensions, at least enough to express the 10 digits. Figure 6: Left: From left to right we show first layer base model receptive fields produced by our learned UnsupervisedUpdate rule over the course of meta-training. Each pane consists of first layer filters extracted from φ after 10k applications of UnsupervisedUpdate on MNIST (top) and CIFAR10 (bottom). For MNIST, the optimizer learns image-template-like features. For CIFAR10, low frequency features evolve into higher frequency and more spatially localized features. For more filters, see Appendix D. Center: Visualization of learned representations before (left) and after (right) training a base model with our learned UnsupervisedUpdate for two moons (top) and MNIST (bottom). The UnsupervisedUpdate is capable of manipulating the data manifold, without access to labels, to separate the data classes. Visualization shows a projection of the 32-dimensional representation of the base network onto the top three principal components. Right: Cumulative variance explained using principal components analysis (PCA) on the learned representations. The representation for two moons data (red) is much lower dimensional than MNIST (blue), although both occupy a fraction of the full 32-dimensional space. In this work we meta-learn an unsupervised representation learning update rule. We show performance that matches or exceeds existing unsupervised learning techniques on held-out tasks. Additionally, the update rule can train models of varying widths, depths, and activation functions. More broadly, we demonstrate an application of meta-learning for learning complex optimization tasks where no objective is explicitly defined.
Analogously to how increased data and compute have powered supervised learning, we believe this work is a proof of principle that the same can be done with algorithm design: replacing hand-designed techniques with architectures designed for learning and learned from data via meta-learning. Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. Figure App.1: Schematic for meta-learning an unsupervised learning algorithm. We show the hierarchical nature of both the meta-training procedure and update rule. a) Meta-training, where the meta-parameters, θ, are updated via our meta-optimizer (SGD). b) The gradients of the MetaObjective with respect to θ are computed by backpropagation through the unrolled application of the UnsupervisedUpdate. c) UnsupervisedUpdate updates the base model parameters (φ) using a minibatch of unlabeled data. d) Each application of UnsupervisedUpdate involves computing a forward and "backward" pass through the base model. The base model itself is a fully connected network producing hidden states x^l for each layer l. The "backward" pass through the base model uses an error signal from the layer above, δ, which is generated by a meta-learned function. e) The weight updates ∆φ are computed using a convolutional network, using δ and x from the pre- and post-synaptic neurons, along with several other terms discussed in the text. Training and computing derivatives through recurrent computation of this form is notoriously difficult BID9. Training parameters of recurrent systems in general can lead to chaos. We used the usual techniques such as gradient clipping BID19, small learning rates, and adaptive learning rate methods (in our case Adam), but in practice this was not enough to train most UnsupervisedUpdate architectures. In this section we address other techniques needed for stable convergence. When training with truncated backprop the problem shifts from pure optimization to something more like optimizing on a Markov Decision Process where the state space is the base-model weights, φ, and the 'policy' is the learned optimizer. While traversing these states, the policy is constantly meta-optimized and changing, thus changing the distribution of states the optimizer reaches. This type of non-i.i.d. training has been discussed at great length with respect to on- and off-policy RL training algorithms BID21. Other works cast optimizer meta-learning as RL BID2 for this very reason, at a large cost in terms of gradient variance. In this work, we partially address this issue by training a large number of workers in parallel, to always maintain a diverse set of states when computing gradients. For similar reasons, the number of steps per truncation, and the total number of unsupervised training steps, are both sampled in an attempt to limit the bias introduced by truncation. We found restricting the maximum inner loop step size to be crucial for stability. BID22 studied the effect of learning rates with respect to the stability of optimization and showed that as the learning rate increases gradients become chaotic. This effect was later demonstrated with respect to neural network training. If learning rates are not constrained, we found that they rapidly grew and entered this chaotic regime. Another technique we found useful in addressing these problems is the use of batch norm in both the base model and in the UnsupervisedUpdate rule.
Multi-layer perceptron training traditionally requires very precise weight initialization for learning to occur. Poorly scaled initialization can make learning impossible BID23. When applying a learned optimizer, especially early in meta-training of the learned optimizer, it is very easy for the learned optimizer to cause high-variance weights in the base model, after which recovery is difficult. Batch norm helps solve these issues by making more of the weight space usable. We implement the described models in distributed TensorFlow BID16. We construct a cluster of 512 workers, each of which computes gradients of the meta-objective asynchronously. Each worker trains on one task by first sampling a dataset, architecture, and a number of training steps. Next, each worker samples k unrolling steps, does k applications of the UnsupervisedUpdate(·; θ), computes the MetaObjective on each new state, computes the gradient of the MetaObjective with respect to θ, and sends this gradient to a parameter server. The final base-model state, φ, is then used as the starting point for the next unroll until the specified number of steps is reached. These gradients from different workers are batched and θ is updated with asynchronous SGD. By batching gradients as workers complete unrolls, we eliminate most gradient staleness while retaining the compute efficiency of asynchronous workers, especially given heterogeneous workloads which arise from dataset and model size variation. An overview of our training can be seen in Algorithm F. Due to the small base models and the sequential nature of our compute workloads, we use multi-core CPUs as opposed to GPUs. Training occurs over the course of ∼8 days with ∼200 thousand updates to θ with minibatch size 256. In this section we describe the details of our base model and learned optimizer. First, we describe the inner loop, and then we describe the meta-training procedure in more depth. In the following sections, λ_name denotes a hyper-parameter for the learned update rule. The design goals of this system are stated in the text body. At the time of writing, we were unaware of any similar systems, so as such the space of possible architectures was massive. Future work consists of simplifying and improving abstractions around algorithmic components. An open source implementation of the UnsupervisedUpdate can be found at https://github.com/tensorflow/models/tree/master/research/learning_unsupervised_learning. The inner loop computation consists of iterative application of the UnsupervisedUpdate on a Base Model parameterized by φ, i.e. φt+1 = UnsupervisedUpdate(φt; θ), where φ consists of forward weights, W^l, and biases, b^l, parameterizing a multi-layer perceptron, as well as backward weights, V^l, used by UnsupervisedUpdate during inner-loop training. This computation can be broken down further as a forward pass on an unlabeled batch of data, z^l = BatchNorm(W^l x^{l−1}) + b^l and x^l = ReLU(z^l), where z^l and x^l are the pre- and post-activations of layer l, and x^0 is the input data. We then compute weight updates ∆W^l, ∆V^l, and ∆b^l. Finally, the next set of φ forward weights (W^l), biases (b^l), and backward weights (V^l) are computed. We use an SGD-like update but with the addition of a decay term. Equivalently, this can be seen as setting the weights and biases to an exponential moving average of the ∆W^l, ∆V^l, and ∆b^l terms: W^l ← (1 − λ_φlr)·W^l + λ_φlr·∆W^l (and similarly for V^l and b^l). We use λ_φlr = 3e-4 in our work.
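A minimal sketch of this decayed update follows; the exponential-moving-average form is our reading of the description above, not the verbatim released code.

```python
LAMBDA_PHI_LR = 3e-4  # inner-loop rate from the text

def apply_update(param, delta, lr=LAMBDA_PHI_LR):
    # Exponential moving average toward the proposed update target: the
    # parameter decays toward delta instead of accumulating raw SGD steps.
    return (1.0 - lr) * param + lr * delta

# Applied to every forward weight W, backward weight V, and bias b:
# W = apply_update(W, dW); V = apply_update(V, dV); b = apply_update(b, db)
```

The decay term bounds how far the base model can drift in any one step, which complements the update-norm restrictions discussed in Appendix B.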
∆W^l, ∆V^l, and ∆b^l are computed via meta-learned functions (parameterized by θ). In the following sections, we describe the functional form of the base model, f, as well as the functional form of ComputeDeltaWeight(·; θ). The base model, the model our learned update rule is training, is an L-layer multi-layer perceptron with batch norm. We define φ as this model's parameters, consisting of weights (W^l) and biases (b^l) as well as the backward weights (V^l) used only during inner-loop training (applications of UnsupervisedUpdate). We define N_1, ..., N_L to be the sizes of the layers of the base model and N_0 to be the size of the input data. The forward computation, parameterized by φ, consumes batches of unlabeled data from a dataset D: z^l = BatchNorm(W^l x^{l−1}) + b^l, x^l = ReLU(z^l). We define f(x; φ) as a function that returns the set of internal pre- and post-activation hidden states, as well as f̄(x; φ) as the function returning the final hidden state x^L. We define the MetaObjective to be a few-shot linear regression. To increase stability and avoid undesirable loss landscapes, we additionally center as well as normalize the predicted target before doing the loss computation. The full computation is as follows, where N_classes is the number of classes, and y, y′ are one-hot encoded labels. First, the inputs are converted to embeddings with the base model, x^L = f̄(x; φ) and x′^L = f̄(x′; φ) (App. 19). Next, we center and normalize the prediction targets. We show this for y, but y′ is processed identically: ȳ = y − mean(y), ŷ = ȳ/||ȳ||. We then solve for the linear regression weights in closed form with features x^L and targets ŷ. We account for a bias by concatenating a 1's vector to the features: C = ([x^L, 1]^T [x^L, 1] + λ_ridge I)^{−1} [x^L, 1]^T ŷ. We then use these inferred regression weights C to make a prediction on the second batch of data, normalize the resulting prediction, and compute a final loss against ŷ′. Note that due to the normalization of predictions and targets, this corresponds to a cosine distance (up to an offset and multiplicative factor). The learned update rule is parameterized by θ. In the following section, we denote all instances of θ with a subscript to be separate named learnable parameters of the UnsupervisedUpdate. θ is shared across all instantiations of the unsupervised learning rule (shared across layers). It is this weight sharing that lets us generalize across different network architectures. The computation is split into a few main components: first, there is the forward pass, defined in f(x; φ). Next, there is a "backward" pass, which operates on the hidden states in reverse order to propagate an error signal δ^l back down the network. In this process, a hidden state h^l is created for each layer. These h are tensors with a batch, neuron, and feature index: h^l ∈ R^{B×N_l×λ_hdims}, where λ_hdims = 64. Weight updates to W^l and b^l are then read out from these h^l and other signals found both locally, and in aggregate along both batch and neuron. In backprop, there exists a single scalar error signal that gets backpropagated down the network. In our work, this error signal does not exist as there is no loss being optimized. Instead we have a learned top-down signal, d^L, at the top of the network. Because we are no longer restricted to the form of backprop, we make this quantity a vector for each unit in the network, rather than a scalar: d^L = TopD(x^L; θ_topD). In this work we set λ_deltadims = 32. The architecture of TopD is a neural network that operates along every dimension of x^L.
The specification can be found in G.7. We structure our error propagation similarly to the structure of the backpropagated error signal in a standard MLP, with contributions to the error at every layer in the network (DISPLAYFORM1), where δ^l has the same shape as d^l, and θ_errorPropW ∈ R^{λ_hdims×λ_deltadims} and θ_errorPropB ∈ R^{λ_deltadims}. Note both θ_errorPropW and θ_errorPropB are shared for all layers l. In a similar way to backprop, we move this signal down the network by multiplying by a backward weight matrix (V^l). We do not use the transpose of the forward weight matrix as done in backprop; instead we learn a separate set of weights that are not tied to the forward weights and are updated along with the forward weights as described in G.5. Additionally, we normalize the signal to have a fixed second moment, d̄ (DISPLAYFORM2). The internal h^l ∈ R^{B×N_l×λ_hdims} vectors are computed via DISPLAYFORM3. The architecture of ComputeH is a neural network that operates on every dimension of all the inputs. It can be found in G.8. These definitions are recursive, and are computed in order (DISPLAYFORM4). With these computed, weight updates can be read out (Section G.5). When the corresponding symbols are not defined (e.g. z^0) a zeros tensor with the correct shape is used instead. The following is the implementation of ComputeDeltaWeight (DISPLAYFORM0). For a given layer, l, our weight updates are a mixture of multiple low-rank readouts from h^l and h^{l−1}. These terms are then added together with learnable weights in θ to form a final update. The final update is then normalized and mixed with the previous weights. We update both the forward weights, W^l, and the backward weights, V^l, using the same update rule parameters θ. We show the forward weight update rule here, and drop the backward one for brevity. For convenience, we define a low-rank readout function LowRR that takes h^l-like tensors, and outputs a single lower-rank tensor (DISPLAYFORM1). Here, Θ is a placeholder for the parameters of the given readout. LowRR is defined as DISPLAYFORM2, where λ_gradc = 4 is the rank of the readout matrix (per batch element). This sequence of terms allows the weight to be adjusted as a function of state in the pre- and post-synaptic neurons. They should be viewed as a basis function representation of the way in which the weight changes as a function of the pre- and post-synaptic neurons and the current weight value. We express each of these contributions to the weight update as a sequence of weight update planes, with the ith plane written ∆W^l_i (DISPLAYFORM0). Each of these planes will be linearly summed, with coefficients generated as described in Equation App.57, in order to generate the eventual weight update (DISPLAYFORM1). Additional weight update planes are designed to aid units in remaining decorrelated from each other's activity, and in decorrelating their receptive fields; without terms like this, a common failure mode arises (DISPLAYFORM2). We then normalize, re-weight, and merge each of these weight update planes into a single term, which will be used in the weight update step (DISPLAYFORM0), where θ_mergeW ∈ R^10 (as we have 10 input planes). To prevent pathologies during training, we perform two post-processing steps that prevent the learned optimizer from cheating by increasing its effective learning rate, leading to instability.
We only allow updates which do not decrease the weight matrix magnitude (DISPLAYFORM1), where Ŵ^l is W^l scaled to have unit norm, and we normalize the length of the update (DISPLAYFORM2). To compute changes in the biases, we do a readout from h^l. We put some constraints on this update to prevent the biases from pushing all units into the linear regime, and minimizing learning. We found this to be a possible pathology (DISPLAYFORM3), where θ_Breadout ∈ R^{λ_hdims}. We then normalize the update via the second moment (DISPLAYFORM4). Finally, we define ComputeDeltaWeight as all layers' forward weight updates, backward weight updates, and bias updates (DISPLAYFORM5). This function performs various convolutions over the batch dimension and data dimension. For ease of notation, we use m as an intermediate variable. Additionally, we drop all convolution and batch norm parameterizations. They are all separate elements of θ_topD. We define two 1D convolution operators that act on rank-3 tensors: ConvBatch, which performs convolutions over the zeroth index, and ConvUnit, which performs convolutions along the first index. We define the S argument to be the size of the hidden units, and the K argument to be the kernel size of the 1D convolutions. Additionally, we set the second argument of BatchNorm to be the axis normalized over (DISPLAYFORM6). First, we reshape to add another dimension to the end of x^L (in pseudocode: DISPLAYFORM7). Next, a convolution is performed on the batch dimension with a batch norm and a ReLU nonlinearity. This starts to pull information from around the batch into these channels (DISPLAYFORM8). We set λ_topdeltasize = 64. Next, a series of unit convolutions (convolutions over the last dimension) are performed. These act as compute over information composed from the nearby elements of the batch. These unit dimensions effectively rewrite the batch dimension. This restricts the model to operate on a fixed-size batch (DISPLAYFORM9). Next, a series of 1D convolutions is performed over the batch dimension for more compute capacity. Finally, we convert the representations to the desired dimensions and output (DISPLAYFORM10). With the inputs prepared, we next perform a series of convolutions on batch and unit dimensions (DISPLAYFORM11). The result is then output: ComputeH(·; θ_computeH) = m_11 (App. 104). We set λ_computehsize = 64, which is the inner computation size (DISPLAYFORM12). We trained on a data distribution consisting of tasks sampled uniformly over the following datasets. Half of our training tasks were constructed from a dataset consisting of 1000 font-rendered characters. We resized these to 14x14 black and white images. We call this the glyph dataset. "Alphabet" is an example of such a dataset, consisting of alphabet characters. We used a mixture of 10-, 13-, 14-, 17-, 20-, and 30-way classification problems randomly sampled, as well as sampling from three 10-way classification problems sampled from specific types of images: letters of the alphabet, math symbols, and currency symbols. For half of the random sampling and all of the specialized selection we apply additional augmentation. This augmentation consists of random rotations (up to 360 degrees) and shifts of up to ±5 pixels in the x and y directions. The parameters of the augmentation were inserted into the regression target of the MetaObjective as a curriculum of sorts and to provide diverse training signal.
In addition to the glyph set, we additionally used CIFAR10, resized to 16x16, as well as 10-, 15-, 20-, and 25-way classification problems from Imagenet. Once again we resized to 16x16 for compute reasons. With a dataset selected, we apply additional augmentation with some probability, consisting of the following augmentations: a per-task dropout mask (a fixed mask across all images in that task); a per-example dropout mask (a random mask per image); a permutation sampled from a fixed number of pre-created permutations per class; and a per-image random shift in the x direction. All of these additional augmentations help with larger domain transfer. We employ Adam as our meta-optimizer. We use a learning rate schedule of 3e-4 for the first 100k steps, then 1e-4 for the next 50k steps, then 2e-5 for the remainder of meta-training. We use gradient clipping of norm 5 on minibatches of size 256. We compute our meta-objective by averaging 5 evaluations of the linear regression. We use a ridge penalty of 0.1 for all of this work. When computing truncated gradients, we initially sample the number of unrolled applications of the UnsupervisedUpdate from a uniform distribution. This is the number of steps gradients are backpropagated through. Over the course of 50k meta-training steps we uniformly increase this number. This increases meta-training speed and stability, as large unrolls early in training can be unstable and don't seem to provide any value. For sampling the number of truncated steps (the number of times the above unrolls are performed), we use a shifted normal distribution: a normal distribution with the same mean and standard deviation. We chose this based on the expected distribution of the training step (the φ iteration number) across the cluster of workers. We initially set the standard deviation low, at 20, but slowly increased it over the course of 5000 steps to 20k steps. This slow increase also improved stability and training speed. For each experimental figure, we document the details. The VAE we used consists of 3 layers, size 128, with ReLU activations and batch norm between each layer. We then learn a projection to the mean and log std of size 32. We sample, and use the inverse architecture to decode back to images. We use a quantized normal distribution (once again parameterized as mean and log std) as a posterior. We train with Adam with a learning rate of 1e-4. To isolate the effects of objective function mismatch and overfitting, we both train on the unlabeled training set and evaluate on the labeled training set instead of a validation set. We use a 4-layer, 128-unit architecture with a 32-dimensional embedding for all models. We select performance at 100k training steps for the VAE, and 3k for our learned optimizer. Our supervised learning baseline consists of the same architecture for the base model but with an additional layer that outputs log probabilities. We train with cross entropy loss on a dataset consisting of only 10 examples per class (to match the other numbers). Surprisingly, the noise from batch norm acts as a regularizer, allowing us to avoid a complex early stopping scheme, as test set performance simply plateaus over the course of 10k steps. We train with Adam with a learning rate of 3e-3, selected via a grid over learning rate on test set performance. In this setting, having a true validation set would dramatically lower the amount of labeled data available (only 100 labeled examples), and using the test set only aids in this baseline's performance.
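Putting together the meta-objective details scattered above (closed-form linear regression with a ridge penalty of 0.1, centered and normalized targets, and a cosine-distance loss), a NumPy sketch of a single evaluation might look as follows; in the paper this quantity is additionally averaged over 5 draws. This is an illustration of the description, not the released implementation.

```python
import numpy as np

def meta_objective(feats_a, y_a, feats_b, y_b, ridge=0.1):
    """Few-shot linear-regression meta-objective (sketch).
    feats_*: [K, D] embeddings from the base model; y_*: [K, C] one-hot targets."""
    def prep(y):  # center, then normalize, the regression targets
        y = y - y.mean(axis=0, keepdims=True)
        return y / (np.linalg.norm(y) + 1e-8)
    ya, yb = prep(y_a), prep(y_b)
    # Closed-form ridge regression on batch a (bias via an appended ones column).
    A = np.concatenate([feats_a, np.ones((len(feats_a), 1))], axis=1)
    C = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ ya)
    # Predict on batch b, normalize, and score with a cosine distance.
    B = np.concatenate([feats_b, np.ones((len(feats_b), 1))], axis=1)
    pred = B @ C
    pred = pred / (np.linalg.norm(pred) + 1e-8)
    return 1.0 - np.sum(pred * yb)  # cosine distance up to offset/scale
```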
For the IMDB experiments, we tokenized and selected the top 1K words, with an additional set of tokens for unknown, sentence start, and sentence end. Our encoding consisted of a 1 if the word is present, otherwise 0. We used the same 4-layer, 128-hidden-unit MLP with an additional layer outputting a 32-dimensional embedding. We used ReLU activations and 4 layers of size 128 with an additional layer to 32 units unless otherwise specified by the specific experiment. We trained prototypical networks on either intact or shuffled mini-Imagenet images. For shuffled images, we generated a fixed random permutation for the inputs independently for every instantiation of the base network (or 'episode' in the meta-learning literature). The purpose of shuffling was to demonstrate the inductive bias of this type of meta-learning, namely that it does not generalize across data domain. Note that the base network trained was the same fully connected architecture as that used in this paper (3 layers, size 128, with ReLU activations and batch normalization between layers). Though the original paper used a convolutional architecture, here we swapped it for the fully connected architecture because the tied weights in a convolutional model do not make sense with shuffled pixels.
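The input-shuffling control is simple: a fresh pixel permutation is drawn for each instantiation of the base network and applied consistently to every example that instantiation sees. A sketch, with `make_episode` as a hypothetical task sampler:

```python
import numpy as np

def permuted_episode(make_episode, rng):
    """Draw one few-shot episode and apply a fixed random pixel permutation,
    so only permutation-invariant learning rules can succeed."""
    (x_train, y_train), (x_test, y_test) = make_episode()
    perm = rng.permutation(x_train.shape[-1])   # one permutation per episode
    return (x_train[..., perm], y_train), (x_test[..., perm], y_test)
```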
We learn an unsupervised learning algorithm that produces useful representations from a set of supervised tasks. At test-time, we apply this algorithm to new tasks without any supervision and show performance comparable to a VAE.
1,406
scitldr
There is significant recent evidence in supervised learning that, in the over-parametrized setting, wider networks achieve better test error. In other words, the bias-variance tradeoff is not directly observable when increasing network width arbitrarily. We investigate whether a corresponding phenomenon is present in reinforcement learning. We experiment on four OpenAI Gym environments, increasing the width of the value and policy networks beyond their prescribed values. Our empirical results lend support to this hypothesis. However, tuning the hyperparameters of each network width separately remains as important future work in environments/algorithms where the optimal hyperparameters vary noticeably across widths, confounding the results when the same hyperparameters are used for all widths. A longstanding notion in supervised learning is that, as model complexity increases, test error decreases initially and, eventually, increases again. Intuitively, the idea is that as the size of your hypothesis class grows, the closer you can approximate the ground-truth function with some function in your hypothesis class. At the same time, the larger amount of functions to choose from in your hypothesis class leads to higher estimation error (overfitting) from fitting the finite data sample too closely. This is the essential bias-variance tradeoff in supervised learning. We discuss these tradeoffs in more depth in Section 2.2. However, BID20 found that increasing the width of a single hidden layer neural network leads to decreasing test error on MNIST and CIFAR-10. Since then, there has been a large amount of evidence that wider networks generalize better in a variety of different architectures and hyperparameter settings BID27 BID21 BID15 BID19 BID0 BID24 BID17, once in the over-parametrized setting BID24 BID0. In other words, the bias-variance tradeoff is not observed in this over-parametrized setting, as network width grows BID19. How far can we inductively infer from this? Is this phenomenon also present in deep reinforcement learning or do we eventually see a degradation in performance as we increase network width? In this paper, we present preliminary evidence that this phenomenon is also present in reinforcement learning. For example, using default hyperparameters, we can already see performance increase well past the commonly used default width (FIG0). We test the hypothesis that wider networks (both policy and value) perform monotonically better than their smaller counterparts in policy gradients methods. Of course, we will hit diminishing returns as the network width gets very large, but this is very different from the competing hypothesis that larger networks will overfit more. We are given a training set S = {(x1, y1), (x2, y2), ..., (xm, ym)} of m training examples, where xi ∈ X and yi ∈ Y. Furthermore, Z = X × Y, so S ∈ Z^m. D denotes a distribution over Z, so we have (xi, yi) ∼ D and S ∼ D^m. We use lowercase x and y to denote random variables because of convention in this field. We learn a hypothesis h ∈ H via a learning algorithm A: Z^m → H. We denote a hypothesis learned from training set S as h_S = A(S). Given a loss function ℓ: Y × Y → R, the goal is to minimize the expected risk: R(h) = E_{(x,y)∼D}[ℓ(h(x), y)]. We present a discussion on tradeoffs in model complexity because it does not appear to be much of a focus in the reinforcement learning community. A common way of thinking about the generalization performance of a learner is through the lens of a tradeoff.
For example, when h_S is chosen from a hypothesis class H, R(h_S) can be decomposed into approximation error and estimation error: R(h_S) = E_app + E_est, where E_app = min_{h∈H} R(h) and E_est = R(h_S) − E_app. Shalev-Shwartz & Ben-David (2014, Section 5.2) present this decomposition and frame it as a tradeoff. BID2 describe this as the "well known tradeoff between approximation error and estimation error" and present it in a slightly more lucid way as a decomposition of the excess risk: R(h_S) − R(h*) = (R(h*_H) − R(h*)) + (R(h_S) − R(h*_H)), where R(h*) is the Bayes error and h*_H = arg min_{h∈H} R(h) is the best hypothesis in H. The approximation error can then be interpreted as the distance of the best hypothesis in H from the Bayes classifier, and the estimation error can be interpreted as the average distance of the learned hypothesis from the best hypothesis in H. It is common to associate larger H with smaller approximation error and larger estimation error. The commonly cited universal approximation property of neural networks BID4 BID13 BID16 means that the approximation error goes to 0 as the network width increases; these results do not say anything about estimation error. A similar tradeoff in model complexity is known as the bias-variance tradeoff BID7. Bias is analogous to the approximation error while variance is analogous to the estimation error. This tradeoff is probably even more pervasive (Bishop (2006, Chapter 3.2), BID7, Hastie et al. (2001, Chapter 2.9), Goodfellow et al. (2016, Chapter 5.4.4)). It is common to view the problem of designing a good learning algorithm as choosing a good H that optimizes this tradeoff. Statistical learning theory for supervised learning is given in the i.i.d. setting. That is, examples are independent and identically distributed. This also means the training distribution is the same as the test distribution. In reinforcement learning, training examples are not independent because examples within the same episode depend on each other through the current behavior policy and through the environment's transition dynamics. Training examples are not identically distributed because the policy produces training examples, and the policy changes over time. For the same reason, the training distribution and the test distribution are not completely the same. These differences make it nonobvious that the phenomenon seen in supervised learning would extend to reinforcement learning.
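As a toy illustration of this classical tradeoff (not an experiment from this paper), one can sweep model complexity in a small supervised regression problem and watch train and test error diverge:

```python
import numpy as np

rng = np.random.default_rng(0)
x_tr, x_te = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 500)
f = lambda x: np.sin(3 * x)
y_tr = f(x_tr) + 0.3 * rng.normal(size=x_tr.shape)  # noisy training sample
y_te = f(x_te)

for degree in [1, 3, 9, 15]:
    coeffs = np.polyfit(x_tr, y_tr, degree)          # larger degree = larger H
    tr_err = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    te_err = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree={degree:2d}  train MSE={tr_err:.3f}  test MSE={te_err:.3f}")
# Classically, test error is U-shaped in degree while train error only falls;
# the width experiments below probe whether this U appears in RL.
```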
We choose relatively simple tasks for these experiments partially because they are faster to train on, but more importantly, we choose them because their simplicity lends itself to easier overfitting. In other words, on these tasks, we will see diminishing returns with much smaller networks, so we can test the "very wide networks will not see degraded performance" hypothesis with a much smaller range of network widths. We run each experiment with 5 different random seeds. The policy and value networks are shared. The architecture consists of 2 hidden fully connected layers followed by separate linear transformations: one to yield the policy and one to yield the value. We use 2 hidden layers, rather than just 1, because 2 hidden layers are more common in reinforcement learning.

In CartPole (Fig. 2), we see a lot of evidence for the hypothesis. In both the PPO and A2C experiments, peak performance is reached by width 64, and that level of performance is maintained through width 2048. In the ACER experiment, near-peak performance is reached by width 128, and through width 2048, we see peak performance. Similarly, in Acrobot (Fig. 3), we see even more evidence for the hypothesis. We see peak performance as early as width 16 in PPO, ACER, and A2C. This means that Acrobot is simple enough to only require a network of width 16 (compared to 64 for CartPole). Still, we see peak performance through width 2048 in all 3 learners. In Pendulum (Appendix A), we see more evidence for the hypothesis. The default width network performs distinctly worse than the wider networks. We do not see any degradation of performance through width 2048. We only run PPO with the Pendulum environment because RL Baselines Zoo did not have tuned hyperparameters for the other algorithms.

In the MountainCar environment, we see the first hint of what could be evidence against the hypothesis (Fig. 4). PPO (left) performance begins to degrade at width 2048, ACER (center) performance begins to degrade at width 512, and we see a sharp drop in performance from width 1024 to width 2048 in A2C (right). RL algorithms are known to be highly sensitive to hyperparameter settings BID10 BID14, especially learning rate BID11. We believe this performance degradation is due to more variability across widths of the optimal hyperparameters on MountainCar (compared to CartPole, Acrobot, and Pendulum). In order to fairly compare all the widths, we would like the hyperparameters for each of them to be optimal. BID6 study test error when scaling network width in supervised learning, and they scale the learning rate as $h^{-1.5}$, where $h$ is the network width. This scaling is motivated by making the number of steps to convergence independent of width, but it does not necessarily make the learning rate for each network optimal. Because learning rate is such an important and sensitive hyperparameter in reinforcement learning BID11, we try scaling the learning rate $\alpha$ with both of the following schemes:

$$\alpha \leftarrow \min\big(\alpha^*_{64},\ \alpha^*_{64}\,(h/64)^{-1}\big) \quad \text{and} \quad \alpha \leftarrow \min\big(\alpha^*_{64},\ \alpha^*_{64}\,(h/64)^{-1.5}\big),$$

where $\alpha^*_{64}$ is the learning rate that was tuned for network width 64 (pulled from RL Baselines Zoo). We see that scaling the learning rate as $h^{-1}$ (Fig. 5) and $h^{-1.5}$ (Appendix C, Fig. 8) actually makes the largest networks perform worse, indicating that this scaling is not useful for comparing networks with optimal hyperparameters.
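For clarity, a small helper implementing the two scaling schemes above; the cap via min() is our reconstruction of the garbled source formula (it simply prevents the scaled rate from exceeding the tuned width-64 value) and should be treated as an assumption.

```python
# Learning-rate scaling across widths (sketch; alpha_64 comes from RL Baselines Zoo,
# and the min() cap is an assumption reconstructed from the text).
def scaled_lr(alpha_64: float, width: int, power: float = 1.0) -> float:
    """Scale the width-64 learning rate as (h/64)^-power, capped at alpha_64."""
    return min(alpha_64, alpha_64 * (width / 64) ** (-power))

# Example: the h^-1 and h^-1.5 schemes evaluated at a few widths.
for h in [64, 256, 1024, 2048]:
    print(h, scaled_lr(2.5e-4, h, power=1.0), scaled_lr(2.5e-4, h, power=1.5))
```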
We present these scalings on MountainCar because it was the environment that did not look like the others, but the scalings on CartPole and Acrobot are in Appendix D.

The phenomenon in supervised learning that motivated this work is that, in the over-parametrized setting, increasing network width leads to monotonically lower test error (no U curve). We find a fair amount of evidence of this phenomenon extending to reinforcement learning in our preliminary experiments (namely CartPole, Acrobot, and Pendulum). However, we also saw that performance did consistently degrade in the MountainCar experiments. We believe this to be because that environment is more sensitive to hyperparameters; since the hyperparameters were chosen using width 64 and then used for all of the other widths, the hyperparameters are likely not optimal for the other widths like they are for width 64. The MountainCar environment exaggerates this suboptimality more than the other 3 environments. The main next experiments we plan to run will use an automated tuning procedure that chooses the hyperparameters for each width individually. We believe this protocol will yield MountainCar results that look much more like the CartPole and Acrobot results. We then plan to replicate these findings across more learning algorithms and more environments.

A. Pendulum
Over-parametrization in width seems to help in deep reinforcement learning, just as it does in supervised learning.
1,407
scitldr
Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. To this end, we present a hierarchical model and a training method (CZ-GEM) that leverages some of the recent developments in likelihood-based and likelihood-free generative models. We show that by formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled. This is achieved without compromising on the quality of the generated samples. Our approach is simple, general, and can be applied both in supervised and unsupervised settings.

Consider the following scenario: a hunter-gatherer walking in the African Savannah some 50,000 years ago notices a lioness sprinting out of the bush towards her. In a split second, billions of photons reach her retinas, carrying an enormous amount of information: the shade of the lioness' fur, the angle of its tail, the appearance of every bush in her field of view, the mountains in the background and the clouds in the sky. Yet at this point there is a very small number of attributes which are of importance: the type of the charging animal, its approximate velocity and its location. The rest are just details. The significance of the concept that the world, despite its complexity, can be described by a few explanatory factors of variation, while ignoring the small details, cannot be overestimated.

In machine learning there is a large body of work aiming to extract low-dimensional, interpretable representations of complex, often visual, data. Interestingly, many of the works in this area are associated with developing generative models. The intuition is that if a model can generate a good approximation of the data then it must have learned something about its underlying representation. This representation can then be extracted either by directly inverting the generative process or by extracting intermediate representations of the model itself. Clearly, just learning a representation, even if it is low-dimensional, is not enough. The reason is that while there could be many ways to compress the information captured in the data, allowing good enough approximations, there is no reason to a priori assume that such a representation is interpretable and disentangled in the sense that by manipulating certain dimensions of the representation one can control attributes of choice, say the pose of a face, while keeping other attributes unchanged. The large body of work on learning disentangled representations tackles this problem in several settings: fully supervised, weakly supervised and unsupervised, depending on the available data. Ideally, we would like to come up with an unsupervised generative model that can generate samples which approximate the data to a high level of accuracy while also giving rise to a disentangled and interpretable representation. In the last decade two main approaches have captured most of the attention: Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs).
In their original versions, both GANs and VAEs were trained in an unsupervised manner and gave rise to entangled representations.

Figure 1: Changing the azimuth of chairs in CGAN and CZ-GEM while holding Z constant; (a) chair rotation generated by CGAN, (b) chair rotation generated by CZ-GEM. Unlike CZ-GEM, C and Z are clearly entangled in CGAN, as changing C also changes the type of chair even though Z is held constant.

Over the years, many methods to improve the quality of the generated data as well as the disentanglement of the representations have been suggested. By and large, GANs are better than VAEs in the quality of the generated data while VAEs learn better disentangled representations, in particular in the unsupervised setting. In this paper, we present a framework for disentangling a small number of control variables from the rest of the latent space, which accounts for all the additional details, while maintaining a high quality of the generated data. We do that by combining VAE and GAN approaches, thus enjoying the best of both worlds. The framework is general and works in both the supervised and unsupervised settings.

Let us start with the supervised case. We are provided with paired examples $(x, c)$ where $x$ is the observation and $c$ is a control variate. Crucially, there exists a one-to-many map from $c$ to the space of observations, and there are other unobserved attributes $z$ (or noise) that together completely define $x$. For instance, if $x$ were an image of a single object, $c$ controls the orientation of the object relative to the camera and $z$ could represent object identity, texture or background. Our goal is to learn a generative model $p_\theta(x|c, z)$ that fulfills two criteria: (1) if we were learning models of images, we would like the generated images to look realistic and match the true conditional distribution $p(x|c)$; (2) the posterior factorizes, $p(c, z|x; \theta) = p(c|x; \theta)\,p(z|x; \theta)$, i.e. we would like the control variate to be disentangled from the noise. For example, changing the orientation of the object should not change the identity under our model. This problem setup can occur under many situations such as learning approximate models of simulators, 3D reconstructions, speaker recognition (from speech), and even real-world data processing in the human brain as in the hunter-gatherer example above.

We argue that a naive implementation of a graphical model as shown in Figure 2 (left), e.g. by a conditional GAN, does not satisfy Criterion 2. In this model, when we condition on $x$, due to d-separation, $c$ and $z$ could become dependent, unless additional constraints are posed on the model. This effect is demonstrated in Figure 1(a). To overcome this we split the generative process into two stages by replacing C with a subgraph (C → Y) as shown in Figure 2 (center). First, we generate a crude approximation $y$ of the data which only takes $c$ into account. The result is a blurry average of the data points conditioned on $c$ (see Figure 2, right). We then feed this crude approximation into a GAN-based generative model which adds the rest of the details conditioned on $z$. We call this framework CZ-GEM. The conditioning on $z$ in the second stage must be done carefully to make sure that it does not get entangled with $y$. To that end we rely on architectural choices and normalization techniques from the style transfer literature. The result is a model which generates images of high quality while disentangling $c$ and $z$, as can be clearly seen in Figure 1(b).
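The two-stage generative process just described can be summarized in a few lines; this is a minimal sketch in which `stage1` and `stage2` are hypothetical stand-ins for the trained C → Y decoder and the (Y, Z) → X conditional generator:

```python
# Sketch of CZ-GEM sampling (stage1/stage2 are hypothetical trained modules).
import torch

def sample_cz_gem(stage1, stage2, c, z):
    with torch.no_grad():
        y = stage1(c)       # C -> Y: crude, blurry rendering driven by control c only
        x = stage2(y, z)    # (Y, Z) -> X: GAN-based stage adds z-conditioned details
    return x
```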
Additionally, in the unsupervised setting, when the labels $c$ are not available, (C → Y) can be realized by β-VAE, a regularized version of VAE which has been shown to learn a disentangled representation of its latent variables. In Section 3 we provide implementation details for both the supervised and unsupervised versions. We summarize our two main contributions: (1) we break down the architecture to model an intermediate representation that lends itself to interpretability and disentanglement, and then (carefully) use a GAN-based approach to add the rest of the details, thus enjoying a superior image generation quality compared to VAEs; (2) we show that our model can be combined easily with common methods for discovering disentangled representations, such as β-VAE, to extract $c$ and treat it as labels to generate images that do not compromise on generative quality.

Figure 2: On the left, a conditional GAN (CGAN) model. CZ-GEM in the middle replaces node C with a subgraph (C → Y) that is trained independently of the rest of the model. This subgraph learns to only partially render the observation. As such, Z comes at a later stage of the rendering pipeline to add details to Y. As an example, consider the rightmost graph where the observation is made up of different types of chairs in different poses. Let the pose be controlled by C and the type (identity) be explained by Z. Then in step one of CZ-GEM we learn the pose relationship between C and X via the subgraph, giving rise to a blurry chair in the correct pose. Once the pose is learned, in the second step, the approximate rendering Y is transformed into X by allowing Z to add identity-related details to the blurry image.

Generative adversarial networks (GAN) represent the current state of the art in likelihood-free generative modeling. In GANs, a generator network $G_\theta$ is trained to produce samples that can fool a discriminator network $D_\omega$, which is in turn trained to distinguish samples from the true data distribution $p(x)$ and the generated samples:

$$\min_\theta \max_\omega \; \mathbb{E}_{x \sim p(x)}[\log D_\omega(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D_\omega(G_\theta(z)))].$$

Here, $p_z$ is usually a low-dimensional easy-to-sample distribution like a standard Gaussian. A variety of tricks and techniques need to be employed to solve this min-max optimization problem. For our models, we employ architectural constraints proposed by DC-GAN that have been widely successful in ensuring training stability and improving generated image quality. Conditional GANs (CGAN) adapt the GAN framework for generating class-conditional samples by jointly modeling the observations with their class labels. In CGAN, the generator network $G_\theta$ is fed class labels $c$ to produce fake conditional samples, and the discriminator $D_\omega$ is trained to discriminate between samples from the joint distribution of true conditionals and true labels, $p(x|c)p(c)$, and the fake conditionals and true labels, $p_\theta(x|c)p(c)$.

While not the main focus of this paper, we present a novel information-theoretic perspective on CGANs. Specifically, we show that CGAN is trained to maximize a lower bound on the mutual information between the observation and its label while simultaneously minimizing an upper bound on it: by training a discriminator $D_\omega$ to approximate the log-ratio of the true and generated data densities, i.e. $D_\omega \approx \log p(x, c)/p_\theta(x, c)$, the objective in effect minimizes the gap between these bounds on the generative mutual information $I_{g,\theta}(x, c)$, where $q(c|x, \theta)$ is the posterior under the learned model. The detailed derivation, including the exact bound, is provided in Appendix A.1.
Notice that at the limit, the model learns exactly the marginal distribution of $x$ and the posterior $q(c|x)$, and the KL terms vanish.

Variational autoencoders (VAE) represent a class of likelihood-based deep generative models that have recently been extensively studied and used in representation learning tasks. Consider a latent variable model where observation $X$ is assumed to be generated from some underlying low-dimensional latent feature space $Z$. VAE models learn the conditional distribution $p(x|z)$ using a deep neural network (parameterized by $\theta$) called the decoder network. Another deep neural network (parameterized by $\phi$), called the encoder, models the posterior distribution $p(z|x)$. The encoder and decoder networks are trained using amortized variational inference to maximize a variational lower bound on the evidence likelihood (ELBO). It has been shown that by regularizing the variational posterior approximation of $p(z|x)$ to be close to the prior distribution $p(z)$ in KL-divergence, the model is encouraged to learn disentangled representations, i.e. a posterior distribution that is factorized over the dimensions. This model is called β-VAE. We note that information-bottleneck-based methods for disentangled representation learning, such as β-VAE, severely compromise the generative quality.

Batch-Normalization (BN) plays a crucial role in ensuring the stability of GAN training. However, as we discuss in Section 3, it is not suitable for our purposes. Recently, it has been shown that Instance Normalization (IN) and its variant Adaptive Instance Normalization (AdaIN) can be particularly useful for image generation and stylization. IN normalizes each convolutional channel per training sample, while AdaIN modifies this normalization to be a function of an additional variable $z$ (usually style in style transfer). The final transformation applied by AdaIN is

$$\text{AdaIN}(x, z) = \gamma(z)\,\frac{x - \mu(x)}{\sigma(x)} + \beta(z), \qquad (1)$$

where $\mu(x) = \frac{1}{HW}\sum_{h,w} x_{nhwc}$ and $\sigma(x) = \sqrt{\frac{1}{HW}\sum_{h,w}\big(x_{nhwc} - \mu(x)\big)^2 + \epsilon}$. Here $\gamma(z)$ and $\beta(z)$ are learned functions of $z$ that could be parameterized by a neural network, usually a fully connected layer.

In Section 1, we provided a high-level description of our approach. We will now provide a detailed description of how the two components of CZ-GEM, the subgraph C → Y and the conditional generative model (Y, Z) → X, are implemented and trained in practice. Figure 3 provides an implementation schematic of our proposed framework. If C is known a priori, then learning the subgraph C → Y reduces to the regression problem that minimizes $\|x_c - y_c\|^2$. In practice, since our observations are images, this subgraph is realized using a deep transposed-convolution-based decoder network and is trained to learn the map between C and Y. This is similar to the recent work of Srivastava et al. (2019b). We emphasize that this network is trained independently of the rest of the model. When C is not observed, an information-bottleneck method can be used to discover disentangled generative control factors; in our implementation we use β-VAE (see Section 2.2). One drawback of these information-bottleneck-based methods is that they compromise on the generative quality. This is where the GAN likelihood-free approach in the second stage comes into play. In fact, even if the output of the first stage (i.e. the intermediate image Y in Figure 2) is of very low generative quality, the final image is of high quality since the second stage explicitly adds details using a state-of-the-art GAN method.
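To make equation 1 concrete, here is a minimal PyTorch-style sketch of an AdaIN layer of the kind used in the second stage; the layer sizes and the choice of a single linear layer for γ(z) and β(z) are illustrative assumptions rather than the exact reference implementation.

```python
# Minimal AdaIN sketch (assumes x: (N, C, H, W), z: (N, z_dim)).
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, num_channels: int, z_dim: int, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Linear(z_dim, num_channels)  # gamma(z) in equation 1
        self.beta = nn.Linear(z_dim, num_channels)   # beta(z) in equation 1
        self.eps = eps

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Per-sample, per-channel statistics over the spatial dimensions.
        mu = x.mean(dim=(2, 3), keepdim=True)
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_norm = (x - mu) / torch.sqrt(var + self.eps)
        g = self.gamma(z)[:, :, None, None]
        b = self.beta(z)[:, :, None, None]
        return g * x_norm + b
```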
In Section 5 we show how a simple VAE with a very narrow information bottleneck (2-6 dimensions) can be used within CZ-GEM to discover C in an unsupervised fashion without compromising on generation quality.

Vanilla GANs can only model the marginal data distribution, i.e. they learn $p_\theta$ to match $p_x$, and in doing so they use the input to the generator ($G_\theta$) only as a source of stochasticity. Therefore we start with a conditional GAN model instead, to preserve the correspondence between Y and X. As shown in Section 2.1, this framework trains $G_\theta$ such that the observation X is maximally explained by the conditioning variable Y. One major deviation from the original model is that the conditioning variable in our case is of the same type and dimensionality as the observation. That is, it is an image, albeit a blurry one. This setup has previously been used in the context of image-to-image translation. Incorporating Z requires careful implementation due to two challenges. First, trivially adding Z to the input along with Y invokes d-separation, and as a result Y and Z can get entangled. Intuitively, Z is adding high-level details to the intermediate representation Y. We leverage this insight as an inductive bias, by incorporating Z at higher layers of the network rather than just feeding it as an input to the bottom layer. A straightforward implementation of this idea does not work, though. The reason is that BatchNorm uses batch-level statistics to normalize the incoming activations of the previous layer to speed up learning. In practice, mini-batch statistics are used to approximate batch statistics. This adds internal stochasticity to the generator, causing it to ignore any externally added noise, such as Z. An elegant solution to this second challenge comes in the form of adaptive instance normalization (see Section 2.3). It not only removes any dependency on the batch statistics but also allows for the incorporation of Z in the normalization process itself. For this reason, it has previously been used in style transfer tasks. We replace all instances of BatchNorm in the generator with Adaptive InstanceNorm. We then introduce Z to the generative process using equation 1. $\gamma(z)$ and $\beta(z)$ are parameterized as a simple feed-forward network and are applied at each AdaIN layer of the generator.

Disentangled representation learning has been widely studied in recent years, both in the supervised and unsupervised settings. In the supervised case, prior works have emphasized the use of inductive biases and weak supervision instead of fully unsupervised methods for disentangled representation learning, and have successfully shown that including inductive biases, such as an explicit 3D representation, leads to better performance. Their inductive bias comes in the form of a learned 3D transformation pipeline. In comparison, CZ-GEM is much simpler and smaller in design and applies to the general setting where the data is determined by control and noise variables. In addition, it can be used in both supervised and unsupervised settings and does not rely on knowledge of 3D transformations. Manually disentangled generative models like the 3D morphable model have been built for faces. They are powerful in terms of generalization, but there is a big gap between those synthetic images and real-world face images. In addition, those models are built in a highly supervised fashion from 3D scans, and the approach is limited by the correspondence assumption, which does not scale to more complex objects like chairs.
We use a 3D morphable model to generate our synthetic face dataset and show that we can disentangle pose variation from synthetic and real 2D images.

In this section, we provide a comprehensive set of quantitative and qualitative results to demonstrate how CZ-GEM is clearly able to not only disentangle C from Z in both supervised and unsupervised settings but also ensure that independent components of C stay disentangled after training. Additionally, we show how in unsupervised settings CZ-GEM can be used to discover disentangled latent factors when C is not explicitly provided. We evaluate CZ-GEM on a variety of image generation tasks which naturally involve observed attributes C and unobserved attributes Z. To that end, we generate three 3D image datasets of faces, chairs, and cars with explicit control variables. The chairs and cars datasets are derived from ShapeNet. We sample 100k images covering the full yaw variation and a pitch variation of 90 degrees. We used the straight chair subcategory with 1968 different chairs and the sedan subcategory with 559 different cars. We used Blender to render the ShapeNet meshes, scripted with the Stanford ShapeNet renderer. For faces, we generated 100k images from the Basel Face Model 2017. We sample shape and color (first 50 coefficients), expressions (first 5 coefficients), pose (yaw -90 to 90 degrees uniformly, pitch and roll according to a Gaussian with variance of 5 degrees) and the illumination from the Basel Illumination Prior; for the generation of the faces dataset, we use the software provided with the model. For the stated datasets we have complete access to C, but we also include unsupervised results on CelebA with unconstrained real images. All our datasets are built from publicly available data and tools. We use the DCGAN architecture for all neural networks involved in all the experiments in this work and provide a reference implementation with exact architecture and hyperparameter settings at https://github.com/AnonymousAuthors000/CZ-GEM.

In the supervised setting we compare CZ-GEM to CGAN. We quantitatively compare the two methods to ensure that independent components of C stay disentangled post learning. Furthermore, we qualitatively compare their abilities to disentangle C and Z. And finally, we compare the quality of the samples that the models generate. For chairs and cars, C contains only the pose variables and all other variations are explained by Z. For faces, C contains, in addition to pose, the first 4 principal directions of shape variation. To quantify how well labels are preserved, we train a regressor $f(x)$ to predict C from images and report its MSE on generated samples for all three datasets (Table 1). We also include the training error (i.e. the MSE of the regressor on the real data) for comparison. The results show that CGAN and CZ-GEM are comparable in preserving the label information, but as we show below, only CZ-GEM does that while ensuring that C and Z remain disentangled.

To qualitatively evaluate the level of disentanglement between C and Z, we vary each individual dimension of C over its range while holding Z constant. We plot the generated images for both models on the car and chair datasets in Figure 4. Notice that CZ-GEM allows us to vary the control variates without changing the identity of the object, whereas CGAN does not. In addition, we find that for CGAN, the noise Z provides little to no control over the identity of the chairs. This is potentially due to the internal stochasticity introduced by BatchNorm. The last rows of the CZ-GEM figures provide the visualization of Y.
It can be seen how Y clearly preserves C (the pose information) while averaging out the identity-related details. We also qualitatively evaluate CZ-GEM on the more challenging faces dataset, which includes 10 control variates. As shown in Figure 9 in the appendix, CZ-GEM is not only able to model the common pose factors such as rotation and azimuth but also accurately captures the principal shape components of the Basel face model that approximate the width of the forehead, the width of the jaw, etc. Compared to CGAN, CZ-GEM does a qualitatively better job at keeping the identity constant. Finally, in order to ensure that our method does not compromise the generative quality, we evaluate the Inception score on all three datasets. The Inception score has been widely used to measure the diversity and the generative quality of GANs. As shown in Table 2, unlike CGAN, CZ-GEM does not degrade the image quality.

We now test the performance of CZ-GEM in the unsupervised setting, where disentangled components of C need to be discovered, using β-VAE, as part of learning the mapping C → Y. For our purpose, we use a simple version of the original β-VAE method with a very narrow bottleneck (6D for faces and 2D for cars and chairs) to extract C. The latent traversals for the faces dataset are presented in Figure 5. Unsupervised discovery is able to recover the rotation as well as the translation variation present in the dataset. For comparison, we evaluate InfoGAN and present the results in Figure 6, where it is evident that CZ-GEM clearly outperforms InfoGAN on both disentanglement and generative quality. More traversal results are provided in the appendix. We further test our method on the CelebA dataset, where pose information is not available. This traversal plot is shown in Figure 7. Traversal plots for the cars and chairs datasets are provided in Appendix Figures 12 and 13.

We present a simple yet effective method of learning representations in deep generative models in the setting where the observation is determined by a control variate C and a noise variate Z. Our method ensures that in the learned representation both C and Z are disentangled, as well as the components of C themselves. This is done without compromising the quality of the generated samples. In future work, we would like to explore how this method can be applied to input with multiple objects.

Apart from the MSE-based estimator reported in Table 1, we report an additional evaluation measure. We use the same regressor $f(x)$ trained for Table 1, but we report the Pearson correlation coefficient $r$ between the predicted label and the true label, $r(c, f(G_\theta(c, z)))$, for each dimension of C. A comparison of CZ-GEM and CGAN on the face dataset is shown in Figure 9. CGAN not only produces blurry faces but also shows more undesired identity changes. In order to show the shape variation clearly, we provide a zoomed-in view in Figure 10. We provide additional results for the supervised and unsupervised settings on the chair dataset in Figure 11 and Figure 12, respectively. The observation is the same as before: CZ-GEM varies the control variables without changing the shape of the chairs. In the first row of Figure 11, the legs of the chairs are visually indistinguishable, showing an excellent disentanglement between C and Z. For the results in the unsupervised setting shown in Figure 12, CZ-GEM is able to disentangle the rotation of chairs without any labels. Additional latent traversal results of CZ-GEM in the unsupervised setting are provided in Figure 13.
The model is able to capture the rotation, but the translation is not very smooth.
Hierarchical generative model (hybrid of VAE and GAN) that learns a disentangled representation of data without compromising the generative quality.
1,408
scitldr
Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model, consisting of convolutional character and word encoders and a long short term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than the best performing models. We carry out incremental active learning during the training process and are able to nearly match state-of-the-art performance with just 25% of the original training data.

Over the past few years, papers applying deep neural networks (DNNs) to the task of named entity recognition (NER) have successively advanced the state-of-the-art BID7 BID17 BID24 BID6 BID48. However, under typical training procedures, the advantages of deep learning diminish when working with small datasets. For instance, on the OntoNotes-5.0 English dataset, whose training set contains 1,088,503 words, a DNN model outperforms the best shallow model by 2.24% as measured by F1 score BID6. However, on the comparatively small CoNLL-2003 English dataset, whose training set contains 203,621 words, the best DNN model enjoys only a 0.4% advantage. To make deep learning more broadly useful, it is crucial to reduce its training data requirements.

Generally, the annotation budget for labeling is far less than the total number of available (unlabeled) samples. For NER, getting unlabeled data is practically free, owing to the large amount of content that can be efficiently scraped off the web. On the other hand, it is especially expensive to obtain annotated data for NER since it requires multi-stage pipelines with sufficiently well-trained annotators BID19 BID5. In such cases, active learning offers a promising approach to efficiently select the set of samples for labeling. Unlike the supervised learning setting, in which examples are drawn and labeled at random, in the active learning setting, the algorithm can choose which examples to label. Active learning aims to select a more informative set of examples, in contrast to supervised learning, which is trained on a set of randomly drawn examples. A central challenge in active learning is to determine what constitutes "more informative" and how the active learner can recognize this based on what it already knows. The most common approach is uncertainty sampling, in which the model preferentially selects examples for which its current prediction is least confident. Other approaches include representativeness-based sampling, where the model selects a diverse set that represents the input space without adding too much redundancy.

In this work, we investigate practical active learning algorithms on lightweight deep neural network architectures for the NER task. Training with active learning proceeds in multiple rounds. Traditional active learning schemes are expensive for deep learning since they require complete retraining of the classifier with newly annotated samples after each round. In our experiments, for example, the model must be retrained 54 times.
Because retraining from scratch is not practical, we instead carry out incremental training with each batch of new labels: we mix newly annotated samples with the older ones, and update our neural network weights for a small number of epochs, before querying for labels in a new round. This modification drastically reduces the computational requirements of active learning methods and makes it practical to deploy them. We further reduce the computational complexity by selecting a lightweight architecture for NER. We propose a new CNN-CNN-LSTM architecture for NER consisting of a convolutional character-level encoder, a convolutional word-level encoder, and a long short term memory (LSTM) tag decoder. This model handles out-of-vocabulary words gracefully and, owing to its greater reliance on convolutions (vs. recurrent layers), trains much faster than other deep models while performing competitively. We introduce a simple uncertainty-based heuristic for active learning with sequence tagging. Our model selects those sentences for which the length-normalized log probability of the current prediction is the lowest. Our experiments with the OntoNotes 5.0 English and Chinese datasets demonstrate results comparable to the Bayesian active learning by disagreement method. Moreover, our heuristic is faster to compute since it does not require multiple forward passes. On the OntoNotes-5.0 English dataset, our approach matches 99% of the F1 score achieved by the best deep models trained in a standard, supervised fashion despite using only 24.9% of the data. On the OntoNotes-5.0 Chinese dataset, we match 99% performance with only 30.1% of the data. Thus, we are able to achieve state-of-the-art performance with a drastically lower number of samples.

The use of DNNs for NER was pioneered by BID7, who proposed an architecture based on temporal convolutional neural networks (CNNs) over the sequence of words. Since then, many papers have proposed improvements to this architecture. BID17 proposed to replace the CNN encoder in BID7 with a bidirectional LSTM encoder, while BID24 and BID6 introduced hierarchy into the architecture by replacing hand-engineered character-level features in prior works with additional bidirectional LSTM and CNN encoders, respectively. In other related work, BID30 and BID34 pioneered the use of recurrent neural networks (RNNs) for decoding tags. However, most recent competitive approaches rely upon CRFs as the decoder BID24 BID6 BID48. In this work, we demonstrate that LSTM decoders outperform CRF decoders and are faster to train when the number of entity types is large.

While the learning-theoretic properties of active learning algorithms are well-studied BID10 BID3 BID1 BID47, classic algorithms and guarantees cannot be generalized to DNNs, which are currently the state-of-the-art techniques for NER. Owing to the limitations of current theoretical analysis, more practical active learning applications employ a range of heuristic procedures for selecting examples to query. For example, BID44 suggests a margin-based selection criterion, while BID39 and BID40 combine multiple criteria for NLP tasks. BID8 explores the application of the least confidence criterion for linear CRF models on sequence prediction tasks. For a more comprehensive review of the literature, we refer to BID38 and BID35. While DNNs have achieved impressive empirical results across diverse applications BID22 BID29, active learning approaches for these models have yet to be well studied, and most current work addresses image classification.
BID45 claims to be the first to study active learning for image classification with CNNs and proposes methods based on uncertainty-based sampling, while BID18 shows that sampling based on a Bayesian uncertainty measure can be more advantageous. However, to our knowledge, prior to this work, deep active learning for sequence tagging tasks, which often have a structured output space and variable-length inputs, has not been studied.

Most active learning methods require frequent retraining of the model as new labeled examples are acquired. Therefore, it is crucial that the model can be efficiently retrained. On the other hand, we would still like to reach a level of performance rivaling state-of-the-art DNNs. To accomplish this, we first identify that many DNN architectures for NER can be decomposed into three components: 1) the character-level encoder, which extracts features for each word from characters, 2) the word-level encoder, which extracts features from the surrounding sequence of words, and 3) the tag decoder, which induces a probability distribution over sequences of tags. This conceptual framework allows us to view a variety of DNNs from a unified perspective; see Table 1. Owing to the superior computational efficiency of CNNs over LSTMs, we propose a lightweight neural network architecture for NER, which we name CNN-CNN-LSTM and describe below.

We represent each input sentence as follows. First, special [BOS] and [EOS] tokens are added at the beginning and the end of the sentence, respectively. In order to batch the computation of multiple sentences, sentences of similar length are grouped together into buckets, and [PAD] tokens are added at the end of sentences to make their lengths uniform inside the bucket. We follow an analogous procedure to represent the characters in each word. For example, the sentence 'Kate lives on Mars' is formatted as shown in TAB2. The formatted sentence is denoted as $\{x_{ij}\}$, where $x_{ij}$ is the one-hot encoding of the $j$-th character in the $i$-th word.

Character-Level Encoder: For each word $i$, we use CNNs BID25 to extract character-level features $w^{\text{char}}_i$ (FIG0). While an LSTM recurrent neural network BID16 slightly outperforms a CNN as a character-level encoder, the improvement is not statistically significant, and the computational cost of LSTM encoders is much higher than that of CNNs (see Section 5, and BID37 for a detailed analysis). We apply ReLU nonlinearities BID32 and dropout BID41 between CNN layers, and include a residual connection between the input and output of each layer BID14. So that our representation of the word is of fixed length, we apply max-pooling to the outputs of the topmost layer of the character-level encoder BID20.

Word-Level Encoder: To complete our representation of each word, we concatenate its character-level features with $w^{\text{emb}}_i$, a latent word embedding corresponding to that word:

$$w^{\text{full}}_i = \big[w^{\text{char}}_i;\, w^{\text{emb}}_i\big].$$

We initialize the latent word embeddings with word2vec training BID31 and then update the embeddings over the course of training. In order to generalize to words unseen in the training data, we replace each word with a special [UNK] (unknown) token with 50% probability during training, an approach that resembles the word-drop method due to BID24. Given the sequence of word-level input features $w^{\text{full}}_1, \dots, w^{\text{full}}_n$, the word-level encoder computes contextual features $h^{\text{Enc}}_1, \dots, h^{\text{Enc}}_n$ for each word position in the sentence using a CNN. In FIG3, we depict an instance of our architecture with two convolutional layers and kernels of width 3.
We concatenate the representations $h^{(l)}_i$ produced at each convolutional layer $l$ to form the encoder output $h^{\text{Enc}}_i$ at each position. LSTM RNNs can also perform word-level encoding BID17, and models with LSTM word-level encoding give a slight (but not significant) boost over CNN word-level encoders in terms of F1 score (see Section 5). However, CNN word-level encoders are considerably faster BID42, which is crucial for the iterative retraining in our active learning scheme.

Tag Decoder: The tag decoder induces a probability distribution over sequences of tags, conditioned on the word-level encoder features: $P\big(y_2, y_3, \dots, y_{n-1} \mid \{h^{\text{Enc}}_i\}\big)$ (positions 1 and $n$ correspond to the [BOS] and [EOS] tokens). The chain CRF BID23 is a popular choice of tag decoder, adopted by most modern DNNs for NER:

$$P\big(y_2, \dots, y_{n-1} \mid \{h^{\text{Enc}}_i\}\big) \;\propto\; \exp\Big(\sum_i \big\{W h^{\text{Enc}}_i + b\big\}_{y_i} + A_{y_{i-1}, y_i}\Big),$$

where $W$, $A$, $b$ are learnable parameters, and $\{\cdot\}_{y_i}$ refers to the $y_i$-th coordinate of the vector. To compute the partition function of this model, which is required for training, dynamic programming is usually employed, and its time complexity is $O(nT^2)$, where $T$ is the number of entity types BID7.

Alternatively, we use an LSTM RNN for the tag decoder, as depicted in FIG4. At the first time step, the [GO] symbol is provided as $y_1$ to the decoder LSTM. At each time step $i$, the LSTM decoder computes $h^{\text{Dec}}_{i+1}$, the hidden state for decoding word $i+1$, using the last tag $y_i$, the current decoder hidden state $h^{\text{Dec}}_i$, and the learned representation of the next word $h^{\text{Enc}}_{i+1}$:

$$h^{\text{Dec}}_{i+1} = \text{LSTM}\big(y_i,\ h^{\text{Dec}}_i,\ h^{\text{Enc}}_{i+1}\big).$$

Using a softmax loss function, $y_{i+1}$ is decoded; this is further fed as an input to the next time step. Since this is a locally normalized model BID0, it does not require the costly computation of a partition function, and it allows us to significantly speed up training compared to using CRFs. Also, we observed that while it is computationally intractable to find the best sequence of tags with an LSTM decoder, greedily decoding tags from left to right yields performance comparable to the chain CRF decoder (see Appendix A). While the use of RNN tag decoders has been explored BID30 BID34 BID49, we demonstrate for the first time that models using RNNs instead of CRFs for the tag decoder can achieve state-of-the-art performance. See Section 5.
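As an illustration of this decoding scheme, here is a minimal PyTorch-style sketch of left-to-right greedy decoding with an LSTM tag decoder; the module structure, dimensions, and the use of a learned [GO] embedding are our assumptions rather than the exact reference implementation.

```python
# Greedy left-to-right LSTM tag decoding (sketch; dims are illustrative).
import torch
import torch.nn as nn

class GreedyTagDecoder(nn.Module):
    def __init__(self, enc_dim: int, tag_dim: int, hidden_dim: int, num_tags: int):
        super().__init__()
        self.tag_emb = nn.Embedding(num_tags + 1, tag_dim)  # last index = [GO]
        self.cell = nn.LSTMCell(enc_dim + tag_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, h_enc: torch.Tensor) -> list:
        # h_enc: (seq_len, enc_dim) encoder features for one sentence.
        go_idx = torch.tensor(self.tag_emb.num_embeddings - 1)
        y_prev = self.tag_emb(go_idx)                  # embedding of [GO]
        h = h_enc.new_zeros(1, self.cell.hidden_size)
        c = h_enc.new_zeros(1, self.cell.hidden_size)
        tags = []
        for i in range(h_enc.size(0)):
            inp = torch.cat([h_enc[i], y_prev], dim=-1).unsqueeze(0)
            h, c = self.cell(inp, (h, c))
            y_i = self.out(h).argmax(dim=-1)           # greedy tag choice
            tags.append(int(y_i))
            y_prev = self.tag_emb(y_i).squeeze(0)      # feed tag to next step
        return tags
```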
Labeling data for NER usually requires manual annotations by human experts, which are costly to acquire at scale. Active learning seeks to ameliorate this problem by strategically choosing which examples to annotate, in the hope of getting greater performance with fewer annotations. To this end, we consider the following setup for interactively acquiring annotations. The learning process consists of multiple rounds: at the beginning of each round, the active learning algorithm chooses sentences to be annotated up to a predefined budget. After receiving annotations, we update the model parameters by training on the augmented dataset, and proceed to the next round. We assume that the cost of annotating a sentence is proportional to the number of words in the sentence and that every word in a selected sentence must be annotated at once, i.e. we do not allow or account for partially annotated sentences. While various existing active learning strategies suit this setup BID38, we explore the uncertainty sampling strategy, in which we rank the unlabeled examples according to the current model's uncertainty in its prediction of the corresponding labels. We consider three ranking methods, each of which can be easily implemented in the CNN-CNN-LSTM model or most other deep neural approaches to NER.

Least Confidence (LC): BID8 proposed to sort examples in ascending order according to the probability assigned by the model to the most likely sequence of tags:

$$\max_{y_1, \dots, y_n} P\big(y_1, \dots, y_n \mid \{x_{ij}\}\big).$$

Exactly computing this score requires identifying the most likely sequence of tags according to the LSTM decoder. Because determining the most likely sequence is intractable, we approximate the score using the probability assigned to the greedily decoded sequence.

Maximum Normalized Log-Probability (MNLP): Preliminary analysis revealed that the LC method disproportionately selects longer sentences. Note that sorting unlabeled examples in descending order of uncertainty is equivalent to sorting in ascending order by the following scores:

$$\sum_{i=1}^{n} \log P\big(y_i \mid y_1, \dots, y_{i-1}, \{x_{ij}\}\big).$$

Since this score contains a summation over words, the LC method naturally favors longer sentences. Because longer sentences require more labor for annotation, we find this undesirable, and propose to normalize the score by sentence length, which we call the Maximum Normalized Log-Probability method:

$$\frac{1}{n}\sum_{i=1}^{n} \log P\big(y_i \mid y_1, \dots, y_{i-1}, \{x_{ij}\}\big).$$

Bayesian Active Learning by Disagreement (BALD): We also consider sampling according to a Bayesian measure of uncertainty. Observing a correspondence between dropout BID41 and deep Gaussian processes BID9, prior work proposes that the variability of the predictions over successive forward passes due to dropout can be interpreted as a measure of the model's uncertainty BID11. Denote $P_1, P_2, \dots, P_M$ as the models resulting from applying $M$ independently sampled dropout masks. One measure of our uncertainty on the $i$-th word is $f_i$, the fraction of models which disagreed with the most popular choice:

$$f_i = 1 - \frac{\max_{y} \big|\{m :\ \arg\max_{y'} P_m(y_i = y') = y\}\big|}{M},$$

where $|\cdot|$ denotes the cardinality of a set. We normalize this by the number of words as $\frac{1}{n}\sum_{j=1}^{n} f_j$. In this paper, we draw $M = 100$ independent dropout masks.

Other Sampling Strategies: Consider that the confidence of the model can help to distinguish between hard and easy samples. Thus, sampling examples where the model is uncertain might save us from sampling too heavily from regions where the model is already proficient. But intuitively, when we query a batch of examples in each round, we might want to guard against querying examples that are too similar to each other, thus collecting redundant information. We also might worry that a purely uncertainty-based approach would oversample outliers. Thus we explore techniques to guard against these problems by selecting a set of samples that is representative of the dataset. Following prior work, we express the problem of maximizing the representativeness of a labeled set as a submodular optimization problem, and provide an efficient streaming algorithm adapted to use a constraint suitable for the NER task. Our approach to representativeness-based sampling proceeds as follows. Denote $X$ as the set of all samples, and $X_L$, $X_U$ as the sets of labeled and unlabeled samples, respectively. For an unlabeled set $S \subseteq X_U$, the utility $f_w$ is defined as the summation of the marginal utility gain over all unlabeled points, weighted by their uncertainty. More formally,

$$f_w(S) = \sum_{i \in X_U} \text{US}(i)\,\Big(\max_{j \in S \cup X_L} w(i,j) - \max_{j \in X_L} w(i,j)\Big),$$

where $\text{US}(i)$ is the uncertainty score on example $i$ and $w(i,j)$ is a similarity measure (defined in Appendix C). In order to find a good set $S$ with a high $f_w$ value, we exploit the submodularity of the function, and use an online algorithm under a knapsack constraint. More details of this method can be found in the supplementary material (Appendix C).
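A minimal sketch of MNLP scoring and budgeted selection follows; the per-token log-probabilities are assumed to come from the greedily decoded sequence described above, and the function names are illustrative rather than part of the reference implementation.

```python
# MNLP scoring and budget-constrained selection (sketch).
import numpy as np

def mnlp_score(token_log_probs) -> float:
    """Length-normalized log-probability of the greedily decoded tag
    sequence; lower scores indicate higher model uncertainty."""
    return float(np.mean(token_log_probs))

def select_round(sentences, scores, word_budget: int):
    """Pick the most uncertain sentences until the word budget is spent."""
    order = np.argsort(scores)          # ascending MNLP = most uncertain first
    chosen, used = [], 0
    for idx in order:
        n_words = len(sentences[idx])
        if used + n_words > word_budget:
            continue                    # skip sentences that overshoot the budget
        chosen.append(idx)
        used += n_words
    return chosen
```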
In our experiments, this representativeness-based approach fails to match the uncertainty-based heuristics or to improve upon them when used in combination. Nevertheless, we describe the algorithm and include the negative results for their scientific value.

Dropout probabilities are all set to 0.5. We use the structured skip-gram model BID28 trained on the Gigaword English corpus BID13, which gave a good boost over the vanilla skip-gram model BID31 (results not reported here). We use vanilla stochastic gradient descent, since it is commonly reported in the named entity recognition literature that this outperforms more sophisticated methods at convergence BID24 BID6. We uniformly set the step size to 0.001 and the batch size to 128. When using LSTMs for the tag decoder, we only use greedy decoding for inference; beam search gave a very marginal improvement in our initial experiments. We repeat each experiment four times, and report the mean and standard deviation. To measure the training speed of our models, we compute the time spent on one iteration of training over the dataset, with eight K80 GPUs on an Amazon Web Services p2.8xlarge instance.

TAB5 shows the comparison between our model and other best performing models. The LSTM tag decoder shows performance comparable to the CRF tag decoder, and it works better than the CRF decoder when used with a CNN encoder; compare CNN-CNN-LSTM vs. CNN-CNN-CRF in both tables. On the CoNLL-2003 English dataset, which has only four entity types, the training speeds of CNN-CNN-LSTM and CNN-CNN-CRF are comparable. However, on the OntoNotes 5.0 English dataset, which has 18 entity types, CNN-CNN-LSTM trains twice as fast as CNN-CNN-CRF, because the time complexity of computing the partition function for the CRF is quadratic in the number of entity types. CNN-CNN-LSTM is also 44% faster than CNN-LSTM-LSTM on OntoNotes, showing the advantage of a CNN over an LSTM as the word encoder; on CoNLL-2003, sentences tend to be shorter and this advantage was not clearly seen (the median sentence length is 12 words, as opposed to 17 for OntoNotes). Compared to the CNN-LSTM-CRF model, which is considered a state-of-the-art model in terms of performance BID6 BID42, CNN-CNN-LSTM provides a four-times speedup in training and achieves comparably high performance as measured by F1 score.

We use OntoNotes-5.0 English and Chinese data BID36 for our experiments. The training datasets contain 1,088,503 words and 756,063 words, respectively. State-of-the-art models trained on the full training sets achieve F1 scores of 86.86 and 75.63 (our CNN-CNN-LSTM) on the test sets. We empirically compare the selection algorithms proposed in Section 4, as well as a uniformly random baseline (RAND). All algorithms start with an identical 1% of the original training data and a randomly initialized model. In each round, every algorithm chooses sentences from the rest of the training data until 20,000 words have been selected, adding this data to its labeled training set.

Figure 5: Genre distribution of the top 1,000 sentences chosen by an active learning algorithm.

FIG5 shows the results. All active learning algorithms perform significantly better than the random baseline. Among active learners, MNLP and BALD slightly outperformed traditional LC in early rounds. Note that MNLP is computationally more efficient than BALD, since it only requires a single forward pass on the unlabeled dataset to compute uncertainty scores, whereas BALD requires multiple forward passes.
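For reference, a sketch of the BALD disagreement computation via Monte Carlo dropout; `model.predict_tags` is a hypothetical method returning a 1-D tensor of greedy per-word tag ids, and keeping the module in train mode so that dropout stays active at inference time follows the MC-dropout recipe.

```python
# BALD-style disagreement via Monte Carlo dropout (sketch).
import torch

def bald_disagreement(model, sentence, M: int = 100) -> float:
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        votes = torch.stack([model.predict_tags(sentence) for _ in range(M)])
    # votes: (M, n_words); majority tag per word across dropout masks.
    majority = votes.mode(dim=0).values
    f = (votes != majority).float().mean(dim=0)   # f_j: disagreement per word
    return f.mean().item()             # normalize by the number of words
```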
Impressively, the active learning algorithms achieve 99% of the performance of the best deep model trained on the full data using only 24.9% of the training data on the English dataset and 30.1% on Chinese. Also, 12.0% and 16.9% of the training data were enough for the deep active learning algorithms to surpass the performance of the shallow models from BID36 trained on the full training data. We repeated the experiment eight times and confirmed that the trend is replicated across multiple runs; see Appendix B for details.

Detection of under-explored genres: To better understand how active learning algorithms choose informative examples, we designed the following experiment. The OntoNotes datasets consist of six genres: broadcast conversation (bc), broadcast news (bn), magazine (mz), newswire (nw), telephone conversation (tc), and weblogs (wb). We created three training datasets: half-data, which contains a random 50% of the original training data; nw-data, which contains sentences only from newswire (51.5% of the words in the original data); and no-nw-data, which is the complement of nw-data. Then, we trained the CNN-CNN-LSTM model on each dataset. The model trained on half-data achieved 85.10 F1, significantly outperforming the models trained on the biased datasets (no-nw-data: 81.49, nw-only-data: 82.08). This shows the importance of good genre coverage in the training data. Then, we analyzed the genre distribution of the 1,000 sentences MNLP chose for each model (see Figure 5). For no-nw-data, the algorithm chose many more newswire (nw) sentences than it did for the unbiased half-data (367 vs. 217). On the other hand, it undersampled newswire sentences for nw-only-data and increased the proportion of broadcast news and telephone conversation, which are genres distant from newswire. Impressively, although we did not provide the genre of sentences to the algorithm, it was able to automatically detect under-explored genres.

One potential concern when decoding with an LSTM decoder, as compared to using a CRF decoder, is that finding the best sequence of labels, i.e. the one maximizing the probability $P\big(t_2, t_3, \dots, t_{n-1} \mid \{h^{\text{Enc}}_i\}\big)$, is computationally intractable. In practice, however, we find that simple greedy decoding, i.e., beam search with beam size 1, works surprisingly well. TAB8 shows how changing the beam size of the decoder affects the performance of the model. It can be seen that the performance of the model changes very little with respect to the beam size. Beam search with size 2 is marginally better than greedy decoding, and further increasing the beam size did not help at all. Moreover, we note that while it may be computationally efficient to pick the most likely tag sequence under a CRF decoder, the LSTM decoder may give more accurate predictions, owing to its greater representational power and ability to model long-range dependencies. Thus even if we do not always choose the most probable tag sequence from the LSTM, we can still outperform the CRF (as our experiments demonstrate).

In order to understand the variability of the learning curves in FIG5 across experiments, we repeated the active learning experiment on OntoNotes-5.0 English eight times, each of which started with a different, randomly chosen initial dataset. FIG6 shows the results in the first nine rounds of labeled data acquisition. While MNLP, LC and BALD are all competitive with each other, there is a noticeable trend that MNLP and BALD outperform LC in early rounds of data acquisition.
This appendix expands on the representativeness-based sampling strategy outlined in Section 4, whose motivation (guarding against redundant queries and oversampled outliers under pure uncertainty sampling) was given there; here we provide the full details, including some results with theoretical guarantees.

Submodular utility function: In order to reason about the similarity between samples, we first embed each sample $i$ into a fixed-dimensional Euclidean space as a vector $x_i$; one embedding method we consider is the average of pre-trained word embeddings. As similarity measures we use $w(i,j) = -\|x_i - x_j\|_p$ for $p = 1, 2$, which correspond to closeness in $L_1$ and $L_2$ distance, and $w(i,j) = 1 + \frac{x_i \cdot x_j}{\|x_i\|\,\|x_j\|}$, which corresponds to cosine similarity. Now, we formally define the utility function for labeling new samples. Denote $X$ as the set of all samples, which can be partitioned into two disjoint sets $X_L$, $X_U$ representing labeled and unlabeled samples, respectively. Let $S \subseteq X_U$ be a subset of unlabeled samples; then the utility of labeling the set is defined as follows:

$$f(S) = \sum_{i \in X_U} \Big(\max_{j \in S \cup X_L} w(i,j) - \max_{j \in X_L} w(i,j)\Big),$$

where the function measures the incremental gain of similarity between the labeled set and the rest. Given such a utility function $f(\cdot)$, choosing a set $S$ that maximizes the function within the budget can be seen as a monotone submodular maximization problem under a knapsack constraint BID21:

$$\max_{S \subseteq X_U} f(S) \quad \text{subject to} \quad k(S) \le K,$$

where $k(S)$ is the cost of the sample set $S$, and $K$ is the total budget within each round. Note that we need to consider the knapsack constraint instead of the cardinality constraint used in prior work, because the entire sentence needs to be labeled once selected, and sequences of different lengths confer different labeling costs.

Combination with uncertainty sampling: Representation-based sampling can benefit from uncertainty-based sampling in the following two ways. First, we can re-weight each sample in the utility function to reflect the current model's uncertainty about it:

$$f_w(S) = \sum_{i \in X_U} \text{US}(i)\,\Big(\max_{j \in S \cup X_L} w(i,j) - \max_{j \in X_L} w(i,j)\Big),$$

where $\text{US}(i)$ is the uncertainty score on example $i$.

Algorithm 1: Representativeness-based sampling.
1. While the test score of the model $M$ is less than a threshold $th$:
2.   Rank $X_U$ according to Section 4, keeping the top samples within budget $t \cdot K$.
3.   Set $f$ to the unweighted or uncertainty-weighted utility above.
4.   Select a set $S$ with the streaming algorithm (Algorithm 2) under budget $K$, query its labels, and move $S$ from $X_U$ to $X_L$.
5.   Train $M$ with $X_L$.

Second, even with state-of-the-art submodular optimization algorithms, the optimization problem can be computationally intractable. To improve the computational efficiency, we restrict the set of unlabeled examples to the top samples from uncertainty sampling within budget $t \cdot K$, where $t$ is a multiplication factor we set to 4 in our experiments.

Streaming algorithm for sample selection: Even after reducing the number of candidates via uncertainty sampling, the selection problem is still computationally challenging and requires careful design of optimization algorithms. Suppose $l$ is the number of samples we need to consider.
In the simplistic case in which all the samples have the same length, and thus the knapsack constraint degenerates to the cardinality constraint, the greedy algorithm BID33 has a (1 − 1/e)-approximation guarantee. However, it requires calculating the utility function O(l²n) times, where n is the number of unlabeled samples. In practice, both l and n are large. Alternatively, we can use lazy evaluation to decrease the computational complexity to O(ln) BID26, but it requires an additional hyperparameter to be chosen in advance. Instead of greedily selecting elements in an offline fashion, we adopt the two-pass streaming algorithm of BID2, whose complexity is O(ln), and generalize it to the knapsack constraint (shown in Alg. 2). In the first pass, we calculate the maximum function value of a single element normalized by its weight, which gives an estimate of the optimal value. In the second pass, we create O((1/ε) log K) buckets and greedily update each bucket: an element e is added to bucket S_v whenever its marginal improvement per unit cost, Δ_g(e|S_v)/k({e}), clears that bucket's threshold and the budget K is not exceeded, where each bucket has a different threshold value v, and Δ_g(e|S_v) := g({e} ∪ S_v) − g(S_v) is the marginal improvement of the submodular function g when adding element e to set S_v. The whole pipeline of the active learning algorithm is shown in Alg. 1. The algorithm gives the following guarantee, which is proven in the Appendix. Theorem 1. Alg. 2 gives a ((1 − ε)(1 − δ)/2)-approximation guarantee for Eq. 2, where δ = max_{e∈S} k({e})/K. Proof sketch: The criterion we use guarantees that each update we make is reasonably good. The set S_v stops updating when either the current budget is almost K, or no sample in the stream after we reach S_v provides enough marginal improvement. While it is easy to give guarantees when the budget is exhausted, this is unlikely to happen; we therefore bound the difference between the current set S_v and the optimal set, and prove that the gap between the two is under control. In a practical label acquisition process, the budget we set for each round is usually much larger than the length of the longest sentence in the unlabeled set, making δ negligible. In our experiments, δ was around 0.01.
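The following is a minimal single-bucket sketch of the second streaming pass under the knapsack constraint; the threshold form and helper names are our assumptions for illustration, not a verbatim transcription of Alg. 2.

```python
def stream_select(candidates, gain, cost, v, K):
    """One bucket of the second streaming pass.

    candidates: iterable of sample ids (the stream)
    gain(e, S): marginal improvement Delta_g(e | S) of adding e to S
    cost(e):    labeling cost k({e}), e.g. sentence length
    v:          this bucket's threshold (each bucket uses a different v)
    K:          total labeling budget for the round
    """
    S, spent = [], 0
    for e in candidates:
        c = cost(e)
        # add e when its gain per unit cost clears the threshold
        # and the knapsack budget K is respected
        if spent + c <= K and gain(e, S) / c >= v:
            S.append(e)
            spent += c
    return S
```

In the full algorithm this routine would run for every bucket value v over the stream, keeping the best resulting set.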
We introduce a lightweight architecture for named entity recognition and carry out incremental active learning, which is able to match state-of-the-art performance with just 25% of the original training data.
1,409
scitldr
Network quantization is a model compression and acceleration technique that has become essential to neural network deployment. Most quantization methods perform fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network. We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning. Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings. Compared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% and a full precision reference accuracy of 69.76%. We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy. Our codebase and trained models are available on GitHub. Neural network quantization is a technique to reduce the size of deep networks and to bypass computationally and energetically expensive floating-point arithmetic operations in favor of efficient integer arithmetic on quantized versions of model weights and activations. Network quantization has been the focus of intensive research in recent years, with most works belonging to one of two categories. The first line of work quantizes parts of the network while leaving a portion of its operations, e.g. computations in the first and last network layers, in floating point. While such networks can be highly efficient, using bitwidths down to 5 or 4 bits with minimal loss in network accuracy, they may be difficult to deploy in certain practical settings, due to the complexity of the extra floating point hardware needed to execute the non-quantized portions of the network. Another line of work aims for ease of real-world deployment by quantizing the entire network, including all weights and activations in all convolutional and fully connected layers; we term this scheme strict quantization. Maintaining accuracy under strict quantization is considerably more challenging. While nearly lossless 8-bit strictly quantized networks have been proposed, to date state-of-the-art 4-bit networks incur large losses in accuracy compared to full precision reference models. For example, the best previously reported strict 4-bit ResNet-18 model has 61.52% accuracy, compared to 69.76% for the full precision model, while the best previously reported strict 4-bit MobileNet-v2 model has 62.00% accuracy, compared to 71.88% accuracy in full precision. To understand the difficulty of training accurate low-bitwidth strictly quantized networks, consider a common training procedure which begins with a pre-trained network, quantizes the model, then applies fine-tuning using straight-through estimators (STE) for gradient updates until the model achieves sufficient quantized accuracy. This process faces two problems. First, as the pre-trained model was not initially trained with the task of being subsequently quantized in mind, it may not be "quantization-friendly". That is, the fine-tuning process may need to make substantial changes to the initial model in order to transform it into an accurate quantized model.
Second, fine-tuning a model, especially at low bitwidths, is difficult due to the lack of accurate gradient information provided by STE. (Figure 1: Architecture of the proposed GQ-Net. Input x_0 follows the top and bottom paths to produce the full precision and quantized outputs x_L and x̂_L, resp. These are combined through loss functions L_f and L_q to form the overall loss L, which is optimized by backpropagation. For more details please refer to Section 3.) In particular, fine-tuning using STE is done by updating a model represented internally with floating point values using gradients computed at the nearest quantizations of the floating point values. Thus for example, if we apply 4-bit quantization to floating point model parameters in the range [0, 1], a random parameter will incur an average round-off error of 1/32, which will be incorporated into the error in the STE gradient for this parameter, leading to possibly ineffective fine-tuning. To address these problems, we propose GQ-Net, a guided quantization training algorithm. The main goal of GQ-Net is to produce an accurate and quantization-friendly full precision model, i.e. a model whose quantized version, obtained by simply rounding each full precision value to its nearest quantized point, has nearly the same accuracy as itself. To do this, we design a loss function for the model which includes two components, one to minimize error with respect to the training labels, and another to minimize the distributional difference between the model's outputs and the outputs of the model's quantized version. This loss function has the effect of guiding the optimization process towards a model which is both accurate, by virtue of minimizing the first loss component, and also similar enough to its quantized version, due to minimization of the second component, to ensure that the quantized model is also accurate. In addition, because the first component of the loss function deals only with floating point values, it provides accurate gradient information during optimization, in contrast to STE-based optimization which uses biased gradients at rounded points; this further improves the accuracy of the quantized model. Since GQ-Net directly produces a quantized model which does not require further fine-tuning, the number of epochs required to train GQ-Net is substantially less than the total number of epochs needed to train and fine-tune a model using the traditional quantization approach, leading to significantly reduced wall-clock training time. We note that GQ-Net's technique is independent of and can be used in conjunction with other techniques for improving quantization accuracy, as we demonstrate in Section 4.3. Finally, we believe that the guided training technique we propose can also be applied to other neural network structural optimization problems such as network pruning. We implemented GQ-Net in PyTorch and our codebase and trained models are publicly available. We validated GQ-Net on the ImageNet classification task with the widely used ResNet-18 and compact MobileNet-v1/v2 models, and also performed a thorough set of ablation experiments to study different aspects of our technique. In terms of quantization accuracy loss compared to reference floating point models, GQ-Net strictly quantized using 4-bit weights and activations surpasses existing state-of-the-art strict methods by up to 2.7x, and also improves upon these methods even when they use higher bitwidths.
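To make the guided loss concrete, the following PyTorch-style sketch (ours, assuming a helper `quantized_model` that evaluates the network with quantized weights and activations) shows one way the overall loss L could be assembled; it is a sketch under these assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def gq_loss(model, quantized_model, x, labels, w_f=0.5, w_q=0.5):
    """Guided quantization loss L = w_f * L_f + w_q * L_q."""
    logits_fp = model(x)            # x_L: full precision output
    logits_q = quantized_model(x)   # x_hat_L: quantized output

    # L_f: ordinary cross-entropy of the full precision model
    loss_f = F.cross_entropy(logits_fp, labels)

    # L_q: KL(softmax(x_L) || softmax(x_hat_L)); detaching the fp logits
    # mirrors the trick described in Section 3.2 of not letting L_q's
    # gradient flow through sigma(x_L)
    p_fp = F.softmax(logits_fp.detach(), dim=1)
    log_p_q = F.log_softmax(logits_q, dim=1)
    loss_q = F.kl_div(log_p_q, p_fp, reduction="batchmean")

    return w_f * loss_f + w_q * loss_q
```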
In particular, 4-bit GQ-Net applied to ResNet-18 achieves 66.68% top-1 accuracy, compared to the previous state-of-the-art accuracy of 61.52% and a reference floating point accuracy of 69.76%, while on MobileNet-v2 GQ-Net achieves 66.15% top-1 accuracy compared to 62.00% accuracy previously and a reference floating point accuracy of 71.88%. Additionally, GQ-Net achieves these results using layer-wise quantization, as opposed to the channel-wise quantization of prior work, which further enhances the efficiency and practicality of the technique. Neural network quantization has been the subject of extensive investigation in recent years. Quantization can be applied to different parts of neural networks, including weights, activations or gradients. Several works quantized model weights to binary, ternary or multi-bit integers to reduce model size; others quantized activations of object detection models for knowledge transfer, or quantized model gradients to accelerate distributed training. Another line of work quantizes both weights and activations to accelerate model inference by utilizing fixed-point or integer arithmetic. A large set of methods have been proposed to improve training or fine-tuning for network quantization. Straight-through estimators (STE) propagate gradients through non-differentiable operations with the identity mapping. Other training methods "soften" non-differentiable operations to similar differentiable ones in order for gradients to pass through, then gradually anneal them to piecewise continuous functions by applying stronger constraints. Some works regard quantization as a stochastic process that produces parameterized discrete distributions, and guide training using gradients with respect to these parameters. Another line of work does not require fine-tuning, and instead re-calibrates or modifies the original network to recover accuracy using little or even no data. Several recent works have focused on quantizing all parts of a network, typically in order to support deployment using only integer arithmetic units and avoiding the cost and complexity of additional floating point units. One proposal performs network inference using dynamic fixed-point arithmetic, where bitwidths for the integer and mantissa parts are determined based on a model's weight distribution. Another proposed the quantization training and deployment algorithm behind the Tensorflow-Lite quantization runtime, which generates strictly quantized networks that can be easily implemented in hardware, and yet another a training method for strictly quantized models based on annealing a smooth quantization function to a piecewise continuous one. There has also been recent work on using parameterized quantizers which are optimized during quantization training: learnable upper bounds to control the range of quantization, quantizers with a learnable basis which can be executed using fixed-point arithmetic, and joint optimization of weight scaling and quantization ranges from task losses. In this section we describe the architecture of our proposed GQ-Net and then discuss components of the architecture which can be tuned to improve performance. The major components of GQ-Net include the following, and are illustrated in Figure 1: 1. An L-layer neural network h_W(·) with all computations performed using full precision floating point arithmetic. Here W = {W_1, ..., W_L} denotes the parameters (weights) of the model, with W_i, i ∈ 1...L, being the weights in layer i, expressed in floating point.
2. A set of quantizers Q = {Q^w_i, Q^a_i}, i.e. mappings from floating point to (scaled) integer values; the quantizers may be parameterized, and we describe how to optimize these parameters in Section 3.2. Q^w_i quantizes the weights W_i and Q^a_i quantizes the activations in layer i. Let x_0 denote an input to h_W. To construct the output ĥ_{W,Q}(x_0) of the quantized network, we proceed layer by layer. We first quantize the weights in layers i = 1, ..., L as ŵ_i = Q^w_i(w_i), and also quantize the input by setting x̂_0 = Q^a_0(x_0). We then compute the quantized activations x̂_i in layer i iteratively for i = 1, ..., L as x̂_i = Q^a_i(g_i(ŵ_i ∗ x̂_{i−1})), where g_i(·) denotes the nonlinearity function in layer i and ∗ denotes convolution. Note that since ŵ_i and x̂_{i−1} are quantized, x̂_i can be computed using integer or fixed point arithmetic. 3. Next, we construct a loss function L incorporating both the training loss L_f of the full precision model h_W and a loss L_q capturing the difference between h_W and the quantized model ĥ_{W,Q}: L = ω_f L_f + ω_q L_q. Here ω_f, ω_q ∈ R are parameters capturing the relative importance of the training loss versus the distributional loss. In this paper, we focus on image classification networks, and thus we set L_f to be the cross-entropy loss between the outputs from h_W and the training labels. We set L_q = KL(σ(h_W) ‖ σ(ĥ_{W,Q})), where σ denotes the softmax function, i.e. the KL divergence between the distributions σ(h_W) and σ(ĥ_{W,Q}) on each input. Hence, minimizing the second term in L corresponds to pushing the floating point and quantized models to behave similarly to each other. Since the weight parameters W appear in both terms in L, the two terms can give conflicting signals for updating W during the optimization of L, causing the optimization to be unstable. We discuss how to deal with this problem in Section 3.2. To train GQ-Net, we successively take mini-batches of training samples and labels and use them to compute L during the forward pass, and propagate gradients with respect to W and the parameters of Q during the backward pass in order to minimize L. After L has converged sufficiently, we take the quantized weights in ĥ_{W,Q}(·) as the quantized model. We now describe how different components of GQ-Net can be optimized to improve accuracy and training stability. Weight scheduling for L_f and L_q Parameters ω_f and ω_q capture the relative importance of the cross-entropy and KL divergence errors during training. A large ω_f ignores the similarity between the floating point and quantized models and may result in a model that is accurate but not quantization-friendly. Conversely, a large ω_q ignores guidance on accuracy from the floating point model and may result in similar but poorly performing floating point and quantized models. We tested different schemes for weighting L_f and L_q, and found that using fixed values such as ω_f = ω_q = 0.5 already yields better results than many current methods, as discussed in Section 4. However, further experimentation showed that scheduling, i.e. dynamically modifying the values of ω_f, ω_q during training, can produce higher accuracy than using static values. For example, consider a schedule as shown in Figure 2a, which initially sets ω_f = 1, ω_q = 0, then alternates between setting ω_q = 1 and ω_q = 0 several times. Schedules of this sort can be understood as initially favoring model accuracy so that the floating point model is driven to a high accuracy region of model space, before increasing the importance of model similarity so that a quantization-friendly model can be found in the high accuracy region.
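As a minimal illustration of such scheduling, the toy function below toggles ω_q on and off during training; the epoch boundaries are invented placeholders, not the paper's values.

```python
def loss_weights(epoch: int):
    """Alternating schedule for (w_f, w_q).

    Warms up the full-precision model first, then alternates short
    accuracy-only phases with longer similarity phases; the switch
    points here are illustrative assumptions.
    """
    if epoch < 10:  # initial phase: w_f = 1, w_q = 0
        return 1.0, 0.0
    # afterwards, periodically toggle the distributional term
    return (1.0, 0.0) if (epoch // 5) % 4 == 0 else (1.0, 1.0)
```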
This is repeated several times, leading to increasingly more accurate full precision models whose accuracy is then transferred to the quantized model. As demonstrated in Section 4, this schedule results in better performance than static ones. Reducing interference between L_f and L_q The loss function L includes the terms L_f and L_q, where the former is a function of the floating point parameters W, and the latter involves both W and the quantized version Ŵ of W parameterized by θ. We discovered that directly optimizing L resulted in reduced accuracy, which we attribute to conflicting updates to W produced by gradients from the L_f and L_q terms. Ideally, we would like to update W to minimize L_f, while independently updating Ŵ to minimize L_q. While this is clearly not possible due to the dependency between W and Ŵ, we found that a heuristic based on this idea helped improve accuracy. In particular, let x_L = h_W(x_0) and x̂_L = ĥ_{W,Q}(x_0) be the outputs of the full precision and quantized networks on input x_0, so that L_q = KL(σ(x_L) ‖ σ(x̂_L)), as in Section 3.1. During back propagation we compute L_f from x_L and derive ∇_{x_L} L_f as usual, and use this via the chain rule to update W. However, when computing L_q from x_L and x̂_L, we treat x_L as a constant tensor which does not produce any gradients, and only derive ∇_{x̂_L} L_q and use this via the chain rule to update W. We can implement this behavior using the detach operator in PyTorch or the stop_gradient operator in TensorFlow. Parameterized quantizer GQ-Net can use any type of quantizer Q(·): R → T for a discrete set T. Motivated by recent work such as PACT, we adopt layer-wise linear quantizers with learnable boundaries in GQ-Net. In particular, for each layer i, we use one weight quantizer function Q^w_{i,θ^w_i}(·) for all weights in the layer, and one activation quantizer function Q^a_{i,θ^a_i}(·) for all activations. Here, θ^w_i and θ^a_i represent learnable parameters; for expository simplicity we drop a, w and i in the following and denote all parameters by θ. θ consists of k, the quantization bitwidth, and lb and ub, representing the lower and upper quantization boundaries. We use uniform quantization, which is substantially easier to implement in hardware, and set Q_θ(x) = lb + ⌊(clamp(x, lb, ub) − lb)/s⌉ · s with step size s = (ub − lb)/(2^k − 1), where clamp(x, a, b) = max(a, min(x, b)) and ⌊·⌉ represents the round operator. During training, gradients propagate through the non-differentiable ⌊·⌉ operator using the straight-through estimator (STE), i.e. ∂⌊x⌉/∂x = 1. Parameters lb, ub are updated by the gradients propagated from the loss function L, and thus the quantizers will learn to set the appropriate quantization boundaries to improve accuracy. Alternatingly optimizing W and θ The accuracy of the quantized model depends both on the weights W of the floating point model as well as on how these are quantized using the quantizers parameterized by θ. We found that jointly optimizing W and θ in each iteration resulted in unstable training and poor accuracy. Performance was substantially improved by alternatingly optimizing W and θ. That is, in each training epoch we update either the values in W or θ while freezing the values of the other set of parameters, then switch the updated and frozen sets in the next epoch. The reason alternating optimization improved training is that both W and θ affect the values of the quantized weights, so that updating both simultaneously may cause suboptimal changes to the quantized weights. Multi-domain batch normalization Batch normalization is a critical component in large model training.
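A small PyTorch sketch of such a learnable uniform quantizer with a straight-through round (our illustrative reconstruction under the definitions above, not the released code):

```python
import torch

class LinearQuantizer(torch.nn.Module):
    """Uniform quantizer with learnable boundaries lb and ub (cf. PACT)."""

    def __init__(self, bits: int, lb: float = -1.0, ub: float = 1.0):
        super().__init__()
        self.bits = bits
        self.lb = torch.nn.Parameter(torch.tensor(lb))
        self.ub = torch.nn.Parameter(torch.tensor(ub))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = (self.ub - self.lb) / (2 ** self.bits - 1)          # step size
        xc = torch.maximum(torch.minimum(x, self.ub), self.lb)  # clamp
        q = torch.round((xc - self.lb) / s) * s + self.lb       # quantize
        # straight-through estimator: forward uses q, backward treats
        # round() as identity so gradients flow to x, lb and ub
        return xc + (q - xc).detach()
```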
However, since GQ-Net trains a floating point and a quantized model simultaneously, we need to adjust the way batch normalization is performed to achieve good accuracy. In particular, as illustrated in Figure 2, activations from the floating point and quantized models follow different distributions. Thus, normalizing them with the same set of running statistics can hinder training. Instead, we regard activations in different numerical precision as being from different domains, similar to multi-domain transfer learning, and normalize them with separate statistics {µ_f, σ_f} and {µ_q, σ_q}: x̃_f = (x_f − µ_f)/σ_f and x̃_q = (x_q − µ_q)/σ_q. This modification only introduces minor storage overhead, while it significantly improves GQ-Net's accuracy in both full-precision and quantized settings. To validate the effectiveness of GQ-Net and assess its different components, we conducted a series of comparisons and ablation studies using the ImageNet classification task. We used the ILSVRC 2012 dataset, which consists of 1.2M training samples and 50K validation samples from 1K categories, and evaluated our system using top-1 and top-5 validation accuracies. Network settings We used the ResNet-18, MobileNet-v1 and MobileNet-v2 architectures in the ImageNet experiments. All MobileNets used channel expansion ratio 1.0. Unlike some recent works, we did not modify the order of the Conv, BN and ReLU layers. We replaced the BatchNorm layers in these models with SyncBatchNorm, i.e. we used mini-batch statistics µ_f, µ_q and σ_f, σ_q computed from all distributed GPUs during training. Quantization settings Unless otherwise specified, all of the following experiments were conducted with parameterized linear quantizers using a bitwidth of 4. The weight and activation quantizers each have their own parameters θ = {lb, ub}, which are initialized at the iteration right before the quantization error penalty L_q is enabled. Specifically, for weight quantizers, {lb, ub} are initialized by the minimum and maximum elements in the weight tensors of each layer. For activation quantizers, {lb, ub} are initialized by the lower and upper 99.9% percentile values, computed from 5 mini-batches sampled from the training set. Weights and activations of all layers were quantized, including the first and last layers. Training protocol We used the same training protocol for all architectures and quantization settings. Training was performed on 32 distributed GPUs, each with a mini-batch size of 64, and stopped after 120 epochs on the training set. Model weights W and quantization parameters θ were optimized with different optimization settings. Model weights W were randomly initialized using the Kaiming-normal scheme without using a pre-trained model, and optimized by SGD with 0.9 Nesterov momentum and 10^{-4} weight decay. The learning rate warmed up from 0.2 to 0.8 in the first 4 epochs, and decayed twice by a factor of 0.1 at epochs 60 and 90. Quantization parameters θ were optimized using Adam without weight decay, with coefficients for the first and second momenta set to β_1 = 0.9 and β_2 = 0.999, and learning rate fixed to 10^{-3} during the entire training process. Following standard practice, training samples were resized and randomly cropped to 224 × 224 pixels, followed by random horizontal flipping and normalization. Validation samples were centrally cropped to 224 × 224 pixels, followed by normalization. We validated the effectiveness of GQ-Net by comparing it with several other state-of-the-art quantization methods.
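Returning to the multi-domain batch normalization described above, a minimal sketch (our illustration) is to keep one BN module per numerical domain and route activations accordingly:

```python
import torch

class MultiDomainBN(torch.nn.Module):
    """One BatchNorm per domain so full-precision and quantized
    activations are normalized with separate running statistics."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_fp = torch.nn.BatchNorm2d(num_features)  # {mu_f, sigma_f}
        self.bn_q = torch.nn.BatchNorm2d(num_features)   # {mu_q, sigma_q}

    def forward(self, x: torch.Tensor, quantized: bool) -> torch.Tensor:
        return self.bn_q(x) if quantized else self.bn_fp(x)
```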
As GQ-Net seeks to fully quantize a network and execute it using only integer operations, we perform comparisons with other strict quantization methods. The comparison baselines are indicated as White Paper, Integer-only and RelaxedQuant, resp., in Table 1. The first row in the table contains the full precision accuracy evaluated using reference implementations [2], while the second row contains the full precision accuracy for models trained with our training protocol. For the ResNet-18 model, which is widely studied in the network compression literature, we significantly outperform the state-of-the-art strict quantization method RelaxedQuant in an equal bitwidth setting (+5.16% top-1 accuracy improvement with 4-bit weights and activations). Our 4-bit model even outperforms the comparison methods when they use higher bitwidths (+1.58% compared with 5-bit RelaxedQuant, +2.04% compared with 5-bit Integer-only). For the compact MobileNets family, our method also achieves higher accuracy using lower bitwidths. For example, compared with the White Paper method at 8-bit weights and 4-bit activations, our 4-bit W/A GQ-Net MobileNet-v2 model achieves a +8.15% top-1 accuracy improvement, and our GQ-Net MobileNet-v1 model obtains +1.04% higher top-1 accuracy in the same setting. We also validated the effectiveness of the different components of GQ-Net discussed in Section 3.2, by progressively adding them to the vanilla quantization-friendly training protocol, and applying these protocols to the ResNet-18 architecture with its weights and activations quantized to 4 bits. For the vanilla setting, W and θ were optimized jointly at each training step, loss weights were set to ω_f = ω_q = 0.5 through the entire training process, gradients from L_q propagated through both σ(x_L) and σ(x̂_L), and full precision and quantized activations were normalized using the same set of running statistics. For a more complete comparison, we also evaluated these settings using the full precision GQ-Net model. The numerical results are given in Table 2. By alternatingly updating the model weights W and quantizer parameters θ between training steps, quantization accuracy improved by +4.24% over the vanilla protocol. This indicates that although the gradients ∇_W L and ∇_θ L both guide their respective parameters to minimize the training loss, combining them in a training step makes the quantized weights Ŵ derived from these parameters suboptimal. Dynamically adjusting the weights ω_f and ω_q during different parts of the training process as described in Section 3.2 improved quantization accuracy by +0.23%. This suggests that the importance of the accuracy loss L_f and the distributional loss L_q may not be equal at different stages of the training process. ([2] Implementations of ResNet-18 and MobileNet-v2 are taken from the torchvision package (v0.3.0, https://pytorch.org/docs/stable/torchvision/models.html). The accuracy of MobileNet-v1 is cited from the original paper.) Table 2: Full-precision (FP) and quantized (Q) top-1 accuracy for 4-bit weights and activations in ResNet-18 on ImageNet. Components of GQ-Net are progressively applied. The first row indicates the vanilla setting. A check mark in column "Alt {W, θ}" means alternatingly optimizing W and θ. A check for "Schedule ω_f, ω_q" means ω_f and ω_q were dynamically adjusted, as described in §3.2. A check for "Detach σ(x_L)" means that σ(x_L) is treated as a constant and we do not propagate ∇_{σ(x_L)} L_q during the backward pass.
A check for "Multi-domain BN" means that full-precision and quantized activations are normalized using separate running statistics. The full-precision reference has top-1 accuracy = 69.89.) Alt the training process. The schedule we used, alternating between short periods of ω q = 0 and longer periods of ω q = 1, suggests we should first allow the floating point model settle into a reasonably accurate state before enforcing its similarity to the quantized model. Blocking the gradient from L q caused by σ(x L) further improved quantization accuracy by +0.66%. This indicates that although the full precision logits x L and quantized logitsx L are both derived from the same set of parameters W, it is useful to heuristically separate their effects during backpropagation. Normalizing full precision and quantized activations by different sets of running statistics in the BatchNorm layers improved both full precision accuracy (+1.69%) and quantized accuracy (+0.60%). This highlights the difference in the mean and variance of the full precision and quantized activations, despite the similarity of the full precision and quantized models in terms of KL divergence. Lastly, we considered the effectiveness of using parameterized quantizers. We tested replacing the parameterized quantizers in GQ-Net with naive linear quantizers using fixed weight and activation quantization ranges set to the 99.9% percentile values derived from 5 initial training mini-batches. We found that using fixed quantizers significantly lowered accuracy, by 3.59%. We note that RelaxedQuant also uses learned quantizers, and that replacing these with fixed quantizers may also in decreased accuracy. In this paper we presented GQ-Net, a novel method for training accurate quantized neural networks. GQ-Net uses a loss function balancing full precision accuracy as well as similarity between the full precision and quantized models to guide the optimization process. By properly tuning the weights of these two factors, we obtained fully quantized networks whose accuracy significantly exceeds the state of the art. We are currently studying additional ways to adjust GQ-Net components to further improve accuracy. We are also interested in combining GQ-Net with complementary quantization techniques, and in applying similar methodologies to other neural network optimization problems.
We train accurate fully quantized networks using a loss function maximizing full precision model accuracy and minimizing the difference between the full precision and quantized networks.
1,410
scitldr
While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance. To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network. Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art "ResNeXt" multi-branch network given the same learning capacity. Deep neural networks have emerged as one of the most prominent models for problems that require the learning of complex functions and that involve large amounts of training data. While deep learning has recently enabled dramatic performance improvements in many application domains, the design of deep architectures is still a challenging and time-consuming endeavor. The difficulty lies in the many architecture choices that impact, often significantly, the performance of the system. In the specific domain of image categorization, which is the focus of this paper, significant research effort has been invested in the empirical study of how depth, filter sizes, number of feature maps, and choice of nonlinearities affect performance BID8 BID17 BID24 BID19. Recently, several authors have proposed to simplify the architecture design by defining convolutional neural networks (CNNs) in terms of combinations of basic building blocks. This strategy was arguably first popularized by the VGG networks BID25, which were built by stacking a series of convolutional layers having identical filter size (3 × 3). The idea of modularized CNN design was made even more explicit in residual networks (ResNets) BID13, which are constructed by combining residual blocks of fixed topology. While in ResNets residual blocks are stacked one on top of each other to form very deep networks, the recently introduced ResNeXt models BID31 have shown that it is also beneficial to arrange these building blocks in parallel to build multi-branch convolutional networks. The modular component of ResNeXt then consists of C parallel branches, corresponding to residual blocks with identical topology but distinct parameters. Networks built by stacking these multi-branch components have been shown to lead to better results than single-thread ResNets of the same capacity. While the principle of modularized design has greatly simplified the challenge of building effective architectures for image analysis, the choice of how to combine and aggregate the computations of these building blocks still rests on the shoulders of the human designer.
In order to avoid a combinatorial explosion of options, prior work has relied on simple, uniform rules of aggregation and composition. (Figure 1: Different types of building blocks for modular network design: (a) a prototypical residual block with bottleneck convolutional layers BID13; (b) the multi-branch ResNeXt module consisting of C parallel residual blocks BID31; (c) our approach replaces the fixed aggregation points of ResNeXt with learnable masks m defining the input connections for each individual residual block.) For example, ResNeXt models BID31 are based on the following set of simplifying assumptions: the branching factor C (also referred to as cardinality) is fixed to the same constant in all layers of the network, all branches of a module are fed the same input, and the outputs of parallel branches are aggregated by a simple additive operation that provides the input to the next module. In this paper we remove these predefined choices and propose an algorithm that learns to combine and aggregate building blocks of a neural network. In this new regime, the network connectivity naturally arises as a result of the training optimization rather than being hand-defined by the human designer. We demonstrate our approach using residual blocks as our modular components, but we take inspiration from ResNeXt by arranging these modules in a multi-branch architecture. Rather than predefining the input connections and aggregation pathways of each branch, we let the algorithm discover the optimal way to combine and connect residual blocks with respect to the end learning objective. This is achieved by means of masks, i.e., learned binary parameters that act as "switches" determining the final connectivity in our network. The masks are learned together with the convolutional weights of the network, as part of a joint optimization via backpropagation with respect to a traditional multi-class classification objective. We demonstrate that, given the same budget of residual blocks (and parameters), our learned architecture consistently outperforms the predefined ResNeXt network in all our experiments. An interesting byproduct of our approach is that it can automatically identify residual blocks that are superfluous, i.e., unnecessary or detrimental for the end objective. At the end of the optimization, these unused residual blocks can be pruned away without any impact on the learned hypothesis while yielding substantial savings in the number of parameters to store and in test-time computation. 2 TECHNICAL APPROACH We begin by providing a brief review of residual blocks BID13, which represent the modular components of our architecture. We then discuss ResNeXt BID31, which inspired the multi-branch structure of our networks. Finally, we present our approach to learning the connectivity of multi-branch architectures using binary masks. Residual Learning. The framework of residual learning was introduced by He et al.
BID13 as a strategy to cope with the challenging optimization of deep models. The approach was inspired by the observation that deeper neural networks, despite having larger learning capacity than shallower models, often yield higher training error, due to the difficulty of optimization posed by increasing depth. Yet, given any arbitrary shallow network, it is trivially possible to reproduce its function using a deeper model, e.g., by copying the shallow network into the top portion of the deep model and by setting the remaining layers to implement identity functions. This simple yet revealing intuition inspired the authors to introduce residual blocks, which learn residual functions with reference to the layer input. Figure 1(a) illustrates an example of these modular components, where the 3 layers in the block implement a residual function F(x). A shortcut connection aggregates the residual block output F(x) with its input x, thus computing F(x) + x, which becomes the input to the next block. The point of this module is that if at any depth in the network the representation x is already optimal, then F(x) can be trivially set to be the zero function, which is easier to learn than an identity mapping. In fact, it was shown in BID13 that reformulating the layers as learning residuals eases optimization and enables the effective training of networks that are substantially deeper than previously possible. Since we are interested in applying our approach to image categorization, in this paper we use convolutional residual blocks with the bottleneck design of BID13, shown in Figure 1(a). The first 1 × 1 layer projects the input feature maps onto a lower-dimensional embedding, the second applies 3 × 3 filters, and the third restores the original feature map dimensionality. As in BID13, Batch Normalization BID15 and ReLU BID17 are applied after each layer, and a ReLU is used after each aggregation. The multi-branch architecture of ResNeXt. Recent work BID31 has shown that it is beneficial to arrange residual blocks not only along the depth dimension but also to implement multiple parallel threads of computation feeding from the same input layer. The outputs of the parallel residual blocks are then summed up together with the original input and passed on to the next module. The resulting multi-branch module is illustrated in Figure 1(b). More formally, let F(x; θ_j^{(i)}) be the transformation implemented by the j-th residual block in the i-th module of the network, where j = 1, ..., C and i = 1, ..., L, with L denoting the total number of modules stacked on top of each other to form the complete network. The hyperparameter C is called the cardinality of the module and defines the number of parallel branches within each module. The hyperparameter L controls the total depth of the network: under the assumption of 3 layers per residual block (as shown in the figure), the total depth of the network is given by D = 2 + 3L (an initial convolutional layer and an output fully-connected layer add 2 layers). Note that in ResNeXt all residual blocks in a module have the same topology (F) but each block has its own parameters (θ_j^{(i)} denotes the parameters of residual block j in module i). Then, the output of the i-th module is computed as: y = x + Σ_{j=1}^{C} F(x; θ_j^{(i)}). Tensor y represents the input to the (i + 1)-th module.
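For concreteness, a bottleneck residual block of this form might look as follows in PyTorch; this is a sketch under the conventions above, not the authors' exact code, and the channel arguments are illustrative.

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 restore, with BN + ReLU after each layer."""

    def __init__(self, channels: int, bottleneck_width: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck_width, 1, bias=False),
            nn.BatchNorm2d(bottleneck_width), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_width, bottleneck_width, 3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck_width), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_width, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # residual computation F(x) + x, with a ReLU after the aggregation
        return torch.relu(self.body(x) + x)
```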
Note that the ResNeXt module effectively implements a split-transform-merge strategy that performs a projection of the input into separate lower-dimensional embeddings (via bottlenecks), a separate transformation within each embedding, a projection back to the high-dimensional space, and a final aggregation via addition. It can be shown that the solutions that can be implemented by such a module are a strict subspace of the solutions of a single layer operating on the high-dimensional embedding, but at a considerably lower cost in terms of computational complexity and number of parameters. In BID31 it was experimentally shown that increasing the cardinality C is a more effective way of improving accuracy compared to increasing depth or the number of filters. In other words, given a fixed budget of parameters, ResNeXt multi-branch networks were shown to consistently outperform single-branch ResNets of the same learning capacity. We note, however, that in an attempt to ease network design, several restrictive limitations were embedded in the architecture of ResNeXt modules: each ResNeXt module implements C parallel feature extractors that operate on the same input; furthermore, the number of active branches is constant at all depth levels of the network. In the next subsection we present an approach that removes these restrictions without adding any significant burden on the process of manual network design (with the exception of a single additional integer hyperparameter for the entire network). Our masked multi-branch architecture. As in ResNeXt, our proposed architecture consists of a stack of L multi-branch modules, each containing C parallel feature extractors. However, differently from ResNeXt, each branch in a module can take a different input. The input pathway of each branch is controlled by a binary mask vector that is learned jointly with the weights of the network. Let m_j^{(i)} ∈ {0, 1}^C be the binary mask vector defining the active input connections feeding the j-th residual block in module i. If m_{j,k}^{(i)} = 1, then the activation volume produced by the k-th branch in module (i − 1) is fed as input to the j-th residual block of module i. If m_{j,k}^{(i)} = 0, then the output from the k-th branch in the previous module is ignored by the j-th residual block of the current module. Thus, if we denote with y_k^{(i−1)} the output activation tensor computed by the k-th branch in module (i − 1), the input x_j^{(i)} to the j-th residual block in module i will be given by the following equation: x_j^{(i)} = Σ_{k=1}^{C} m_{j,k}^{(i)} · y_k^{(i−1)} (Eq. 2). Then, the output of this block will be obtained through the usual residual computation, i.e., y_j^{(i)} = x_j^{(i)} + F(x_j^{(i)}; θ_j^{(i)}). We note that under this model we no longer have fixed aggregation nodes summing up all outputs computed from a module. Instead, the mask m_j^{(i)} now determines selectively for each block which branches from the previous module will be aggregated and provided as input to the block. Under this scheme, the parallel branches in a module receive different inputs and as such are likely to yield more diverse features. We point out that depending on the constraints posed over m_j^{(i)}, different interesting models can be realized. For example, by introducing the constraint that Σ_k m_{j,k}^{(i)} = 1 for all blocks j, each residual block will receive input from only one branch (since each m_{j,k}^{(i)} must be either 0 or 1); a code sketch of this masked aggregation is given below.
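The sketch below illustrates how such a masked module could be wired up, reusing the BottleneckBlock sketch above; it is our illustrative reading of Eq. 2 and the residual computation, with the binary masks held as a fixed buffer for clarity rather than learned.

```python
import torch
import torch.nn as nn

class MaskedModule(nn.Module):
    """C parallel residual blocks whose inputs are selected by binary masks."""

    def __init__(self, channels: int, width: int, cardinality: int):
        super().__init__()
        self.blocks = nn.ModuleList(
            BottleneckBlock(channels, width) for _ in range(cardinality)
        )
        # masks[j, k] = 1 iff branch k of the previous module feeds block j;
        # a plain buffer here, whereas the paper learns these jointly
        self.register_buffer("masks", torch.ones(cardinality, cardinality))

    def forward(self, ys_prev: list) -> list:
        y_stack = torch.stack(ys_prev)          # (C, N, channels, H, W)
        outputs = []
        for j, block in enumerate(self.blocks):
            # x_j = sum_k m_{j,k} * y_k   (Eq. 2)
            x_j = (self.masks[j].view(-1, 1, 1, 1, 1) * y_stack).sum(dim=0)
            outputs.append(block(x_j))          # y_j = x_j + F(x_j)
        return outputs
```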
It can be noted that at the other end of the spectrum, if we set m_{j,k}^{(i)} = 1 for all blocks j, k in each module i, then all connections would be active and we would obtain again the fixed ResNeXt architecture. In our experiments we will demonstrate that the best results are achieved for a middle ground between these two extremes, i.e., by connecting each block to K branches, where K is an integer-valued hyperparameter such that 1 < K < C. We refer to this hyperparameter as the fan-in of a block. As discussed in the next section, the mask vector m_j^{(i)} for each block is learned simultaneously with all the other weights in the network via backpropagation. Finally, we note that it may be possible for a residual block in the network to become unused. This happens when, as a result of the optimization, block k in module (i − 1) is such that m_{j,k}^{(i)} = 0 for all j = 1, ..., C. In this case, at the end of the optimization, we prune the block in order to reduce the number of parameters to store and to speed up inference (note that this does not affect the function computed by the network). Thus, at any point in the network the total number of active parallel threads can be any number smaller than or equal to C. This implies that a variable branching factor is learned adaptively for the different depths in the network. We refer to our learning algorithm as MASKCONNECT. It performs joint optimization of a given learning objective with respect to both the weights of the network (θ) as well as the masks (m). Since in this paper we apply our method to the problem of image categorization, we use the traditional multi-class cross-entropy objective for the loss. However, our approach can be applied without change to other loss functions as well as to other tasks benefiting from a multi-branch architecture. In MASKCONNECT the weights have real values, as in traditional networks, while the branch masks have binary values. This renders the optimization more challenging. To learn these binary parameters, we adopt a modified version of backpropagation, inspired by the algorithm proposed by Courbariaux et al. BID3 to train neural networks with binary weights. During training we store and update a real-valued version m̃_j^{(i)} ∈ [0, 1]^C of the branch masks, with entries clipped to lie in the continuous interval from 0 to 1. In general, the training via backpropagation consists of three steps: 1) forward propagation, 2) backward propagation, and 3) parameter update. At each iteration, we stochastically binarize the real-valued branch masks into binary-valued vectors m_j^{(i)} ∈ {0, 1}^C, which are then used for the forward propagation and backward propagation (steps 1 and 2). Instead, during the parameter update (step 3), the method updates the real-valued branch masks m̃_j^{(i)}. The weights θ of the convolutional and fully connected layers are optimized using standard backpropagation. We discuss below the details of our mask training procedure, under the constraint that at any time there can be only K active entries in the binary branch mask m_j^{(i)}, where K is a predefined integer hyperparameter with 1 ≤ K ≤ C. In other words, we impose the following constraints: Σ_{k=1}^{C} m_{j,k}^{(i)} = K, ∀j ∈ {1, ..., C} and ∀i ∈ {1, ..., L}. These constraints imply that each residual block receives input from exactly K branches of the previous module. Forward Propagation. During the forward propagation, our algorithm first normalizes the C real-valued branch masks for each block j to sum up to 1, i.e., such that Σ_{k=1}^{C} m̃_{j,k}^{(i)} = 1. This is done so that Mult(m̃_{j,1}^{(i)}, m̃_{j,2}^{(i)}, ..., m̃_{j,C}^{(i)}) defines a proper multinomial distribution over the C branch connections feeding into block j. Then, the binary branch mask m_j^{(i)} is stochastically generated by drawing K distinct samples a_1, a_2, ..., a_K ∈ {1, ..., C} from this multinomial distribution over the branch connections, and the entries corresponding to the K samples are activated in the binary branch mask vector, i.e., m_{j,a_k}^{(i)} = 1 for k = 1, ..., K.
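A compact sketch of this stochastic binarization step (our illustration; the K samples are drawn without replacement, matching the requirement of K distinct indices):

```python
import torch

def sample_binary_mask(m_real: torch.Tensor, K: int) -> torch.Tensor:
    """Stochastically binarize a real-valued mask vector.

    m_real: shape (C,), entries clipped to [0, 1] during training.
    Returns a {0,1} vector with exactly K active entries.
    """
    probs = m_real / m_real.sum()              # normalize to sum to 1
    # K distinct indices, drawn proportionally to the real-valued masks
    idx = torch.multinomial(probs, K, replacement=False)
    mask = torch.zeros_like(m_real)
    mask[idx] = 1.0
    return mask

# after training, connectivity is fixed by keeping the K largest entries:
# mask = torch.zeros_like(m_real); mask[m_real.topk(K).indices] = 1.0
```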
The input activation volume to the residual block j is then computed according to Eq. 2 from the sampled binary branch masks. We note that sampling from the multinomial distribution ensures that the connections with the largest m̃_{j,k}^{(i)} values will be more likely to be chosen, while at the same time the stochasticity of this process allows different connectivities to be explored, particularly during early stages of learning, when the real-valued masks still have fairly uniform values. Backward Propagation. In the backward propagation step, the gradient with respect to each branch output y_k^{(i−1)} is propagated back through the binary masks, so that only the active connections contribute to the gradients of the previous module. Mask Update. In the parameter update step our algorithm computes the gradient with respect to the binary branch masks for each branch. Then, using these computed gradients and the given learning rate, it updates the real-valued branch masks via gradient descent. At this time we clip the updated real-valued branch masks to constrain them to remain within the valid interval. The same clipping strategy was adopted for the binary weights in the work of BID3. As discussed in the supplementary material, after joint training over θ and m, we have found it beneficial to fine-tune the weights θ of the network with fixed binary masks (connectivity), by setting as active connections for each block j in module i those corresponding to the K largest values in m̃_j^{(i)}. Pseudocode for our training procedure is given in the supplementary material. We tested our approach on the task of image categorization using several benchmarks: CIFAR-10, CIFAR-100 BID16, Mini-ImageNet BID29, as well as the full ImageNet BID4. In this section we discuss results achieved on CIFAR-100 and ImageNet BID4, while the results for CIFAR-10 and Mini-ImageNet BID29 can be found in the Appendix. CIFAR-100 is a dataset of color images of size 32x32. It consists of 50,000 training images and 10,000 test images. Each image in CIFAR-100 is categorized into one of 100 possible classes. Effect of fan-in (K). We start by studying the effect of the fan-in hyperparameter (K) on the performance of models built and trained using our proposed approach. The fan-in defines the number of active branches feeding each residual block. For this experiment we use a model obtained by stacking L = 6 multi-branch residual modules, each having cardinality C = 8 (number of branches in each module). We use residual blocks consisting of 3 convolutional layers with a bottleneck implementing dimensionality reduction on the number of feature channels, as shown in Figure 1. The bottleneck for this experiment was set to w = 4. Since each residual block consists of 3 layers, the total depth of the network in terms of learnable layers is D = 2 + 3L = 20. We trained and tested this architecture using different fan-in values: K = 1, ..., 8. Note that varying K does not affect the number of parameters. Thus, all these models have the same learning capacity.
(Figure 6: the fixed connectivity of ResNeXt (left) versus the connectivity learned by our method (right) using K = 1. Each green square is a residual block; each row of C = 8 squares is a multi-branch module. The network consists of a stack of L = 9 modules. Arrows indicate pathways connecting residual blocks of adjacent modules. In each net, the top red circle is a convolutional layer and the bottom circle is the final fully-connected layer. It can be noticed that MASKCONNECT learns sparse connections. The squares without in/out edges are those deemed superfluous by our algorithm and can be pruned at the end of learning. This gives rise to a branching factor that varies along the depth of the net.) The results are shown in Figure 5. We can see that the best accuracy is achieved by connecting each residual block to K = 4 branches out of the total C = 8 in each module. Using a very low or very high fan-in yields lower accuracy. Note that when setting K = C, there is no need to learn the masks. In this case each mask is simply replaced by an element-wise addition of the outputs from all the branches. This renders the model equivalent to ResNeXt BID31, which has fixed connectivity. Based on the results of Figure 5, in all our experiments below we use K = 4, since it gives the best accuracy, but also K = 1, since it gives high sparsity which, as we will see shortly, implies savings in the number of parameters. Varying the architectures. In Table 1 we show the classification accuracy achieved with different architectures (the details of each architecture are listed in the Appendix). For each architecture we report results obtained using MASKCONNECT with fan-in K = 1 and K = 4. We also include the accuracy achieved with full (as opposed to learned) connectivity, which corresponds to ResNeXt. These results show that learning the connectivity produces consistently higher accuracy than using fixed connectivity, with accuracy gains of up to 2.2% compared to the state-of-the-art ResNeXt model. We note that these improvements in accuracy come at little computational training cost: the average training time overhead for learning masks and weights is about 39% using our unoptimized implementation, compared to learning only the weights given a fixed connectivity. Additionally, for each architecture we include models trained using sparse random connectivity (Fixed-Random). For these models, each mask is set to have K = 4 randomly-chosen active connections, and the connectivity is kept fixed during learning of the parameters. We can notice that the accuracy of these nets is considerably lower compared to our models, despite having the same connectivity density (K = 4). This shows that the improvements of our approach over ResNeXt are not due to sparser connectivity but are rather due to learned connectivity. Parameter savings. Our proposed approach provides the benefit of automatically identifying during training residual blocks that are unnecessary. At the end of the training, the unused residual blocks can be pruned away. This yields savings in the number of parameters to store and in test-time computation. In Table 1, columns Train and Test under Params show the original number of parameters (used during training) and the number of parameters after pruning (used at test-time). Note that for the biggest architecture, our approach using K = 1 yields a parameter saving of 40% compared to ResNeXt with full connectivity (20.5M vs 34.4M), while achieving the same accuracy. Thus, in summary, using fan-in K = 4 gives models that have the same number of parameters as ResNeXt but yield higher accuracy; using fan-in K = 1 gives a significant saving in the number of parameters and accuracy on par with ResNeXt. Model with real-valued masks.
We have also attempted to learn our models using real-valued masks, by computing the tensors in the forward and backward propagation with respect to the real-valued masks m̃^(i)_j ∈ [0, 1]^C rather than the binary vectors m^(i)_j ∈ {0, 1}^C. However, we found this variant to yield consistently lower results compared to our models using binary masks. For example, for model {D = 29, w = 8, C = 8} the best accuracy achieved with real-valued masks is 1.93% worse compared to that obtained with binary masks. In particular, we observed that for this variant the real-valued masks change little over training even when using large learning rates. Conversely, performing the forward and backward propagation using stochastically-sampled binary masks yields a larger exploration of connectivities and results in bigger changes of the auxiliary real-valued masks, leading to better connectivity learning.

[Table 1 caption: CIFAR-100 accuracies (single crop) achieved by different architectures trained using the predefined full connectivity of ResNeXt (Fixed-Full) versus the connectivity learned by our algorithm (Learned). We also include models trained using random, fixed connectivity (Fixed-Random) defined by setting K = 4 random active connections per branch. Each model was trained 4 times, using different random initializations. For each model we report the best test performance as well as the mean test performance computed from the 4 runs. For our method, we report performance using K = 1 as well as K = 4. We also list the number of parameters used during training (Params-Train) and the number of parameters obtained after pruning the unused blocks (Params-Test). Our learned connectivity using K = 4 produces accuracy gains of up to 2.2% compared to the strong ResNeXt model, while using K = 1 yields results equivalent to ResNeXt but induces a significant reduction in the number of parameters at test time (a saving of 40% for model {29, 64, 8}). The table columns are Connectivity, Params, and Accuracy (%); the numeric entries are not recoverable here.]

Visualization of the learned connectivity. FIG6 provides an illustration of the connectivity learned by MASKCONNECT for K = 1 versus the fixed connectivity of ResNeXt for model {D = 29, w = 8, C = 8}. While ResNeXt feeds the same input to all blocks of a module, our algorithm learns different input pathways for each block and yields a branching factor that varies along depth.

Finally, we evaluate our approach on the large-scale ImageNet 2012 dataset BID4, which includes images of 1000 classes. We train our approach on the training set (1.28M images) and evaluate it on the validation set (50K images). In TAB4, we report the Top-1 and Top-5 accuracies for three different architectures. For these experiments we set K = C/2. We can observe that for all three architectures, our learned connectivity yields an improvement in accuracy over the fixed connectivity of ResNeXt BID31.

We invite the reader to review the results achieved on CIFAR-10 and Mini-ImageNet in the Appendix. Also on these datasets our algorithm consistently outperforms the ResNeXt models based on fixed connectivity, with accuracy gains of up to 3.8%.

Despite their wide adoption, deep networks often require laborious model search in order to yield good results. As a result, significant research effort has been devoted to the design of algorithms for automatic model selection. However, most of this prior work falls within the genre of hyperparameter optimization BID2 BID26 rather than architecture or connectivity learning.
Evolutionary search has been proposed as an interesting framework to learn both the structure as well as the connections in a neural network BID30 BID6 BID22. Architecture search has also been recently formulated as a reinforcement learning problem, with impressive results. Unlike these approaches, our method is limited to learning the connectivity within a predefined architecture, but it does so efficiently, by gradient descent optimization of the learning objective, as opposed to more costly procedures such as evolutionary search or reinforcement learning.

Several authors have proposed learning connectivity by pruning unimportant weights from the network BID18 BID10 BID31 BID9 BID12. However, these prior methods operate in stages, where initially the network with full connectivity is learned and then connections are greedily removed according to an importance criterion. In PathNet BID5, the connectivity within a given architecture was searched via evolution. Compared to these prior approaches, our work provides the advantage of learning the connectivity by direct global optimization of the loss function of the problem at hand, rather than by greedy optimization of a proxy criterion or by evolution.

Our technical approach shares similarities with the "Shake-Shake" regularization recently introduced in unpublished work BID7. This procedure was demonstrated on two-branch ResNeXt models and consists in randomly scaling the tensors produced by parallel branches during each training iteration, while at test time the network uses uniform weighting of tensors. Conversely, our algorithm learns an optimal binary scaling of the parallel tensors with respect to the training objective and uses the resulting network with sparse connectivity at test time.

Our work is also related to approaches that learn a hierarchical structure in the last one or two layers of a network in order to obtain distinct features for different categories BID20 BID1. Differently from these methods, our algorithm learns connections efficiently at all depths in the network, thus optimizing over a much larger family of connectivity models. While our algorithm is limited to optimizing the connectivity structure within a predefined architecture, Adams et al. BID0 proposed a nonparametric Bayesian approach that searches over an infinite network using MCMC. Saxena and Verbeek BID23 introduced convolutional neural fabrics, which are learnable 3D trellises that locally connect response maps at different layers of a CNN. Similarly to our work, they enable optimization over an exponentially large family of connectivities, albeit one different from those considered here.

In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The problem is formulated as a single joint optimization over the weights and the branch connections of the model. We tested our approach on challenging image categorization benchmarks where it led to significant accuracy improvements over the state-of-the-art ResNeXt model. An added benefit of our approach is that it can automatically identify superfluous blocks, which can be pruned without impact on accuracy, for more efficient testing and for reducing the number of parameters to store. While our experiments were focused on a particular multi-branch architecture (ResNeXt) and a specific form of building block (residual block), we expect the benefits of our approach to extend to other modules and network structures.
For example, it could be applied to learn the connectivity of skip-connections in DenseNets BID14, which are currently based on predefined connectivity rules. In this paper, our masks perform non-parametric additive aggregation of the branch outputs. It would be interesting to experiment with learnable (parametric) aggregations of the outputs from the individual branches. Our approach is limited to learning connectivity within a given, fixed architecture. Future work will explore the use of learnable masks for architecture discovery.

[Algorithm excerpt (training pseudocode): normalize the real-valued mask to sum up to 1, m̃^(i)_j = m^(i)_j / Σ_k m^(i)_j,k; set the active binary mask based on the drawn samples; compute the gradient of each entry of the mask, given the branch activations of the previous and current modules.]

The CIFAR-10 dataset consists of color images of size 32x32. The training set contains 50,000 images, the testing set 10,000 images. Each image in CIFAR-10 is categorized into one of 10 possible classes. In Table 3, we report the performance of different models trained on CIFAR-10. From these results we can observe that our models using learned connectivity achieve consistently better performance than the equivalent models trained with the fixed connectivity BID31.

[Table 3: CIFAR-10 accuracies (single crop) achieved by different multi-branch architectures trained using the predefined connectivity of ResNeXt (Fixed-Full) versus the connectivity learned by our algorithm (Learned). Each model was trained 4 times, using different random initializations. For each model we report the best test performance as well as the mean test performance computed from the 4 runs. Recoverable entries: Fixed-Full, K=8 BID31: 91.39 (91.13±0.11); Learned, K=4: 92.85 (92.76±0.10).]

Mini-ImageNet is a subset of the full ImageNet BID4 dataset. It was used in BID29 BID21. It is created by randomly selecting 100 classes from the full ImageNet BID4. For each class, 600 images are randomly selected. We use 500 examples per class for training, and the other 100 examples per class for testing. The selected images are resized to size 84x84 pixels as in BID29 BID21. The advantage of this dataset is that it poses the recognition challenges typical of the ImageNet photos, but at the same time it does not require the powerful resources needed to train on the full ImageNet dataset. This also allows us to include the additional baselines involving random fixed connectivity (Fixed-Random).

We report the performance of different models trained on Mini-ImageNet in TAB6. From these results, we see that our models using learned connectivity with fan-in K=4 yield a clear accuracy gain over the same models trained with the fixed full connectivity of ResNeXt BID31. The absolute improvement (in Top-1 accuracy) is 3.87% for the 20-layer network and 3.17% for the 29-layer network. We can notice that the accuracy of the models with fixed random connectivity (Fixed-Random) is considerably lower compared to our nets with learned connectivity, despite having the same connectivity density (K = 4). This shows that the improvement of our approach over ResNeXt is not due to sparser connectivity but is rather due to learned connectivity.
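To make the pruning step mentioned earlier concrete, the sketch below identifies blocks whose output is never selected by any binary mask of the following module; such blocks can be removed at test time without changing the network output. The per-module attributes are our own hypothetical naming, not the authors' code.

```python
def prunable_blocks(modules):
    """Return (module_index, branch_index) pairs for blocks whose output feeds
    no active connection in the next module and can be pruned after training.

    modules: list of multi-branch modules; modules[i + 1].binary_masks is
             assumed to be a 0/1 matrix with one row per block of module i+1
             and one column per branch (block) of module i.
    """
    prunable = []
    for i in range(len(modules) - 1):
        next_masks = modules[i + 1].binary_masks
        num_branches = next_masks.shape[1]
        for k in range(num_branches):
            if next_masks[:, k].sum() == 0:  # branch k is never used downstream
                prunable.append((i, k))
    return prunable
```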
The plot in FIG9 shows how the number of active branches varies as a function of the module depth for model {D = 29, w = 4, C = 8} trained on CIFAR-100. For K = 1, we can observe that the number of active branches tends to be larger for deep modules (closer to the output layer) compared to early modules (closer to the input). We observed this phenomenon consistently for all architectures. This suggests that having many parallel threads of computation is particularly important in deep layers of the network. Conversely, the setting K = 4 tends to produce a fairly uniform number of active branches across the modules, and the number is quite close to the maximum value C. For this reason, there is little saving in terms of the number of parameters when using K = 4, as there are rarely unused blocks. The plot in FIG10 shows the number of active branches as a function of module depth for model {D = 50, w = 4, C = 32} trained on ImageNet, using K = 16.

[TAB7 caption: Inside the brackets we specify the residual block used in each multi-branch module by listing the number of input channels, the size of the convolutional filters, as well as the number of filters (number of output channels). To the right of each bracket we list the cardinality (i.e., the number of parallel branches in the module). ×2 means that the same multi-branch module is stacked twice. The first layer for all models is a convolutional layer with 16 filters of size 3 × 3. The last layer performs global average pooling followed by a softmax.]

The specifications of the architectures used in all our experiments on CIFAR-10 and CIFAR-100 are given in TAB7. Several of these architectures are those presented in the original ResNeXt paper BID31 and are trained using the same setup, including the data augmentation strategy. Four pixels are padded on each side of the input image, and a 32x32 crop is randomly sampled from the padded image or its horizontal flip, with the per-pixel mean subtracted BID17. For testing, we use the original 32x32 image. The stacks have output feature maps of size 32, 16, and 8, respectively. The models are trained on 8 GPUs with a mini-batch size of 128 (16 per GPU), with a weight decay of 0.0005 and momentum of 0.9. We adopt four incremental training phases with a total of 320 epochs. In phase 1 we train the model for 120 epochs with a learning rate of 0.1 for the convolutional and fully-connected layers, and a learning rate of 0.2 for the masks. In phase 2 we freeze the connectivity by setting as active connections for each block those corresponding to its top-K values in the masks. With this fixed learned connectivity, we finetune the model from phase 1 for 100 epochs with a learning rate of 0.1 for the weights. Then, in phase 3 we finetune the weights of the model from phase 2 for 50 epochs with a learning rate of 0.01, using again the fixed learned connectivity from phase 1. Finally, in phase 4 we finetune the weights of the model from phase 3 for 50 epochs with a learning rate of 0.001.

The architectures for our ImageNet experiments are those specified in the original ResNeXt paper BID31.

[Table 6 caption: Mini-ImageNet architectures with varying depth (D) and bottleneck width (w). Inside the brackets we specify the residual block used in each multi-branch module by listing the number of input channels, the size of the convolutional filters, as well as the number of filters (number of output channels). To the right of each bracket we list the cardinality (C) (i.e., the number of parallel branches in the module). ×2 means that the same multi-branch module is stacked twice.]

Also for these experiments, we follow the data augmentation strategy described in BID31. The input image has size 224x224 and is randomly cropped from the resized original image. We use a mini-batch size of 256 on 8 GPUs (32 per GPU), with a weight decay of 0.0001 and a momentum of 0.9. We use four incremental training phases with a total of 120 epochs.
In phase 1 we train the model for 30 epochs with a learning rate of 0.1 for the convolutional and fully-connected layers, and a learning rate of 0.2 for the masks. In phase 2 we finetune the model from phase 1 for another 30 epochs with a learning rate of 0.1 and a learning rate of 0.0 for the masks (i.e., we use the fixed connectivity learned in phase 1). In phase 3 we finetune the weights from phase 2 for 30 epochs with a learning rate of 0.01, with the learning rate of the masks kept at 0.0. Finally, in phase 4 we finetune the weights from phase 3 for 30 epochs with a learning rate of 0.001, while the learning rate of the masks is still set to 0.0.

For the experiments on the Mini-ImageNet dataset, a 64x64 crop is randomly sampled from the scaled 84x84 image or its horizontal flip, with the per-pixel mean subtracted BID17. For testing, we use the center 64x64 crop. The specifications of the models are identical to the CIFAR-100 models used in the previous subsection, except that the first input convolutional layer in the network is followed by a max pooling layer. The models are trained on 8 GPUs with a mini-batch size of 256 (32 per GPU), with a weight decay of 0.0005 and momentum of 0.9. As when training on CIFAR-100, we also adopt four incremental training phases, with a total of 320 epochs.
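As an illustration, the four-phase pattern can be expressed as a simple configuration loop; the sketch below uses the CIFAR-100 numbers from the earlier subsection, and freeze_connectivity and train_one_epoch are hypothetical helpers of our own, not the authors' code.

```python
# (epochs, weight_lr, mask_lr) for the four CIFAR-100 phases described above.
PHASES = [
    (120, 0.1,   0.2),  # phase 1: jointly learn weights and real-valued masks
    (100, 0.1,   0.0),  # phase 2: fix top-K connectivity, fine-tune weights
    (50,  0.01,  0.0),  # phase 3: fine-tune with a lowered learning rate
    (50,  0.001, 0.0),  # phase 4: final fine-tuning
]

def run_schedule(model, train_one_epoch):
    for epochs, weight_lr, mask_lr in PHASES:
        if mask_lr == 0.0:
            # Freeze connectivity: keep only the top-K entries of each
            # real-valued mask as active connections (assumed helper).
            model.freeze_connectivity()
        for _ in range(epochs):
            train_one_epoch(model, weight_lr=weight_lr, mask_lr=mask_lr)
```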
In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The approach is evaluated on image categorization where it consistently yields accuracy gains over state-of-the-art models that use fixed connectivity.
Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret. In particular, little is known about how they represent language in their intermediate layers. In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns. In order to quantitatively analyze this intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text. We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.

Understanding and interpreting how deep neural networks process natural language is a crucial and challenging problem. While deep neural networks have achieved state-of-the-art performance in neural machine translation (NMT) BID18 BID21, sentiment classification BID23, and many more tasks, the sequence of non-linear transformations makes it difficult for users to make sense of any part of the whole model. Because of their lack of interpretability, deep models are often regarded as hard to debug and unreliable for deployment, not to mention that they also prevent the user from learning how to make better decisions based on the model's outputs.

An important research direction toward interpretable deep networks is to understand what their hidden representations learn and how they encode informative factors when solving the target task. Some studies, including BID8, have researched what information is captured by individual or multiple units in visual representations learned for image recognition tasks. These studies showed that some of the individual units are selectively responsive to specific visual concepts, as opposed to getting activated in an uninterpretable manner. By analyzing individual units of deep networks, not only were they able to obtain more fine-grained insights about the representations than by analyzing representations as a whole, but they were also able to find meaningful connections to various problems such as generalization of networks BID5, generating explanations for the decision of the model BID25 BID9 BID26, and controlling the output of generative models.

Since these studies of unit-level representations have mainly been conducted on models learned for computer-vision-oriented tasks, little is known about the representations of models learned from natural language processing (NLP) tasks. Several studies that have previously analyzed individual units of natural language representations assumed that they align with a predefined set of specific concepts, such as sentiment present in the text BID12, text lengths, quotes and brackets. They discovered the emergence of certain units that selectively activate to those specific concepts. Building upon these lines of research, we consider the following question: What natural language concepts are captured by each unit in the representations learned from NLP tasks?

[FIG11 caption: We discover the most activated sentences and the concepts aligned to the units in hidden representations of deep convolutional networks. Aligned concepts appear frequently in the most activated sentences, implying that those units respond selectively to specific natural language concepts.]
To answer this question, we propose a simple but highly effective concept alignment method that can discover which natural language concepts are aligned to each unit in the representation. Here we use the term unit to refer to each channel in a convolutional representation, and natural language concepts to refer to the grammatical units of natural language that preserve meaning, i.e., morphemes, words, and phrases. Our approach first identifies the most activated sentences per unit and breaks those sentences into these natural language concepts. It then aligns specific concepts to each unit by measuring the activation value of replicated text, which indicates how much each concept contributes to the unit activation. This method also allows us to systematically analyze the concepts carried by units in diverse settings, including the depth of layers, the form of supervision, and data-specific or task-specific dependencies. The contributions of this work can be summarized as follows:

• We show that the units of deep CNNs learned on NLP tasks can act as natural language concept detectors. Without any additional labeled data or re-training process, we can discover, for each unit of the CNN, natural language concepts including morphemes, words and phrases that are present in the training data.
• We systematically analyze what information is captured by units in the representation across multiple settings by varying network architectures, tasks, and datasets. We use VDCNN for sentiment and topic classification tasks on the Yelp Reviews, AG News BID23, and DBpedia ontology datasets BID4, and ByteNet for translation tasks on the Europarl BID3 and News Commentary BID20 datasets.
• We also analyze how the aligned natural language concepts evolve as they get represented in deeper layers. As part of our analysis, we show that our interpretation of learned representations can be utilized for designing network architectures with fewer parameters but with performance comparable to baseline models.

Recent works on interpreting hidden representations at the unit level were mostly motivated by their counterparts in computer vision. In the computer vision community, BID24 retrieved, for each unit of a CNN trained on image recognition tasks, the image samples with the highest unit activation. They used these retrieved samples to show that visual concepts like color, texture and object parts are aligned to specific units, where the concepts were aligned to the units by human annotators. A subsequent line of work introduced the BRODEN dataset, which consists of pixel-level segmentation labels for diverse visual concepts, and then analyzed the correlation between the activation of each unit and such visual concepts. In that work, although aligning concepts absent from the BRODEN dataset requires additional labeled images or human annotation, the authors quantitatively showed that some individual units respond to specific visual concepts. On the other hand, BID8 BID16 discovered visual concepts aligned to each unit by optimizing a random initial image to maximize the unit activation via gradient descent. In these cases, the resulting interpretation of each unit is in the form of optimized images, and not in natural language form as in the aforementioned methods. However, this continuous form of interpretation makes it hard to perform further quantitative analyses of discrete properties of representations, such as quantifying characteristics of representations with layer depth and correlations between the interpretability of a unit and regularization BID25.
Nevertheless, these methods have the advantage that the results are not constrained to a predefined set of concepts, giving flexibility as to which concepts are captured by each unit.

In the NLP domain, studies including BID19 BID11 BID14 analyzed the internal mechanisms of deep models used for NLP and found intriguing properties that appear in units of hidden representations. Among those studies, the closest one to ours is BID12, who defined a unit as each element in the representation of an LSTM learned for language modeling and found that the concept of sentiment was aligned to a particular unit. Compared with these previous studies, we focus on discovering a much wider variety of natural language concepts, including any morphemes, words, and phrases found in the training data. To the best of our knowledge, this is the first attempt to discover concepts among all those that exist in the form of natural language in the training corpus. By extending the scope of detected concepts to the meaningful building blocks of natural language, we provide insights into how various linguistic features are encoded by the hidden units of deep representations.

Most previous work that analyzes the learned representations of NLP tasks focused on constructing downstream tasks that predict concepts of interest. A common approach is to measure the performance of a classification model that predicts the concept of interest, to see whether that concept is encoded in the representation of an input sentence. For example, BID27 proposed several probing tasks to test whether a (non-)linear regression model can predict syntactic or semantic information from representations learned on translation tasks, or from skip-thought or word embedding vectors. BID15 constructed regression tasks that predict labels such as voice, tense, part-of-speech tag, and morpheme from the encoder representation of a model learned on a translation task.

Compared with previous work, our contributions can be summarized as follows. (1) By identifying the role of individual units, rather than analyzing the representation as a whole, we provide a more fine-grained understanding of how the representations encode informative factors in the training data. (2) Rather than limiting the linguistic features within the representation to be discovered, we focus on covering the concepts of the fundamental building blocks of natural language (morphemes, words, and phrases) present in the training data, providing more flexible interpretation without relying on a predefined set of concepts. (3) Our concept alignment method does not need any additional labeled data or re-training process, so it can always provide deterministic interpretation using only the training data.

We focus on convolutional neural networks (CNNs), particularly their character-level variants. CNNs have shown great success in various natural language applications, including translation and sentence classification BID23. Compared to deep architectures based on fully connected layers, CNNs are natural candidates for unit-level analysis because their channel-level representations are reported to work as templates for detecting concepts.

Our approach for aligning natural language concepts to units is summarized as follows. We first train a CNN model for each natural language task (e.g. translation and classification) and retrieve the training sentences that highly activate specific units.
Interestingly, we discover morphemes, words and phrases that appear dominantly within these retrieved sentences, implying that those concepts have a significant impact on the activation value of the unit. Then, we find the set of concepts that contribute strongly to the unit activation by measuring the activation value of each replicated candidate concept, and align them to the unit.

Once we train a CNN model for a given task, we feed all sentences S in the training set to the CNN model again and record their activations. Given a layer and a sentence s ∈ S, let a_u(s) denote the activation of unit u for sentence s, averaged over the l entries of the unit's feature map and normalized by a normalizer Z. We then retrieve the top K training sentences per unit with the highest mean activation a_u. Interestingly, some natural language patterns such as morphemes, words and phrases frequently appear in the retrieved sentences (see FIG11), implying that those concepts might have a large attribution to the activation value of that unit. We propose a simple approach for identifying these concepts, as follows.

For constructing candidate concepts, we parse each of the top K sentences with a constituency parser BID2. Within the constituency-based parse tree, we define the candidate concepts as all terminal and nonterminal nodes (e.g. from the sentence John hit the balls, we obtain the candidate concepts {John, hit, the, balls, the balls, hit the balls, John hit the balls}). We also break each word into morphemes using a morphological analysis tool BID22 and add them to the candidate concepts (e.g. from the word balls, we obtain the morphemes {ball, s}). We repeat this process for every top K sentence and build a set of candidate concepts for unit u, denoted as C_u = {c_1, ..., c_N}, where N is the number of candidate concepts of the unit.

Next, we measure how each candidate concept contributes to the unit's activation value. For normalizing the degree of the input signal to the unit activation, we create a synthetic sentence by replicating each candidate concept so that its length is identical to the average length of all training sentences (e.g. the candidate concept the ball is replicated as the ball the ball the ball ...). The replicated sentences are denoted as R = {r_1, ..., r_N}; each r_n ∈ R is forwarded to the CNN, and its activation value for unit u is measured as a_u(r_n), averaged over the l entries. Finally, the degree of alignment (DoA) between a candidate concept c_n and a unit u is defined as follows:

DoA_{u,c_n} = a_u(r_n).

In short, the DoA measures the extent to which unit u's activation is sensitive to the presence of candidate concept c_n. If a candidate concept c_n appears in the top K sentences and the unit's activation value a_u is highly responsive to c_n, then DoA_{u,c_n} gets large, suggesting that the candidate concept c_n is strongly aligned to unit u. Finally, for each unit u, we define the set of its aligned concepts C*_u = {c*_1, ..., c*_M} as the M candidate concepts with the largest DoA values in C_u. Depending on how we set M, we can detect different numbers of concepts per unit. In this experiment, we set M to 3.

We analyze representations learned on three classification and four translation datasets, shown in Table 1. Training details for each dataset are available in Appendix B. We then focus on the representations in each encoder layer of ByteNet and each convolutional layer of VDCNN because, as BID6 pointed out, the representation of the decoder (the output layer in the case of classification) is specialized for predicting the output of the target task rather than for learning the semantics of the input text.
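The DoA computation can be sketched as follows; this is a minimal illustration under our own assumptions (a character-level model exposing per-layer activations via a hypothetical layer_activations method that returns an (l, d) array of unit activations), not the authors' released code.

```python
import torch

def degree_of_alignment(model, unit, concept, avg_len):
    """DoA(u, c_n) = a_u(r_n): the mean activation of unit u on a synthetic
    sentence r_n built by replicating concept c_n up to the average
    training-sentence length."""
    # Replicate the candidate concept (a sequence of character indices).
    replicated = (concept * (avg_len // len(concept) + 1))[:avg_len]
    with torch.no_grad():
        acts = model.layer_activations(replicated)  # assumed shape: (l, d_units)
    return acts[:, unit].mean().item()              # average over the l entries

def align_concepts(model, unit, candidates, avg_len, M=3):
    """Return the M candidate concepts with the largest DoA for this unit."""
    scored = [(c, degree_of_alignment(model, unit, c, avg_len)) for c in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:M]
```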
To quantitatively evaluate how well our approach aligns concepts, we measure how selectively each unit responds to its aligned concepts. Motivated by BID5, we define the concept selectivity of a unit u to the set of concepts C*_u that our alignment method detects, as follows:

Sel_u = (µ+ − µ−) / (µ+ + µ−),

where S denotes all sentences in the training set, and µ+ = (1/|S+|) Σ_{s ∈ S+} a_u(s) is the average value of the unit activation when forwarding a set of sentences S+, which is defined as one of the following:

• replicate: S+ contains the sentences created by replicating each concept in C*_u. As before, the sentence length is set to the average length of all training sentences for a fair comparison.
• one instance: S+ contains just one instance of each concept in C*_u. Thus, the input sentence length is generally shorter than those of the others.
• inclusion: S+ contains the training sentences that include at least one concept in C*_u.
• random: S+ contains randomly sampled sentences from the training data.

Likewise, µ− = (1/|S−|) Σ_{s ∈ S−} a_u(s) is the average value of the unit activation when forwarding S−, which consists of the training sentences that do not include any concept in C*_u. Intuitively, if unit u's activation is highly sensitive to C*_u (i.e. the concepts found by our alignment method) and not to other factors, then Sel_u gets large; otherwise, Sel_u is near 0.

FIG0 shows the mean and variance of the selectivity values over all units learned on each dataset, for the four S+ categories. Consistent with our intuition, in all datasets the mean selectivity of the replicate set is the highest with a significant margin, that of the one instance and inclusion sets is the runner-up, and that of the random set is the lowest. These results support our claims that units are selectively responsive to specific concepts and that our method successfully aligns such concepts to units. Moreover, the mean selectivity of the replicate set is higher than that of the one instance set, which implies that a unit's activation increases as its concepts appear more often in the input text.
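A sketch of the selectivity computation defined above is given below; unit_activation is a hypothetical helper returning the mean activation a_u(s), and the small epsilon guarding against division by zero is our own addition.

```python
def selectivity(unit_activation, s_plus, s_minus, eps=1e-8):
    """Sel_u = (mu+ - mu-) / (mu+ + mu-): mu+ is the mean activation over one
    of the four S+ variants above (replicate, one instance, inclusion, random);
    mu- is the mean over sentences containing none of the aligned concepts."""
    mu_plus = sum(unit_activation(s) for s in s_plus) / len(s_plus)
    mu_minus = sum(unit_activation(s) for s in s_minus) / len(s_minus)
    return (mu_plus - mu_minus) / (mu_plus + mu_minus + eps)
```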
FIG2 shows examples of the top K sentences and the aligned concepts discovered by our method for selected units. For each unit, we find the top K = 10 sentences that activate it the most in several encoding layers of ByteNet and VDCNN, and select some of them (only up to five sentences are shown due to space constraints). We observe that some patterns appear frequently within the top K sentences. For example, in the top K sentences that activate unit 124 of the 0th layer of ByteNet, the concepts '(', ')', '-' appear in common, while the concepts soft, software, wi appear frequently in the sentences for unit 19 of the 1st layer of VDCNN. These results qualitatively show that individual units are selectively responsive to specific natural language concepts.

More interestingly, we discover that many units can capture specific meanings or syntactic roles beyond superficial, low-level patterns. For example, unit 690 of the 14th layer in ByteNet captures the concepts (what, who, where), all of which play a similar grammatical role. On the other hand, unit 224 of the 14th layer in ByteNet and unit 53 of the 0th layer in VDCNN each capture semantically similar concepts, with the ByteNet unit detecting the meaning of certainty in knowledge (sure, know, aware) and the VDCNN unit detecting years. This suggests that, although we train character-level CNNs by feeding sentences in the form of discrete symbols (i.e. character indices), individual units can capture natural language concepts that share a similar semantic or grammatical role. More quantitative analyses of such concepts are available in Appendix E.

[FIG2 excerpt — top activated sentences for selected units:
• "That is not the subject of this communication." / "That is the purpose of this communication." / "I would like to ask the Commissioner for a reply." / "This is impossible without increasing efficiency." / "Will we be able to achieve this, Commissioner?" — Layer 06, Unit 396, aligned concepts: of this communication, will, communication
• "qualcomm has inked a licensing agreement with Microsoft" / "peoplesoft wants its customers to get aggressive with software upgrades to increase efficiency." / "provide its customers with access to wi-fi hotspots around..." / "realnetworks altered the software for market-leading ipod." / "apple lost one war to microsoft by not licensing its mac..."
• "They know that and we know that." / "I am sure you will understand." / "I am sure you will do this." / "I am confident that we will find a solution."]

We note that there are units that detect concepts more abstract than just morphemes, words, or phrases, and for these units our method tends to align relevant lower-level concepts. For example, in unit 244 of the 3rd layer in VDCNN, while each aligned concept emerges only once in the top K sentences, all top K sentences carry similar nuances. In this case, our method does capture relevant phrase-level concepts (e.g. very disappointing, absolute worst place), indicating that the higher-level nuance (e.g. negativity) is indirectly captured. We note that, because the number of morphemes, words, and phrases present in the training corpus is usually much greater than the number of units per layer, we do not expect to always align every natural language concept in the corpus to one of the units. Our approach thus tends to find concepts that are frequent in the training data or considered more important than others for solving the target task. Overall, these results suggest how input sentences are represented in the hidden layers of the CNN:

• Several units in a CNN learned on NLP tasks respond selectively to specific natural language concepts, rather than getting activated in an uninterpretable way. This means that these units can serve as detectors for specific natural language concepts.
• There are units capturing syntactically or semantically related concepts, suggesting that they model the meaning or grammatical role shared between those concepts, as opposed to superficially modeling each natural language symbol.

Using the concept alignments found earlier, we can visualize how concepts are distributed across layers. FIG3 shows the concepts of the units in the 0th, 1st, and 3rd layers of VDCNN learned on the AG News dataset, and in the 0th, 4th, and 14th layers of the ByteNet encoder learned on the English-to-German Europarl dataset, together with the number of units aligned to each concept. For each layer, we sort the concepts in decreasing order of the number of aligned units and show the 30 most aligned concepts. Recall that, since we align concepts for each unit, there are concepts aligned to multiple units simultaneously. Concept distributions for the other datasets are available in Appendix G.

Overall, we find that data- and task-specific concepts are likely to be aligned to many units. In AG News, since the task is to classify given sentences into the categories World, Sports, Business and Science/Tech, concepts related to these topics commonly emerge. Similarly, we can see that units learned on the Europarl dataset tend to encode some key words (e.g.
vote, propose, environment) in the training corpus.

In computer vision tasks, the visual concepts captured by units in CNN representations learned for image recognition evolve with layer depth: color and texture concepts are emergent in earlier layers, and more abstract concepts like parts and objects are emergent in deeper layers. To confirm whether this also holds for representations learned on NLP tasks, we divide the granularity of natural language concepts into morphemes, words and N-gram phrases (N = 2, 3, 4, 5), and observe the number of units to which they are aligned in different layers. FIG4 shows this trend: in lower layers such as the 0th layer, fewer phrase concepts but more morphemes and words are detected. This is because we use a character-level CNN, whose receptive fields of convolution may not be large enough to detect lengthy phrases. Furthermore, and interestingly, in the translation cases we observe that concepts change significantly in shallower layers (e.g. from the 0th to the 4th), but do not change much from the middle to the deeper layers (e.g. from the 5th to the 14th).

Thus, it remains for us to answer the following question: for the representations learned on translation datasets, why does the concept granularity not evolve much in deeper layers? One possibility is that the capacity of the network is large enough that the representations in the middle layers are already sufficiently informative to solve the task. To validate this hypothesis, we re-train ByteNet from scratch while varying only the layer depth of the encoder and fixing all other conditions. We record the BLEU scores on the validation data, as shown in FIG5. The performance of the translation model does not change much with more than six encoder layers, but it drops significantly for models with fewer than 4 encoder layers. This trend coincides with the result from FIG4 that the evolution of concept granularity stops around the middle-to-higher layers. This shared pattern suggests that about six encoder layers are enough to encode the informative factors in the given datasets and to perform optimally on the translation task. For deeper models, this may suggest that the middle layers' representations are already informative enough to encode the input text, and our results may partly coincide with those of BID6, which show that the representations of intermediate layers are more transferable than those of deeper layers in language tasks, unlike in computer vision, where deeper layers are usually more useful and discriminative.

We show how many units each concept is aligned to per layer in Section 4.4 and Appendix G. We observe that the concepts do not appear uniformly; some concepts are aligned to many units, while others are aligned to few or even none. Then the following question arises: what makes certain concepts emerge more than others?

Two possible hypotheses may explain the emergence of dominant concepts. First, concepts with a higher frequency in the training data may be aligned to more units. FIG6-(a) shows the correlation between the frequency of each concept in the training corpus and the number of units to which each concept is aligned in the last layer of the topic classification model learned on the AG News dataset. Second, concepts that have more influence on the objective function (expected loss) may be aligned to more units. We can measure the effect of a concept c on the task performance as the Delta of Expected Loss (DEL), as follows:

DEL(c) = (1/|S|) Σ_{s ∈ S} [ L(Occ_c(s), y) − L(s, y) ],

where S is the set of training sentences, Y is the set of ground-truths, and L(s, y) is the loss function for an input sentence s and its label y ∈ Y. Occ_c(s) is an occlusion of concept c in sentence s, where we replace concept c by dummy character tokens that have no meaning. If sentence s does not include concept c, Occ_c(s) equals the original sentence s. As a result, DEL(c) measures the impact of concept c on the loss function, where a large positive value implies that concept c plays an important role in solving the target task.
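The DEL measure can be sketched as below; we assume, hypothetically, that sentences are lists of character indices, that the model maps such a list to class scores, and that DUMMY is the index of a meaningless dummy character.

```python
import torch

DUMMY = 0  # assumed index of a meaningless dummy character

def occlude(sentence, concept):
    """Occ_c(s): replace every occurrence of concept c in sentence s with
    dummy tokens; if c does not occur, s is returned unchanged."""
    out, n = list(sentence), len(concept)
    i = 0
    while i <= len(out) - n:
        if out[i:i + n] == list(concept):
            out[i:i + n] = [DUMMY] * n
            i += n
        else:
            i += 1
    return out

def delta_expected_loss(model, loss_fn, data, concept):
    """DEL(c): mean increase in the loss when c is occluded from the inputs."""
    deltas = []
    with torch.no_grad():
        for s, y in data:
            base = loss_fn(model(s), y).item()
            occluded = loss_fn(model(occlude(s, concept)), y).item()
            deltas.append(occluded - base)
    return sum(deltas) / len(deltas)
```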
We proposed a simple but highly effective concept alignment method for character-level CNNs that confirms that individual units of the hidden layers serve as detectors of natural language concepts. Using this method, we analyzed the characteristics of units on multiple datasets for classification and translation tasks. Consequently, we shed light on how deep representations capture natural language and how they vary under various conditions. An interesting future direction is to extend the concept coverage from natural language to more abstract forms such as sentence structure, nuance, and tone. Another direction is to quantify the properties of individual units in other models widely used in NLP tasks. In particular, combining our definition of concepts with the attention mechanism (e.g. Bahdanau et al.) could be a promising direction, because it can reveal how the representations are attended to by the model to capture concepts, helping us better understand the decision-making process of popular deep models.

In Section 3.2, we define the Degree of Alignment (DoA) between a concept c_n and a unit u as the activation value of unit u for the replication of c_n. We tried many alternatives while working on the DoA metric, but many of them produced biased concept alignments, for several reasons. We here describe the alternatives we tried and the reasons for their failure.

Point-wise Mutual Information (PMI) is a measure of association used in information theory and statistics. The PMI of a pair of samples x and y drawn from random variables X and Y quantifies the discrepancy between the probability of their coincidence as follows:

pmi(x; y) = log [ p(x, y) / (p(x) p(y)) ].

We then define the DoA between a candidate concept c_n and a unit u using PMI as follows:

DoA_{u,c_n} = pmi(u; c_n) = log [ p(c_n | u) / p(c_n) ],

where p(c_n | u) is the frequency of c_n in the top K sentences of unit u, and p(c_n) is its frequency in the whole corpus.

However, this metric has a bias of always preferring lengthy concepts, even in earlier layers, which is not plausible considering the receptive field of the convolution. Our intuition for this bias is consistent with BID13: it is a well-known problem of PMI that it tends to give very high association scores to pairs involving low-frequency items, as the denominator is small in such cases. If a certain concept c_n in the top K sentences is very lengthy, then its frequency in the corpus p(c_n) is very small, and pmi(u; c_n) becomes large regardless of the correlation between u and c_n.
We also tested concept alignment with the following concept occlusion method. For each of the top K sentences, we replace a candidate concept c_n by dummy character tokens which have no meaning, forward the sentence to the model, and measure the reduction of the unit activation value. We repeat this for every candidate concept in the sentences; as a result, we can identify which candidate concepts greatly reduce the unit activation values. We thus define the concepts aligned to each unit as the candidate concepts that consistently lower the unit activation across the top K sentences. More formally, for each unit u, let S = {s_1, ..., s_K} be the top K activated sentences. Since we occlude each candidate concept in the sentences, we define the set of candidate concepts C = {c_1, ..., c_N} obtained from parsing each sentence in S. We define the degree of alignment (DoA) between a concept c_n ∈ C and a unit u as:

DoA_{u,c_n} = (1/Z) Σ_{s ∈ S} [ a_u(s) − a_u(Occ_{c_n}(s)) ] · 1(c_n ∈ s),

where Z is a normalizing factor, a_u indicates the mean activation of unit u, Occ_{c_n}(s) is the sentence s with candidate concept c_n occluded, and 1(c_n ∈ s) is an indicator of whether c_n is included in the sentence s. In short, this DoA measures how much a candidate concept contributes to the activation of the unit's top K sentences. If a candidate concept c_n appears in the top K sentences S and greatly reduces the activation of unit u, then DoA_{u,c_n} gets large, implying that c_n is strongly aligned to unit u.

Unfortunately, this metric cannot fairly compare the attribution of several candidate concepts. For example, consider two concepts c_1 = hit and c_2 = hit the ball included in one sentence. Occluding c_2 may give a relatively larger decrement in the unit activation value than occluding c_1, since c_2 includes c_1. For this reason, the occlusion-based metric is unnecessarily dependent on the length of the concept, rather than on its attribution.

Note that the inclusion selectivity in Section 4.2 can also be used as a DoA. Recall that the inclusion selectivity is calculated as in Eq. 2. In this case, µ+ = (1/|S+|) Σ_{s ∈ S+} a_u(s) is the average value of the unit activation when forwarding a set of sentences S+, where S+ denotes the sentences including the candidate concept c_n. However, this induces a bias similar to that of Section A.1: it always prefers lengthy phrases, since such lengthy concepts occur only a few times in the entire corpus. For example, assume that the activation value of unit u for a sentence including a specific lengthy phrase is very high. If that phrase occurs only once over the entire corpus, µ+ equals the activation value of that single sentence, which is relatively much higher than µ+ for other candidate concepts. This error could be alleviated on a very large corpus where every candidate concept occurs often enough that the estimation of µ+ becomes relatively accurate, which is practically not possible.

In Section 3.2, we replicate each candidate concept into the input sentence when computing the DoA in Eq. 1. Since each unit works as a concept detector whose activation value increases with the length of the input sentence (Section 4.2), it is essential to normalize the length of the input for a fair comparison of DoA values between concepts that have different lengths. Without length normalization (i.e. if each input sentence consists of just one instance of the candidate concept), the DoA metric has a bias to prefer lengthy concepts (e.g. phrases), because they typically carry more signal affecting the unit activation than short candidate concepts (e.g. single words).

In this work, we trained a ByteNet for the translation tasks and a VDCNN for the classification tasks, both to analyze the properties of representations learned for language. Training details are as follows.
[FIG8 caption: Mean and variance of the selectivity values with different M = [1, 3, 5, 10], where M is the number of selected concepts per unit. In all settings, the selectivity of the replicate set is the highest, that of the one instance set is the runner-up, and that of the random set is the lowest, near 0.]

Each CNN was learned with the same structure and hyperparameters. Our code is based on a TensorFlow implementation of VDCNN found at https://github.com/zonetrooper32/VDCNN.

In Section 3.2, we set M = 3. Although M is used as a threshold to set how many concepts per unit are considered, different M values have little influence on quantitative results such as the selectivity in Section 4.2. FIG8 shows the mean and variance of the selectivity values with different M = [1, 3, 5, 10]; there is little variation in the overall trend: the selectivity of the replicate set is the highest, that of one instance is the runner-up, and that of random is the lowest.

Whereas some units are sensitive to specific natural language concepts, as shown in Section 4.3, other units are not sensitive to any concept at all. We call such units non-interpretable units, and they deserve to be explored. We first define the unit interpretability of unit u as follows:

Interpretability(u) = 1[ max_n a_u(r_n) ≥ max_{s ∈ S} a_u(s) ],

where S is the set of training sentences, a_u(s) is the activation value of unit u, and r_n is the replicated sentence made up of candidate concept c_n. We define unit u as interpretable when Interpretability(u) equals 1, and otherwise as non-interpretable. The intuition is that if every replicated sentence composed of a single concept has a lower activation value than the top activated sentences, the unit is not sensitive to any single concept as much as to a sequence of different words.

Figure 9 shows the ratio of interpretable units in each layer on several datasets. We observe that more than 90% of the units are interpretable across all layers and all datasets.

[Figure excerpt — example top activated sentences: "At some point, the party is going to end." / "At the other extreme are Chile, Costa Rica, and Uruguay." / "You gather data, do experiments, read, and write." / "Cashews, shrimp, and cocaine." / "Scotland has a parliament, Wales an assembly." / "What would it cost the organic producers?" / "So what exactly were Parliament's requirements?" / "I must also thank the authors of the amendments." / "We have to recognise the difficult cases." / "I shall therefore vote against the proposals."]

One possibility is that some non-interpretable units align concepts that are out of natural language form. For example, for unit 001 in the left of FIG11, we discover that the sentence structure involves many commas in the top activated sentences. Since we limit the candidate concepts to the form of morphemes, words and phrases, such punctuation concepts are hard to detect. Another possibility is that some units may be so-called dead units that are not sensitive to any concept at all. For example, unit 260 in the right of FIG11 has no pattern that appears consistently in the top activated sentences.

We introduced some units whose concepts share a common meaning in Section 4.3. We here use the term concept cluster to refer to the concepts that are aligned to the same unit and have similar semantics or grammatical roles. We analyze how clusters are formed in the units and how they vary with the target task and layer depth. We define the distance between two concepts as the Euclidean distance between their vector-space embeddings. We use fastText pretrained on the Wikipedia dataset to project each concept into the vector space. Since fastText is a character-level N-gram based word embedding, we can universally obtain embeddings for morphemes as well as for words and phrases.
For phrase embedding, we split the phrase into words, project each of them, and average their embeddings. The distance between two clusters is defined as the distance between their centroids. Each central heat map represents the number of times each concept pair is aligned to the same unit. Since the concepts on the x- and y-axes are ordered by the clustering results, if the diagonal blocks (concept clusters) emerge more strongly, the concepts in the same unit are more likely to have similar meanings. In FIG11, the units learned on the classification tasks tend to have stronger concept clusters than those learned on the translation tasks. In particular, the concept clusters are highly evident in the units learned on the DBpedia and AG News datasets. Our intuition is that units benefit more from clustering similar concepts in classification than in translation. That is, in classification, input sentences that have similar concepts tend to belong to the same class label, while in translation, different concepts should be translated to different words or phrases even if they have similar meanings in general.

[FIG11 caption: Concept clusters of the last-layer representations learned on each task. The more distinct the diagonal blocks are, the stronger the tendency that concepts aligned to the same unit share similar meanings or semantics. See Appendix E for details.]

We analyze how concept clusters change by layer in each task. We compute the averaged pairwise distance between the concepts in each layer. We project each concept into the vector space using three pretrained embeddings: (1) GloVe BID10, (2) ConceptNet BID17, and (3) fastText. The GloVe and fastText embeddings are pretrained on the Wikipedia dataset, and ConceptNet is pretrained based on the ConceptNet graph structure. FIG0 shows the averaged pairwise distances in each layer. In all tasks, there is a tendency for the concepts in the same unit to become closer in the vector space as the layer goes deeper. This indicates that individual units in earlier layers tend to capture more basic text patterns or symbols, while units in deeper layers capture more abstract semantics.

We investigated why certain concepts emerge more than others in Section 4.6, where the ByteNet is trained on the English-to-French news dataset. Here, FIG3 shows further results on other datasets. Consistent with our intuition, in all datasets both the document frequency and the delta of expected loss are closely related to the number of units per concept. This leads to the conclusion that the representations are learned to identify not only the concepts that are frequent in the training set but also the concepts that are important for solving the target task.

In Section 4.4, we visualized how concepts are distributed across layers when the model is trained on the AG News dataset and the English-to-German Europarl dataset. Here, FIG4 shows the concept distributions for the other datasets noted in Table 1. In the classification tasks, we expect to find more concepts that are directly related to predicting the output label, as opposed to the translation tasks, where the representations may have to include information on most of the words for an accurate translation. While our goal is not to relate each concept to one of the labels, we find several concepts that are more predictive of a particular label than others. Consistent with Section 4.4, there are data-specific and task-specific concepts aligned in each layer;
i.e. {worst, 2 stars, awful} for Yelp Review, {film, ship, school} for DBpedia, and some key words for the translation datasets. Note that Yelp Review and DBpedia are classification datasets, where the model is required to predict the polarity (i.e. +1 or -1) or the ontology (i.e. Company, Educational Institution, Artist, Athlete, Officeholder, Mean of Transportation, Building, Natural Place, Village, Animal, Plant, Album, Film, Written Work) of a given sentence in a supervised setting.

FIG5 shows the number of occurrences of each concept at different layers. We count how many times each concept appears across all layers and sort the concepts in decreasing order. We select two concepts per layer in the translation model and seven concepts per layer in the classification model, according to their number of occurrences. For example, since there are 15 encoder layers in the ByteNet translation model, we select 30 concepts in total. Although task- and data-specific concepts emerge at different layers, there is no strong pattern between the concepts and their occurrences across multiple layers.

[FIG5 caption: Aligned concepts for each task and their number of occurrences over multiple layers. See Appendix H for more details.]
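For reference, the phrase-embedding and cluster-distance computations described in Appendix E can be sketched as follows; word_vec is a hypothetical lookup (e.g. a pretrained fastText table), and the function names are our own.

```python
import numpy as np

def phrase_embedding(phrase, word_vec):
    """Embed a phrase by splitting it into words, projecting each word with
    the pretrained embedding, and averaging the word vectors."""
    return np.mean([word_vec(w) for w in phrase.split()], axis=0)

def mean_pairwise_distance(concepts, word_vec):
    """Average Euclidean distance between all pairs of concepts aligned to
    the same unit; smaller values indicate a tighter concept cluster."""
    vecs = [phrase_embedding(c, word_vec) for c in concepts]
    dists = [np.linalg.norm(vecs[i] - vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return float(np.mean(dists)) if dists else 0.0
```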
We show that individual units in CNN representations learned in NLP tasks are selectively responsive to natural language concepts.
Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlations over long time horizons. Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structures or pre-defined subgoals. We propose a novel HRL framework, TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge. We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach of regularizing the latent space by adding information-theoretic constraints. Specifically, we maximize the mutual information between the latent variables and the state changes. A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of long action sequences. The learned abstraction allows us to learn new tasks at a higher level more efficiently. We demonstrate a significant speedup in convergence over benchmark learning problems. These results demonstrate that learning temporal abstractions is an effective technique for increasing the convergence rate and sample efficiency of RL algorithms.

Reinforcement learning (RL) has been successfully applied to many different tasks. However, applying it to real-world tasks remains a challenging problem, mainly due to the large search space and sparse reward signals. In order to address this, many research efforts have been focused on hierarchical reinforcement learning (HRL), which decomposes an RL problem into sub-goals. By solving the sub-goals, low-level actions are composed into high-level temporal abstractions. In this way, the size of the search space is decreased exponentially. However, HRL often requires explicitly specified task structures or sub-goals. How to learn those task structures or temporal abstractions automatically is still an active area of study.

Many different strategies have been proposed for automatically discovering the task hierarchy or learning the temporal abstraction. Some early studies try to find sub-goals or critical states based on statistical methods. More recent work seeks to learn the temporal abstraction with deep learning. However, many of these methods still require a predefined hierarchical policy structure (e.g. the number of sub-policies) or need some degree of task-specific knowledge (e.g. hand-crafted reward functions).

We present a general HRL framework, TAIC (Temporal Abstraction with Information-theoretic Constraints), which allows an agent to learn the temporal abstraction from past experiences or expert demonstrations without task-specific knowledge. Built upon the ideas of the options framework and motor skills, we formulate the temporal abstraction problem as learning a latent representation of action sequences. In order to obtain good latent representations, we propose a novel approach to regularize the latent space using information-theoretic constraints. The learned abstract representations of action sequences (which we call options) allow us to do RL at a higher level and to easily transfer knowledge between different tasks. Our contributions are: 1) We formulate the temporal abstraction problem as learning a latent representation of action sequences. Motivated by works using Recurrent Variational AutoEncoders (RVAE) to model sequential data in natural language processing (NLP) and other areas, we employ an RVAE to perform temporal abstraction in RL.
2) We propose a regularization approach on the option space. It constrains the option to encode more information about its consequence (how the option changes the states). We present both theoretical derivations and practical solutions. 3) We show in the experiments that our learned temporal abstraction conveys meaningful information and benefits RL training. In addition, the proposed framework provides an efficient tool for transferring knowledge between tasks.

HRL is a long-standing research topic; HRL with temporal abstraction was proposed as early as the 1990s. One early system used recurrent NNs to generate sequences of sub-goals, an evaluator NN to predict the rewards of going from start to goal, and an RL machine that tries to use such sub-goal sequences to achieve final goals. The options framework is a popular formulation for considering the problem with a two-level hierarchy: sequences of primitive actions are converted into options, and the traditional Markov Decision Process (MDP) problem is extended into a semi-MDP with the use of options. Parr developed an approach called Hierarchies of Abstract Machines to calculate hierarchically structured MDP policies. Dietterich proposed the MAXQ method, which performs value function decomposition over a given task structure. These early works assume the task hierarchy is predefined by human experts. For the automatic task decomposition problem, many methods try to find sub-goals or critical states based on statistical methods; these methods cannot handle continuous control problems. More recent work seeks to learn the temporal abstraction with deep learning. One line of work developed a two-layer hierarchy of policies, including one meta-policy and several primitive policies. Policy sketches annotate tasks with sequences of sub-tasks, with the sub-tasks and upper-level tasks learned jointly. Hierarchical-DQN integrates hierarchical action-value functions with intrinsic rewards based on predefined sub-goals. The option-critic architecture combines the options framework with policy gradient. These methods either require a predefined hierarchical policy structure (e.g., the number of sub-policies) or need to specify sub-goals. We propose a framework that allows learning temporal abstraction without predefined sub-goals. The idea of learning latent representations for HRL has been proposed before, but not by learning from action sequences. The SeCTAR algorithm learns a latent representation from state sequences in trajectories using an RVAE. Macro-action methods share a similar idea of combining sequences of actions into options. However, the existing literature only combines primitive actions, while our work learns an abstraction, which we will show later is more beneficial for HRL training. Our work is related to prior uses of RVAEs (e.g., Fabius & van Amersfoort), since they also utilize an RVAE to encode sequential data. In this paper, we apply an RVAE to model the abstraction of action sequences.

We consider an MDP problem M = (S, A, P, R, γ), where S is the set of states, A is the action set, P : (S, A) → S is the transition model, R : (S, A) → R is the reward function, and γ is the discount factor. The options framework formulates an option as a tuple (I, π, β), in which I ⊆ S is the initiation set, π is the sub-policy, and β : S → [0, 1] is the termination condition. The options framework and its followers either manually design or learn a finite set of options.
The number of options becomes a hyperparameter, and the options in the set are usually independent of one another. We propose the TAIC framework, in which we model the option o as a continuous random variable that denotes the latent representation of a sequence of actions. Furthermore, following the options framework, we define (I(o), π(o), β(s, o)) as functions of the random variable o. I(o) is the initiation condition, which is assumed to be the entire state space S for simplification. We define the function π(o) as the sub-policy that maps the latent variable o to a sequence of actions. The termination condition β(s, o) controls the temporal length of the option, and will be detailed later. This modification of the option definition brings an important difference. In the original options framework, each option represents one sub-policy. Above those sub-policies, there is a meta-policy π_Ω that acts as a classifier, choosing one of the sub-policies at a longer time horizon. In contrast, the TAIC framework specifies the option o as a continuous random variable, which could take an infinite number of values. The sub-policy is defined as a function over the random variable, and our meta-policy π_Ω directly outputs the random variable o. This changes the framework from discrete options to continuous options, and allows us to put constraints on the option space so that options with similar consequences become closer in the option space. Given a set of past experiences Λ = {τ_0, τ_1, ..., τ_m}, where each τ_i ∈ Λ is one trajectory {s_0, a_0, r_0, s_1, a_1, r_1, ..., s_k, a_k, r_k} coming from the agent's past experience or expert demonstrations, our problem is to learn a temporal abstraction o and the corresponding functions (π(o), β(s, o)), so that o can be applied to the RL task and improve training efficiency.

We consider the problem of learning latent representations o from action sequences {a_0...k0, a_0...k1, ...}. To model sequential data with variable length, we use a recurrent autoencoder (AE), which has been used intensively in natural language processing (NLP) and other sequential modeling tasks. Specifically, we deploy a recurrent variational auto-encoder (RVAE) (Fabius & van Amersfoort), because it is empirically shown to be better at finding hidden features of the inputs. In general, we would like to calculate the posterior p(o|a_0...k), where the option o captures the intrinsic features of the action sequences. The RVAE solves this by approximating the true posterior with q(o|a_0...k) and then optimizing a lower bound on the log-likelihood (Fabius & van Amersfoort). The log-likelihood of an action sequence a_0...k can be written as

log p(a_0...k) = KL(q(o|a_0...k) || p(o|a_0...k)) + L(o),

where KL(q||p) is the Kullback-Leibler divergence, and L(o) is the evidence lower bound:

L(o) = E_{q(o|a_0...k)}[log p(a_0...k|o)] − KL(q(o|a_0...k) || p(o)).

In order to find a good approximation of p(o|a_0...k) with q(o|a_0...k), namely to minimize KL(q(o|a_0...k)||p(o|a_0...k)), we need to maximize the evidence lower bound L(o). The conditional distributions q(o|a_0...k) and p(a_0...k|o) are approximated by two networks, a recurrent encoder E and a recurrent decoder D. Following the common VAE formulation, we model q(o|a_0...k) as a Gaussian distribution, and the training objective is to minimize a reconstruction loss and a KL loss:

L_rvae = L_Recons + λ_kl · L_KL.

Specifically, the encoder E takes in an action sequence a_0...k and outputs the option represented by two vectors o_μ and o_σ, namely the mean and standard deviation of the Gaussian distribution. L_KL is the KL divergence between this output and the prior Gaussian distribution N(0, I).
The decoder D takes in a sample of the option o and outputs â_0...k. L_Recons is defined as the mean-square error between a_0...k and â_0...k. The reconstruction loss in the RVAE setting means that the action sequences are encoded with respect to the L2 distance in action space. Figure 1 shows a simple example in a 2D navigation task: two action sequences a_0...3 with a large L2 distance in action space can have the same consequence (both move from state s_0 to s_1). In favor of the upper-level task solver, they should be encoded closely together in the option space. Imagine a navigation task in real life: in order to reach the door you could follow different paths (avoiding some dynamic obstacles), yet those different sequences all have the same encoding in your brain, "go reach the door". In contrast to precisely reconstructing the action sequences, our goal is to extract a latent variable capturing the information that could benefit RL training. Intuitively speaking, the option should encode the consequence of an action sequence, namely how the sequence changes the state. Formally, we maximize the mutual information between the option and state changes. We decompose this into two terms. The first term maximizes the mutual information between the option and the two states before and after a sequence of actions:

max I(o; s, s').    (4)

On the other hand, the option should encode the state change and be decoupled from the particular start and end states. For example, navigating to the door from this room should be represented similarly to navigating to the door in another room. This is formulated as minimizing the mutual information between the option and the individual start and end states:

min I(o; s) + I(o; s').    (5)

Following the definition of mutual information and the chain rule of conditional entropy, Equation 4 is transformed into minimizing the summation of two conditional entropies:

min H(s|o) + H(s'|s, o).    (6)

Similarly, Equation 5 is equivalent to maximizing the conditional entropies of s and s' given o:

max H(s|o) + H(s'|o).    (7)

Notice that Equations 6 and 7 conflict with each other, which is also very intuitive: forcing the option to be irrelevant to the start and end states makes it harder to encode the information of state changes. This is a trade-off depending on how much we want the code to be state-independent. Combining Equations 6 and 7, we obtain our final constraints:

min H(s'|s, o),    (8)
max H(s|o) + H(s'|o).    (9)

(Figure 2: System architecture. E, D, F, P and RL are implemented with neural networks. The gradients of the three losses L_rvae, L_adv and L_pred are back-propagated through the encoder E and regularize the option. Note that the gradient of L_adv changes sign before back-propagating to E.)

These two constraints are also approximated with neural networks. For Equation 8, we utilize a predictor network P : (s, o) → s', which predicts the end state s' given the start state s and the option o. To optimize the constraint, we minimize the predictive loss L_pred and backpropagate the gradients into the encoder E. In order to minimize the log-likelihood in the constraint of Equation 9, we borrow techniques from adversarial training. We utilize another network F : o → (s, s'), which tries to recover the start state and the end state given the option code. The encoder E and the state estimator F are trained competitively with each other: F acts like a discriminator, trained to minimize the loss of estimating the states, L_adv, while E is trained with the opposite gradients that try to maximize L_adv. Thus, E is pushed to encode the option in a way that prevents F from recovering the start and end states.
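A minimal sketch of the two constraint losses follows, assuming the three-layer MLPs mentioned later in the experiment section. The hidden width and the use of a gradient-reversal function to realize the sign flip on L_adv are our assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so E is trained to maximize the loss that F minimizes."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

def mlp(in_dim, out_dim, hidden=64):
    # three-layer MLP, as reported for F and P in the experiments
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

state_dim, option_dim = 4, 6
P = mlp(state_dim + option_dim, state_dim)   # predictor, optimizes Eq. 8
F = mlp(option_dim, 2 * state_dim)           # adversary, realizes Eq. 9

def constraint_losses(s, s_next, o):
    """L_pred pushes o to predict the state change; L_adv, via reversed
    gradients into the encoder, pushes o away from encoding s and s'."""
    l_pred = ((P(torch.cat([s, o], dim=-1)) - s_next) ** 2).mean()
    rec = F(GradReverse.apply(o))
    l_adv = ((rec - torch.cat([s, s_next], dim=-1)) ** 2).mean()
    return l_pred, l_adv
```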
In this way, the encoder E is regularized by these extra gradients. In practice, all of these networks are trained jointly on the experience set Λ = {τ_0, τ_1, ..., τ_m}. As Figure 2 shows, we have four networks (E, D, F and P) collaborating and competing (note that RL is not updated in the option learning process). The learned latent representation o is a temporal abstraction that captures the execution consequence of the action sequences. After learning the option, we can train the agent at a higher level: the HRL policy outputs an option, which is then decoded into a sequence of actions by the decoder D. In order to apply the learned option to HRL training, we need to consider the termination condition β(s, o). We implement and compare three different methods: Fix-length, Term-output and Term-predict. In the simplest setting, we learn the option from action sequences with a fixed length N. The decoder D terminates the output after N actions, and then the HRL policy outputs another option. This termination condition (referred to as Fix-len later) depends on neither the option nor the state. This straightforward method is easy to implement, and we will show in the experiment section that for most tasks even this naive implementation benefits HRL training. In order to learn option codes for arbitrary sequence lengths, we add another output to the decoder D: at each time step t, D outputs an action a_t and a termination signal c_t, which determines whether this is the last action in the sequence (Figure 3(a)). The termination output is a 2-class classifier, trained with a supervised cross-entropy loss. Note that this termination condition (referred to as Term-output) depends only on the option. Allowing the RVAE to encode sequences of various lengths provides the HRL with more choices. For example, in states that are stable and safe, the HRL can choose longer sequences, while in unstable states, it can operate cautiously with shorter sequences. However, this termination condition does not depend on the state, so it cannot respond to sudden changes in the state. In the third setting, we utilize the predictor network P to decide the length of action sequences (Figure 3(b)). The intuition is that the predictor network P also acts as a world model of the environment dynamics. When things are within expectation, the decoder D can go ahead and output a longer sequence; when the agent encounters unfamiliar states, D should become more cautious and output a shorter sequence. This termination condition (referred to as Term-predict) depends on both the option and the state. During training, we set a threshold δ on the prediction loss L_pred: when L_pred is bigger than δ, the decoder D terminates the output, and the HRL policy outputs a new option for subsequent actions. The training procedure is detailed in Algorithm 1. An option learned in this way is supposed to be more robust to state changes; however, this method relies on a good predictor, and the length of the sequence depends on the hyperparameter δ. Given the option, we employ the semi-MDP framework to train an HRL agent at a higher level. Because the option lives in a continuous space, we use a policy gradient algorithm and derive it over options. Note that although our option is continuous, our temporal abstraction could be applied to discrete problems, as long as the RVAE outputs discrete actions.
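Before moving to the policy-gradient derivation, the sketch below illustrates the Term-predict execution loop described above. Only the δ-threshold logic comes from the text; the interfaces of the decoder step, the predictor, and the gym-style environment, as well as the safety cap on length, are our assumptions.

```python
def execute_option(env, s, o, decoder_step, predict, delta, gamma=0.99, max_len=10):
    """Run option o until the predictor's error exceeds delta (Term-predict)
    or max_len actions have been taken. Returns the end state, the discounted
    reward collected while executing o, and the number of steps taken."""
    s_start, h, ret = s, None, 0.0
    for k in range(max_len):
        a, h = decoder_step(o, h)        # one step of the RVAE decoder D
        s, r, done, _ = env.step(a)      # gym-style environment interface
        ret += (gamma ** k) * r
        # terminate when reality deviates from where the predictor says o leads
        if ((predict(s_start, o) - s) ** 2).mean() > delta or done:
            break
    return s, ret, k + 1
```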
The gradient over the parameters takes the form

∇_θ J = E[ ∇_θ log π(o_t|s_t; θ) · A(s_t, o_t) ],

where π(o|s; θ) is the high-level policy planning in the state-option space. The advantage function A(s_t, o_t) is designed as the accumulated reward minus a baseline function:

A(s_t, o_t) = R(s_t, o_t) − V(s_t),

where R(s_t, o_t) is the cumulative discounted reward from time t into the future, and the baseline function V(s) is updated with the TD algorithm (assuming the upcoming options are of lengths t_1, t_2, ...):

V(s_t) ← V(s_t) + α [ r_{t→t+t_1} + γ^{t_1} V(s_{t+t_1}) − V(s_t) ],

where r_{t→t+t_1} is the discounted cumulative reward collected while executing o_t between t and t + t_1:

r_{t→t+t_1} = Σ_{k=0}^{t_1−1} γ^k r_{t+k}.

In the experiments, we employ the PPO algorithm. In general, the policy gradient over options and the policy gradient over actions are similar, and most existing RL algorithms such as TRPO, TNPG and SAC are applicable.

We first consider a 2D navigation task as a proof of concept. Then we apply our temporal abstraction framework to robotic control tasks in MuJoCo. Finally, we move to more challenging tasks, which are also used by Haarnoja et al. (2018a) and are hard to solve with non-hierarchical methods. The experiments in this section are generally performed in three steps: 1) collecting experiences using a flat PPO agent; 2) learning the option with the TAIC algorithm; 3) training HRL based on the option. We implement a one-layer LSTM with 64 hidden units for both E and D, and three-layer MLPs for the RL, F and P networks. We use learning rate 0.01 for E, D and P, 0.0003 for the RL, and 0.001 for F. We also balance the multiple losses with different weights: λ_kl is used for L_KL in the RVAE, and λ_adv is used for L_adv of F. We first test our framework on a 2D navigation task, a toy example that allows us to easily visualize the option learned from experience. As Figure 4(a) shows, our environment is a 10m-by-10m room with obstacles. Both the state and the action are continuous, represented by (x, y) and (v_x, v_y), respectively. The goal is to navigate from the start location (orange circle) to the goal location (green circle). The reward for reaching the goal is 100, and -1 otherwise. We collect the experience using a flat PPO agent running on multiple tasks with random start and goal locations, and then train an RVAE network using the collected action sequences. Figure 5(a) shows the interpolation between two options. We first randomly sample two action sequences from the testing set (shown as red dashed lines) and encode them into two options using the network E. We linearly interpolate between these two options and then decode the resulting options into a set of action sequences using the network D. The first thing we notice is that the RVAE nicely captures the direction and curvature of the sequences and usually outputs a smoother version (solid lines) of the original sequence (red dashed lines). Further, the interpolations between the two sequences smoothly transfer from one to the other, indicating that we have a smooth option space. In Figure 5(b), we visualize how each dimension in the option space encodes different information. The red dashed line shows an action sequence sampled from the testing set. This action sequence is encoded into an option, which is a vector of 6 real values. We modify one dimension at a time and decode the resulting options into the action sequences shown as solid lines. We notice that the first two dimensions control the ending point of the action sequence, dimensions 3 and 4 control the curvature, and dimensions 5 and 6 control more subtle properties. Note that not all training trials yield the same results.
Sometimes the properties are mixed, but Figure 5 illustrates one of the most common results. Next, we evaluate the ability to recover from a sudden change in the environment; similar experiments have also been presented in prior work. Initially the environment is set up as in Figure 4(a), left. After 1000 episodes, an obstacle blocks the shortest path and creates a trap (Figure 4(a), right). We reset the standard deviation output of the PPO algorithm to encourage exploration. The flat RL agent finds it hard to recover from this situation, since the exploration at each time step still gives a similar trajectory, which leads to the trap, whereas exploration in the option space results in more diverse trajectories. We average 10 repeated experiments for both settings. As can be seen in Figure 5(c), TAIC recovers from the environment change in all 10 trials, while the flat RL agent finds the goal in only 20% of the trials. In the robotic control domain, we utilize the MuJoCo tasks in the OpenAI Gym environment. The goal is to learn a control policy π : S → A so that the robot runs forward as fast as possible (Figure 4(b)). Both the state and the action are vectors of continuous values. We present two ways of evaluating the TAIC framework: first, we qualitatively evaluate the learned option by visualizing the option space; second, we apply the learned option to HRL training and compare the performance. Evaluating the high-dimensional option space is difficult. We observe that the losses in the option learning process (L_rvae, L_pred and L_adv) could not reveal the underlying structure of the option space, so we propose a visualization method that provides a qualitative evaluation of that space. We now consider how to visualize the correlation between the option and the state change. We randomly sample options together with their corresponding state changes. The options and the end states are first converted to 2D vectors using t-SNE, respectively. The 2D vectors of the end states give the locations of the points in the figure, while the color of the points represents the distribution of the options. That is to say, if the option distribution is more coupled with the state change, the points will be better arranged with respect to color. As we can see from Figure 6, with the information-theoretic constraints the options and state changes become more correlated. We evaluate the option on four benchmark tasks in simulation. Figure 7 compares the HRL performance under the different termination conditions. We observe that HRL using TAIC outperforms flat PPO in 3 out of 4 tasks. In the HalfCheetah and Ant tasks, the curve of TAIC starts higher (because a random policy over the option space already obtains positive rewards) and converges faster. One thing to notice is that the HRL policy is updated at a much lower frequency than flat RL, because the options are executed over multiple time steps (a typical average length is 5 time steps). In the Walker2d task, our method does not outperform flat RL. Our hypothesis is that the agent is more unstable in this task. In our current setup, the decoder D does not depend on the state s; it acts like an open-loop controller, which is more vulnerable to unstable and fast-changing states. However, our option model can be extended to closed-loop sub-policies π(a|o, s) that also depend on the state; we will explore this idea in future studies. The different termination conditions have pros and cons. The Fix-len termination condition is simpler and more stable in most cases.
The Term-output and Term-predict termination conditions perform better in some cases, but they require more tuning of parameters such as the prediction threshold, which controls the average length of the sequences. The experiences used for training the option model are a critical factor that decides what the agent can learn (much as humans can be biased by their previous experiences). We tested two ways of selecting experiences: selecting them at random, and selecting experiences with higher rewards. In our experiments, the agent learns faster and performs better in the latter setting. The TAIC framework provides an efficient way to transfer past experiences to unseen and more sophisticated tasks. We apply the options learned from the MuJoCo Ant-v1 running task above to the novel tasks shown in Figure 4(c). In the Ant turn task, the agent is rewarded for making a left turn as fast as possible. The goal of the other three tasks (Ant maze, Ant push and Ant fall) is to move from the start location (orange circle) to the goal location (green circle). The reward is the negative distance between the agent and the goal. The last two tasks require the agent to manipulate the red box in the right way in order to avoid being stuck in sub-optimal trap states. As shown in Figure 8, TAIC (green) achieves higher rewards and quickly converges to the goal state in the first three tasks. In contrast, the flat RL method (blue) is not able to solve these tasks within 3 million interactions without any task-transfer treatment. (Figure 8: We compare the proposed TAIC framework with two baseline methods on more challenging tasks. The TAIC framework efficiently transfers the options learned in the simpler Ant-v1 task to these novel and more complex tasks.) We also apply transfer learning to the flat RL agent (orange) by using the Ant-v1 policy as an initialization. As expected, transfer learning does not bring comparable benefit to the flat RL, indicating that sharing weights between different tasks is less efficient. The last two experiments also show that the transferred policy (orange) may even degrade the performance. This does not happen to our TAIC framework, since it transfers high-level abstract knowledge in the form of options. This paper presented a general HRL framework, TAIC, for learning temporal abstraction from action sequences. We formulate the temporal abstraction problem as learning latent representations (called options) over action sequences. In order to learn a better representation, we derive theoretically how to regularize the option space and give a practical solution for adding constraints to it. In the experiments, we reveal the underlying structure of the option space by visualizing the correlation between options and state changes, and we show qualitatively and quantitatively that our options encode meaningful information and benefit RL training. Furthermore, the TAIC framework provides an efficient tool for transferring the knowledge learned from one task to another. Our framework can be applied together with all kinds of RL optimization algorithms, and to both discrete and continuous problems. This work opens many new directions for future study. As we currently learn the RL task and the option separately, the option cannot improve as the policy improves; in theory, it is entirely feasible to jointly optimize the two parts, or at least train them alternately. As mentioned above, the current sub-policy acts like an open-loop controller.
Learning a closed-loop sub-policy beyond the RNN decoder will therefore be one of the focus areas of our future studies. We would also like to apply the TAIC framework to discrete problems and with other RL algorithms such as DQN and SAC. This could bring more insights to further improve the framework.
We propose a novel HRL framework, in which we formulate the temporal abstraction problem as learning a latent representation of action sequences.
1,413
scitldr
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.

Recurrent neural networks (BID33; BID35) are a powerful class of neural networks that are naturally suited to modeling sequential data. For example, in recent years alone, RNNs have achieved state-of-the-art performance on tasks as diverse as machine translation, speech recognition (BID29), generative image modeling (BID30), and surgical activity recognition (BID8). These successes, and the vast majority of other RNN successes, rely on a mechanism introduced by long short-term memory (BID20; BID14), which was designed to alleviate the so-called vanishing gradient problem (BID3). The problem is that gradient contributions from events at time t − τ to a loss at time t diminish exponentially fast with τ, thus making it extremely difficult to learn from distant events (see FIG0). LSTM alleviates the problem using nearly-additive connections between adjacent states, which help push the base of the exponential decay toward 1. However, LSTM in no way solves the problem, and in many cases still fails to learn long-term dependencies (see, e.g., BID0). NARX RNNs (BID27) offer an orthogonal mechanism for dealing with the vanishing gradient problem, by allowing direct connections, or delays, from the distant past. However, NARX RNNs have received much less attention in the literature than LSTM, which we believe is for two reasons. First, as previously introduced, NARX RNNs have only a small effect on vanishing gradients, as they reduce the exponent of the decay by only a factor of n_d, the number of delays. Second, as previously introduced, NARX RNNs are extremely inefficient, as both parameter counts and computation counts grow by the same factor n_d. In this paper, we introduce MIxed hiSTory RNNs (MIST RNNs), a new NARX RNN architecture which 1) exhibits superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) improves performance substantially over LSTM on tasks requiring very long-term dependencies; and 3) remains efficient in parameters and computation, requiring even fewer than LSTM for a fixed number of hidden units. Importantly, MIST RNNs reduce the decay's exponent by a factor of 2^{n_d − 1}; see FIG1.
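The following back-of-the-envelope calculation (ours, with illustrative values of λ and τ) shows why shortening the gradient path matters: the bound λ^{n_e} derived later decays with the number of edges n_e on the shortest path between t − τ and t.

```python
from math import ceil, log2

lam, tau, n_d = 0.9, 100, 8   # illustrative decay factor, lag, and delay count

simple_rnn = lam ** tau                   # shortest path: tau edges
simple_narx = lam ** ceil(tau / n_d)      # contiguous delays: ~tau/n_d edges
# MIST base-2 delays: ~log2(tau) edges up to tau = 2**(n_d-1),
# then tau / 2**(n_d-1) edges beyond that
n_e = ceil(log2(tau)) if tau <= 2 ** (n_d - 1) else ceil(tau / 2 ** (n_d - 1))
mist = lam ** n_e

print(f"simple RNN: {simple_rnn:.2e}, simple NARX: {simple_narx:.3f}, MIST: {mist:.3f}")
# -> roughly 2.66e-05 vs 0.254 vs 0.478: the surviving gradient signal
#    differs by orders of magnitude even for a modest lag of 100 steps
```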
2 BACKGROUND AND RELATED WORK

Recurrent neural networks, as commonly described in the literature, take on the general form

h_t = f(h_{t−1}, x_t; θ),    (1)

which computes a new state h_t in terms of the previous state h_{t−1}, the current input x_t, and some parameters θ (which are shared over time). One of the earliest variants, now known to be especially vulnerable to the vanishing gradient problem, is that of simple RNNs, described by

h_t = tanh(W_h h_{t−1} + W_x x_t + b).    (2)

In this equation and elsewhere in this paper, all weight matrices W and biases b collectively form the parameters θ to be learned, and tanh is always written explicitly. Long short-term memory (BID20; BID14), the most widely-used RNN architecture to date, was specifically introduced to address the vanishing gradient problem. The term LSTM is often overloaded; we refer to the variant with forget gates and without peephole connections, which performs similarly to more complex variants (BID16):

f_t = σ(W_hf h_{t−1} + W_xf x_t + b_f)    (3)
i_t = σ(W_hi h_{t−1} + W_xi x_t + b_i)    (4)
o_t = σ(W_ho h_{t−1} + W_xo x_t + b_o)    (5)
c̃_t = tanh(W_hc h_{t−1} + W_xc x_t + b_c)    (6)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t    (7)
h_t = o_t ⊙ tanh(c_t)    (8)

Here σ(·) denotes the element-wise sigmoid function and ⊙ denotes element-wise multiplication. f_t, i_t, and o_t are referred to as the forget, input, and output gates, which can be interpreted as controlling how much we reset, write to, and read from the memory cell c_t. LSTM has better gradient properties than simple RNNs (see FIG1) because of the mechanism in Equation 7, which introduces a path between c_{t−1} and c_t that is modulated only by the forget gate. We also remark that gated recurrent units (BID5) alleviate the vanishing gradient problem using this exact same idea. NARX RNNs (BID27) also address the vanishing gradient problem, but using a mechanism that is orthogonal to (and possibly complementary to) that of LSTM. This is done by allowing delays, or direct connections from the past. NARX RNNs in their general form are described by

h_t = f(h_{t−1}, h_{t−2}, ..., h_{t−n_d}, x_t; θ),    (9)

but the literature typically assumes the specific variant explored in BID27,

h_t = tanh(Σ_{k=1}^{n_d} W_k h_{t−k} + W_x x_t + b),    (10)

which we refer to as simple NARX RNNs. Note that simple NARX RNNs require approximately n_d times as much computation and n_d times as many parameters as their simple-RNN counterpart (with n_d = 1), which greatly hinders their applicability in practice. To our knowledge, this drawback holds for all NARX RNN variants before MIST RNNs. For example, higher-order recurrent neural networks (HORNNs) are defined precisely as simple NARX RNNs, and every variant in that work suffers from this exact same problem. And in BID37, a simple NARX RNN architecture is defined that is limited to having precisely two delays with non-zero weights; this way, at the expense of having fewer, longer paths to the past, parameter and computation counts are only doubled. The previous work that is most similar to ours is that of Clockwork RNNs (BID22), which split weights and hidden units into partitions, each with a distinct period. When it is not a partition's time to tick, its hidden units are passed through unchanged, and so Clockwork RNNs in some ways mimic NARX RNNs. However, Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. NARX RNNs (and in particular MIST RNNs) suffer from neither of these drawbacks.
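A minimal PyTorch sketch of the simple NARX cell in Equation 10 follows, mainly to make the n_d-fold parameter growth visible; the layer organization is our own.

```python
import torch
import torch.nn as nn

class SimpleNARXCell(nn.Module):
    """h_t = tanh(sum_k W_k h_{t-k} + W_x x_t + b): one recurrent weight
    matrix per delay, so parameters and compute grow linearly in n_d."""
    def __init__(self, input_dim, hidden_dim, n_d):
        super().__init__()
        self.W_h = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim, bias=False)
                                 for _ in range(n_d))
        self.W_x = nn.Linear(input_dim, hidden_dim)   # carries the bias b

    def forward(self, x_t, past):
        # past: list of delayed states [h_{t-1}, h_{t-2}, ..., h_{t-n_d}]
        pre = self.W_x(x_t)
        for W, h in zip(self.W_h, past):
            pre = pre + W(h)
        return torch.tanh(pre)
```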
Many other approaches have also been proposed to capture long-term dependencies. Notable approaches include maintaining a generative model over inputs and learning to process only unexpected inputs, operating explicitly at multiple time scales (BID9), Hessian-free optimization (BID28), using associative or explicit memory (BID32; BID7; BID15; BID34), and initializing or restricting weight matrices to be orthogonal (BID0; BID18).

In BID3 and BID31, gradient decompositions and sufficient conditions for vanishing gradients are presented for simple RNNs, which contain one path between times t − τ and t. Here, we use the chain rule for ordered derivatives to connect gradient components to paths and edges, which in turn provides a simple extension of the results from BID3 and BID31 to general NARX RNNs. We remark that we rely on slightly overloaded notation for clarity, as otherwise the notation becomes cumbersome.

We begin by disambiguating notation, as the symbol ∂f/∂x is routinely overloaded in the literature. Consider the Jacobian of f(x, u(x)) with respect to x. We let df/dx denote the collection of full derivatives, and we let ∂f/∂x denote the collection of partial derivatives. This lets us write the ordinary chain rule as

df/dx = ∂f/∂x + (∂f/∂u)(du/dx).

Note that this notation is consistent with BID10, but is the exact opposite of the convention used in BID31. Consider an ordered system of n vectors v_1, v_2, ..., v_n, where each is a function of all previous ones:

v_i = f_i(v_1, v_2, ..., v_{i−1}), i = 1, ..., n.    (11)

The chain rule for ordered derivatives expresses the full derivatives dv_i/dv_j for any j < i in terms of the full derivatives that relate v_i to all previous v_k:

dv_i/dv_j = Σ_{j < k ≤ i} (dv_i/dv_k)(∂v_k/∂v_j), with dv_i/dv_i ≡ I.    (12)

Consider NARX RNNs in their general form (Equation 9), which we remark encompasses other RNNs such as LSTM as special cases. Also, for simplicity, consider the situation that is most often encountered in practice, where the loss at time t is defined in terms of the current state h_t and its own parameters θ_l (which are independent of θ):

l_t = f_l(h_t; θ_l).    (13)

(This is not necessary, but we proceed this way to make the connection with RNNs in practice evident. For example, f_l may be a linear transformation with parameters θ_l followed by a squared-error loss.) Then the Jacobian (or transposed gradient) with respect to θ can be written as

dl_t/dθ = (∂l_t/∂h_t)(dh_t/dθ),    (14)

because the additional term involving θ_l does not depend on θ. By applying the chain rule for ordered derivatives (Equations 11 and 12), we immediately obtain

dh_t/dθ = Σ_τ (dh_t/dh_{t−τ})(∂h_{t−τ}/∂θ),    (15)

because all of the partials ∂x_{t−τ}/∂θ are 0. Equations 14 and 15 extend Equations 3 and 4 of BID31 to general NARX RNNs, which encompass simple RNNs, LSTM, etc., as special cases. This decomposition breaks dh_t/dθ into its temporal components, making it clear that the spectral norm of dh_t/dh_{t−τ} plays a major role in how h_{t−τ} affects the final gradient. In particular, if the norm of dh_t/dh_{t−τ} is extremely small, then h_{t−τ} has only a negligible effect on the final gradient, which in turn makes it extremely difficult to learn from events that occurred at t − τ. Equations 14 and 15, along with the chain rule for ordered derivatives, let us connect gradient components to paths and edges, which is useful for a) gaining insights into various architectures and b) solidifying intuitions from backpropagation through time which suggest that short paths between t − τ and t facilitate gradient flow. Here we provide an overview of the main idea; please see the appendix for a full derivation. By applying the chain rule for ordered derivatives to expand dh_t/dh_{t−τ}, we obtain a sum over τ terms.
However, each term involves a partial derivative between h_{t−τ} and a later hidden state, and thus all of these terms are 0 with the exception of those states that share an edge with h_{t−τ}. Now, for each term, we can repeat this process. This then yields non-zero terms only for hidden states which can be connected to h_{t−τ} through two edges. We can then continue to apply the chain rule for ordered derivatives repeatedly, until only partial derivatives remain. Upon completion, we have a sum over gradient components, with each component corresponding to exactly one path from t − τ to t and being a product over its path's edges. The spectral norm corresponding to any particular path (t − τ → t' → t'' → ··· → t) can then be bounded as

‖ Π_{edges (t', t'') on the path} ∂h_{t''}/∂h_{t'} ‖ ≤ λ^{n_e},

where λ is the maximum spectral norm of any factor and n_e is the number of edges on the path. Terms with λ < 1 diminish exponentially fast, and when all λ < 1, shortest paths dominate. Viewing gradient components as paths, with each component being a product with one factor per edge along the path, gives us useful insight into various RNN architectures. When relating a loss at time t to events at time t − τ, simple RNNs and LSTM contain shortest paths of length τ, while simple NARX RNNs contain shortest paths of length τ/n_d, where n_d is the number of delays. One can envision many NARX RNN architectures with non-contiguous delays that reduce these shortest paths further. In this section we introduce one such architecture using base-2 exponential delays. In this case, for all τ ≤ 2^{n_d−1}, shortest paths exist with only log_2 τ edges; and for τ > 2^{n_d−1}, shortest paths exist with only τ/2^{n_d−1} edges (see FIG0). Finally, we must avoid the parameter and computation growth of simple NARX RNNs. We achieve this by sharing weights over delays, instead using an attention-like mechanism (BID2) over delays and a reset mechanism from gated recurrent units (BID5). The proposed architecture, which we call mixed history RNNs (MIST RNNs), is described by

h_t = tanh( W_x x_t + W_h ( r_t ⊙ Σ_{k=1}^{n_d} a_t^{(k)} h_{t−2^{k−1}} ) + b ),

where a_t = softmax(·) is a learned vector of n_d convex-combination coefficients and r_t = σ(·) is a reset gate, both computed from the current input and the recent state. At each time step, a convex combination of the delayed states is formed according to a_t; units of this combination are reset according to r_t; and finally the typical linear layer and nonlinearity are applied.
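A sketch of the MIST update above follows. The text specifies the convex combination over base-2 delayed states, the reset gate, and the shared recurrent weights; how a_t and r_t are conditioned (here, on x_t and h_{t−1}) is our assumption.

```python
import torch
import torch.nn as nn

class MISTCell(nn.Module):
    """One recurrent matrix regardless of n_d: delayed states share weights,
    are mixed by attention coefficients a_t, then gated by reset gate r_t."""
    def __init__(self, input_dim, hidden_dim, n_d=8):
        super().__init__()
        self.delays = [2 ** k for k in range(n_d)]    # 1, 2, 4, ..., 2^{n_d-1}
        self.attn = nn.Linear(input_dim + hidden_dim, n_d)
        self.reset = nn.Linear(input_dim + hidden_dim, hidden_dim)
        self.W_x = nn.Linear(input_dim, hidden_dim)
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_t, delayed):
        # delayed: (n_d, B, H) tensor holding h_{t-1}, h_{t-2}, h_{t-4}, ...
        ctx = torch.cat([x_t, delayed[0]], dim=-1)
        a = torch.softmax(self.attn(ctx), dim=-1)     # convex coefficients a_t
        mix = torch.einsum('bn,nbh->bh', a, delayed)  # sum_k a_k h_{t-2^{k-1}}
        r = torch.sigmoid(self.reset(ctx))            # GRU-style reset gate
        return torch.tanh(self.W_x(x_t) + self.W_h(r * mix))
```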
Here we compare MIST RNNs to simple RNNs, LSTM, and Clockwork RNNs. We begin with the sequential permuted MNIST task and the copy problem, synthetic tasks that were introduced to explicitly test RNNs for their ability to learn long-term dependencies (BID20; BID28; BID24; BID0; BID18; BID7). Next we move on to 3 tasks for which it is plausible that very long-term dependencies play a role: recognizing surgical maneuvers from robot kinematics, recognizing phonemes from speech, and classifying activities from smartphone motion data. We note that for all architectures involved, many variations can be applied (variational dropout, layer normalization, zoneout, etc.). We keep experiments manageable by comparing architectures without such variations.

The sequential MNIST task (BID24) consists of classifying 28x28 MNIST images (BID25) as one of 10 digits, by scanning pixel by pixel (left to right, top to bottom) and emitting a label upon completion. Sequential permuted MNIST is a challenging variant where a random permutation of pixels is chosen and applied to all images before classification. LSTM with 100 hidden units is used as a baseline, with hidden-unit counts for the other architectures chosen to match the number of parameters. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Additional experimental details can be found in the appendix. Test error rates are shown in TAB0. Here, MIST RNNs outperform simple RNNs, LSTM, and Clockwork RNNs by a large margin. We remark that our LSTM error rates are consistent with the best previously reported values, such as the error rates of 9.8% in BID6 and 12% in BID0, which also use 100 hidden units. One may also wonder if the difference in performance is due to hidden-unit counts. To test this, we also increased the LSTM hidden-unit count to 139 (to match MIST RNNs), and continued to increase the capacity of each model further. MIST RNNs significantly outperform LSTM in all cases. We also used this task to visualize gradient magnitudes as a function of τ (the distance from the loss, which occurs at time t = 784). Gradient norms for all methods were averaged over a batch of 100 random examples early in training; see FIG1. Here we can see that simple RNNs and LSTM capture essentially no learning signal from steps that are far from the loss. To validate this claim further, we repeated the 512-unit LSTM and MIST RNN experiments, but using only the last 200 permuted pixels (rather than all 784). LSTM performance remains the same (7.4% error, within 1 standard deviation) whereas MIST RNN performance drops by 15 standard deviations (6.0% error).

The copy problem is a synthetic task that explicitly challenges a network to store and reproduce information from the past. Our setup follows BID0, which is in turn based on BID20. An input sequence begins with L relevant symbols to be copied, is followed by a delay of D − 1 special blank symbols and 1 special go symbol, and ends with L additional blank symbols. The corresponding target sequence begins with L + D blank symbols and ends with a copy of the relevant symbols from the inputs (in the same order); a data-generation sketch follows below. We run experiments with copy delays of D = 50, 100, 200, and 400. LSTM with 100 hidden units is used as a baseline, with hidden-unit counts for the other architectures chosen to match the number of parameters. Additional experimental details can be found in the appendix. Results are shown in FIG2, which shows validation curves of the top 5 randomized trials out of 50, with random learning rates and initializations. With a short copy delay of D = 50, we can see that all methods other than Clockwork RNNs can solve the task in a reasonable amount of time. However, as the copy delay D is increased, we can see that simple RNNs and LSTM become unable to learn a solution, whereas MIST RNNs are relatively unaffected. We also note that our LSTM results are consistent with those in BID0 and BID18. Note that Clockwork RNNs are expected to fail for large delays (for example, the second symbol can only be seen by the highest-frequency partition, so learning to copy this symbol will fail for precisely the same reason that simple RNNs fail). However, here they also fail for short delays, which is surprising because the high-speed partition resembles a simple RNN. We hypothesized that this failure is due to hidden-unit / parameter counts: here, the high-frequency partition is allocated only 256 / 8 = 32 hidden units. To test this hypothesis, we reran the Clockwork RNN experiments with 1024 hidden units, so that 128 are allocated to the high-frequency partition. Indeed, under this configuration (with 10x as many parameters), Clockwork RNNs do solve the task for a delay of D = 50 and fail to solve the task for all higher delays, thus behaving like simple RNNs.
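The copy-problem setup above is fully specified, so it can be reproduced directly; only the integer encoding of the two special symbols below is our choice.

```python
import numpy as np

def make_copy_example(D, n_symbols=10):
    """Input: L relevant symbols, D-1 blanks, 1 go symbol, L blanks.
    Target: L+D blanks, then the L relevant symbols. Here L = D // 10."""
    L = D // 10
    blank, go = n_symbols, n_symbols + 1          # two special symbols
    relevant = np.random.randint(0, n_symbols, size=L)
    x = np.concatenate([relevant, np.full(D - 1, blank), [go], np.full(L, blank)])
    y = np.concatenate([np.full(L + D, blank), relevant])
    return x, y

x, y = make_copy_example(D=50)
assert len(x) == len(y) == 50 + 2 * (50 // 10)   # both sequences have length 2L + D
```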
Here we consider the task of online surgical maneuver recognition using the MISTIC-SL dataset (BID12; BID8). Maneuvers are fairly long, high-level activities; examples include suture throw and knot tying. The dataset was collected using a da Vinci, and the goal is to map robot kinematics over time (e.g., x, y, z) to gestures over time (which are densely labeled as 1 of 4 maneuvers on a per-frame basis). We follow BID8, which achieves state-of-the-art performance on this task, as closely as possible, using the same kinematic inputs, test setup, and hyperparameters; details can be found in the original work or in the appendix. The primary difference is that we replace their LSTM layer with our layers. Results are shown in TAB1: here MIST RNNs match LSTM performance (with half the number of parameters).

Here we consider the task of online framewise phoneme recognition using the TIMIT corpus (BID13). Each frame is originally labeled as 1 of 61 phonemes. We follow common practice and collapse these into a smaller set of 39 phonemes (BID26), and we include glottal stops to yield 40 classes in total. We follow BID16 for data preprocessing and for training, validation, and test splits. LSTM with 100 hidden units is used as a baseline, with hidden-unit counts for the other architectures chosen to match the number of parameters. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Other experimental details can be found in the appendix. TAB2 shows that LSTM and MIST RNNs perform nearly identically, and both outperform simple RNNs and Clockwork RNNs.

Here we consider the task of sequence classification from smartphones using the MobiAct (v2.0) dataset (BID4). The goal is to classify each sequence as jogging, running, sitting down, etc., using smartphone motion data over time. Approximately 3,200 sequences were collected from 67 different subjects. We use the first 47 subjects for training, the next 10 for validation, and the final 10 for testing. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Other experimental details can be found in the appendix. Results are shown in TAB3. Here, MIST RNNs outperform all other methods, including LSTM and LSTM+, a variant with the same number of hidden units and twice as many parameters.

In this work we analyzed NARX RNNs and introduced a variant which we call MIST RNNs, which 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) improve performance substantially over LSTM on tasks requiring very long-term dependencies; and 3) require even fewer parameters and computation than LSTM. One obvious direction for future work is the exploration of other NARX RNN architectures with non-contiguous delays. In addition, many recent techniques that have focused on LSTM are immediately transferable to NARX RNNs, such as variational dropout (BID11), layer normalization (BID1), and zoneout (BID23), and it will be interesting to see if such enhancements can improve MIST RNN performance further. Removed for anonymity.
Consider first simple RNNs: when expanding dh_t/dh_{t−τ} with the chain rule for ordered derivatives, all partials ∂h_{t'}/∂h_{t−τ} are 0 except for the one satisfying t' = t − τ + 1. This yields

dh_t/dh_{t−τ} = (dh_t/dh_{t−τ+1})(∂h_{t−τ+1}/∂h_{t−τ}).

Now, by applying Equation 12 again to dh_t/dh_{t−τ+1}, and then to dh_t/dh_{t−τ+2}, and so on, we trace out a path from t − τ to t, as shown in FIG0, finally resulting in the single term

Π_{k=1}^{τ} ∂h_{t−τ+k}/∂h_{t−τ+k−1},

which is associated with the only path from t − τ to t, with one factor for each edge that is encountered along the path. Next we consider simple NARX RNNs, again by expanding Equation 15. From Equation 10, we can see that up to n_d partials are now nonzero, and that any particular partial ∂h_{t'}/∂h_{t−τ} is nonzero if and only if t' > t − τ and t' and t − τ share an edge. Collecting these t' as the set V_{t−τ} = {t' : t' > t − τ and (t − τ, t') ∈ E}, we can write

dh_t/dh_{t−τ} = Σ_{t' ∈ V_{t−τ}} (dh_t/dh_{t'})(∂h_{t'}/∂h_{t−τ}).

We can then apply this exact same process to each dh_t/dh_{t'}; by defining V_{t'} = {t'' : t'' > t' and (t', t'') ∈ E} for all t', we can write

dh_t/dh_{t'} = Σ_{t'' ∈ V_{t'}} (dh_t/dh_{t''})(∂h_{t''}/∂h_{t'}).

By continuing this process until only partials remain, we obtain a summation over all possible paths from t − τ to t. Each term in the sum is a product over factors, one per edge:

dh_t/dh_{t−τ} = Σ_{paths p : t−τ → t} Π_{(t', t'') ∈ p} ∂h_{t''}/∂h_{t'}.

The analysis is nearly identical for general NARX RNNs, with the only difference being the specific sets of edges that are considered.

8 APPENDIX: EXPERIMENTAL DETAILS

8.1 GENERAL EXPERIMENTAL SETUP

Everything in this section holds for all experiments except surgical maneuver recognition, as in that case we mimicked BID8 as closely as possible, as described above. All weight matrices are initialized using a normal distribution with a mean of 0 and a standard deviation of 1/√n_h, where n_h is the number of hidden units. All initial hidden states (for t < 1) are initialized to 0. For optimization, gradients are computed using full backpropagation through time, and we use stochastic gradient descent with a momentum of 0.9, with gradient clipping as described by BID31 at 1, and with a minibatch size of 100. Biases are generally initialized to 0, but we follow best practice for LSTM by initializing the forget-gate bias to 1 (Gers et al.; BID21). For Clockwork RNNs, 8 exponential periods are used, as in the original paper. For MIST RNNs, 8 delays are used. We avoid manual learning-rate tuning in its entirety. Instead we run 50 trials for each experimental configuration. In each trial, the learning rate is drawn uniformly at random in log space between 10^{−4} and 10^{1}, and initial weight matrices are also redrawn at random. We report results over the top 10% of trials according to validation-set error. (An alternative option is to report over all trials. However, because the majority of trials yields bad performance for all methods, this simply blurs comparisons. See for example FIG2 of BID16, which compares these two options.)

Data preprocessing is kept minimal, with each input image individually shifted and scaled to have mean 0 and variance 1. We split the official training set into two parts, the first 58,000 images used for training and the last 2,000 used for validation. Our test set is the same as the official test set, consisting of 10,000 images. Training is carried out by minimizing cross-entropy loss.

In our copy-problem experiments, the L relevant symbols are drawn at random (with replacement) from the set {0, 1, ..., 9}; D is always a multiple of 10; and L is chosen to be D/10. This way the simplest baseline of always predicting the blank symbol yields a constant error rate for all experiments. No input preprocessing of any kind is performed. In each case, we generate 100,000 examples for training and 1,000 examples for validation. Training is carried out by minimizing cross-entropy loss.
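A sketch of the randomized-trial protocol from the general setup above: log-uniform learning rates in [10^-4, 10^1] and N(0, 1/√n_h) weight initialization. The use of numpy and the function boundaries are our choices.

```python
import numpy as np

def sample_trial(n_h, rng=np.random):
    """One randomized trial: draw the learning rate uniformly in log space
    and build an init function with standard deviation 1/sqrt(n_h)."""
    lr = 10.0 ** rng.uniform(-4.0, 1.0)
    init = lambda shape: rng.normal(0.0, 1.0 / np.sqrt(n_h), size=shape)
    return lr, init

lr, init = sample_trial(n_h=139)   # e.g., the 139-unit MIST RNN configuration
W = init((139, 139))
```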
We use the same experimental setup as BID8, which currently holds state-of-the-art performance on these tasks. For kinematic inputs we use positions, velocities, and gripper angles for both hands. We also use their leave-one-user-out test setup, with 8 users in the case of JIGSAWS and 15 users in the case of MISTIC-SL. Finally, we use the same hyperparameters: 1 hidden layer of 1024 units; dropout with p = 0.5; 80 epochs of training with a learning rate of 1.0 for the first 40 epochs and halving the learning rate every 5 epochs for the rest of training. As mentioned in the main paper, the primary difference is that we replaced their LSTM layer with our simple RNN, LSTM, or MIST RNN layer. Training is carried out by minimizing cross-entropy loss.

We follow BID16 and extract 12 mel-frequency cepstral coefficients plus energy every 10ms using 25ms Hamming windows and a pre-emphasis coefficient of 0.97. However, we do not use derivatives, resulting in 13 inputs per frame. Each input sequence is individually shifted and scaled to have mean 0 and variance 1 over each dimension. We form our splits according to BID17, resulting in 3696 sequences for training, 400 sequences for validation, and 192 sequences for testing. Training is carried out by minimizing cross-entropy loss. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set). In BID4, emphasis was placed on hand-crafted features, and each subject was included during both training and testing (with no official test set defined). We instead operate on
We introduce MIST RNNs, which a) exhibit superior vanishing-gradient properties in comparison to LSTM; b) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies; and c) are much more efficient than previously-proposed NARX RNNs, with even fewer parameters and operations than LSTM.
1,414
scitldr
A well-trained model should classify objects with a unanimous score for every category. This requires the high-level semantic features to be alike among samples, despite a wide span in resolution, texture, deformation, etc. Previous works focus on re-designing the loss function or proposing new regularization constraints on the loss. In this paper, we address this problem via a new perspective. For each category, it is assumed that there are two sets in the feature space: one with more reliable information and the other with a less reliable source. We argue that the reliable set could guide the feature learning of the less reliable set during training, in the spirit of a student mimicking the teacher's behavior, thus pushing towards a more compact class centroid in the high-dimensional space. Such a scheme also benefits the reliable set, since samples become closer within the same category, implying that it is easier for the classifier to identify them. We refer to this mutual learning process as a feature intertwiner and embed the spirit into object detection. It is well-known that objects of low resolution are more difficult to detect due to the loss of detailed information during the network forward pass. We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set. Specifically, an intertwiner is achieved by minimizing the distribution divergence between the two sets. We design a historical buffer to represent all previous samples in the reliable set and utilize them to guide the feature learning of the less reliable set. The design of obtaining an effective feature representation for the reliable set is further investigated, where we introduce the optimal transport (OT) algorithm into the framework. Samples in the less reliable set are better aligned with the reliable set with the aid of the OT metric. Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over previous state-of-the-art methods on the COCO object detection benchmark.

Classifying complex data in the high-dimensional feature space is the core of most machine learning problems, especially with the emergence of deep learning for better feature embedding (BID3; BID10). Previous methods address the feature representation problem with the conventional cross-entropy loss, l_1/l_2 loss, or a regularization constraint on the loss term to ensure small intra-class variation and large inter-class distance (BID16; BID29; BID15). The goal of these works is to learn a more compact representation for each class in the feature space. In this paper, we also aim for such a goal and propose a new perspective to address the problem. Our observation is that samples can be grouped into two sets in the feature space. One set is more reliable, while the other is less reliable. For example, visual samples may be less reliable due to low resolution, occlusion, adverse lighting, noise, blur, etc. The learned features for samples from the reliable set are easier to classify than those from the less reliable one. Our hypothesis is that the reliable set can guide the feature learning of the less reliable set, in the spirit of a teacher supervising the student. We refer to this mutual learning process as a feature intertwiner. In this paper, a plug-and-play module, namely the feature intertwiner, is applied to object detection, which is the task of classifying and localizing objects in the wild.
An object of lower resolution will inevitably lose detailed information during the forward pass through the network. Therefore, it is well-known that the detection accuracy drops significantly as the resolution of objects decreases. We can treat samples with high resolution (often corresponding to large objects or region proposals) as the reliable set and samples with low resolution (small instances) as the less reliable set. Equipped with these two 'prototypical' sets, we can apply the feature intertwiner, where the reliable set is leveraged to help the feature learning of the less reliable set. (Figure 1: Without the intertwiner in (a), samples are more scattered and separated from each other. Note that there are several samples that are far from their own class and close to the samples of other categories (e.g., class person in blue), indicating a potential mistake in classification. With the aid of the feature intertwiner in (b), there are barely any outlier samples outside each cluster.) With the intertwiner, the features in the lower-resolution set approach closer to the features in the higher-resolution set, achieving the goal of compact centroids in the feature space. Empirically, these two settings correspond to the baseline and intertwiner experiments (marked in gray) in TAB3. The overall mAP metric increases from 32.8% to 35.2%, with an evident improvement of 2.6% for small instances and a satisfying increase of 0.8% for large counterparts. This suggests the proposed feature intertwiner could benefit both sets. Two important modifications are incorporated based on the preliminary intertwiner framework. The first is the use of a class-dependent historical representative stored in a buffer. Since there might be no large sample for the same category in one mini-batch during training, the record of all previous features of a given category for large instances is kept by a representative, whose value gets updated dynamically as training evolves. The second is the inclusion of the optimal transport (OT) divergence as a deluxe regularization in the feature intertwiner. The OT metric maps the comparison of two distributions in a high-dimensional feature space onto a lower-dimensional space, so that it is more sensible to measure the similarity between two distributions. For the feature intertwiner, OT is capable of enforcing the less reliable set to be better aligned with the reliable set. We name the detection system equipped with the feature intertwiner InterNet. Full code suite is available at https://github.com/hli2020/feature intertwiner. For brevity, we put the descriptions of dividing the two sets in the detection task, related work (partial), knowledge on OT theory, and additional experiments in the appendix.

Object detection (BID3; BID22; BID12) is one of the most fundamental computer vision tasks and serves as a precursor step for other high-level problems. It is challenging due to the complexity of features in high-dimensional space, the large intra-class variation, and the inter-class similarity across categories in benchmarks (BID4; BID27). Thanks to the development of deep network structures (BID25; BID3) and modern GPU hardware acceleration, this community has witnessed a great bloom in both performance and efficiency. The detection of small objects is addressed in the concurrent literature mainly in two manners. The first is by looking at the surrounding context (BID18), since a larger receptive field in the surrounding region can well compensate for the information loss on a tiny instance during down-sampling in the network.
The second is to adopt a multi-scale strategy (BID12; BID14; BID24) to handle the scale problem. This is probably the most effective way to identify objects of various sizes and can be seen in (almost) all detectors. Such a practice is a "sliding-window" version of warping features across different stages in the network, aiming to normalize the sizes of features for objects of different resolutions. The proposed feature intertwiner is perpendicular to these two solutions. We provide a new perspective on detecting small objects: leveraging feature guidance from high-resolution, reliable samples. Designing loss functions for learning better features. The standard cross-entropy loss does not constrain the intra-class variation. Several works have therefore focused on adding new constraints for intra-class regularization. Liu et al. (BID15) proposed the angular softmax loss to learn angularly discriminative features; the new loss is expected to have a smaller maximal intra-class distance than the minimal inter-class distance. The center loss (BID29) approach explicitly learns a centroid for each class and penalizes the distances between samples within the category and the center. Our feature intertwiner shares some spirit with this work in that the proposed buffer is also in charge of collecting feature representatives for each class. A simple modification (BID16) of the softmax loss, taking the inner product between the normalized feature input and the class centroid, also decreases the intra-class variation and improves classification accuracy. Our work takes a new perspective in using the reliable set to guide the less reliable set. In this paper, we adopt the Faster RCNN pipeline for object detection (BID3; BID22). In Faster RCNN, the input image is first fed into a backbone network to extract features; a region proposal network (RPN; BID22) is built on top of it to generate potential region proposals, which are candidate rectangular boxes that might contain objects. These region proposals vary in size. The features inside each region are then extracted and warped into the same spatial size (by RoI-pooling). Finally, the warped features are used by the subsequent CNN layers to classify whether an object exists in the region. We now describe how the idea of the feature intertwiner is adapted into the object detection framework. FIG2 shows the overall pipeline of the proposed InterNet. The network is divided into several levels based on the spatial size of the feature maps. For each level l, we split the set of region proposals into two categories: the large-region set, whose members are larger than the output size of the RoI-pooling layer, and the small-region set, whose members are smaller. These two sets correspond to the reliable and less reliable sets, respectively. For details on the generation of these two sets in object detection, refer to Sec. 6.2 in the appendix. The feature map P_l at level l is fed into the RoI layer and then passed on to a make-up layer. This layer is designed to fuel back the information lost during RoI-pooling and to compensate necessary details for instances of small resolution. The refined high-level semantics after this layer are robust to factors (such as pose, lighting, appearance, etc.) despite sample variations. It consists of one convolutional layer that does not alter the spatial size.
[Figure 2 caption: Blue blobs stand for the less reliable set (small objects) and green for the reliable set (large ones). For the current level l, the feature map P_l of the small set is first passed into a RoI-pooling layer and then fed into a make-up layer, which fuels back the information lost during RoI-pooling; it is optimized via the intertwiner module (yellow rectangle) with the aid of the reliable set (green). 'OT' (in red) stands for the optimal transport divergence, which aligns information between levels (for details see Sec. 3.3). P_{m|l} is the input feature map of the reliable set for the RoI layer; m indicates higher level(s) than l.] The make-up unit is learned and optimized via the intertwiner unit, with the aid of features from the large-object set, which is shown in the upper (green) stream of FIG2. The feature intertwiner is essentially a data-distribution measurement that evaluates the divergence between two sets. For the reliable set, the input is directly the outcome of the RoI layer applied to the large-object feature maps P_{m|l}, which correspond to samples of higher level/resolution. For the less reliable set, the input is the output of the make-up layer. Both inputs are fed into a critic module that extracts further representations of the two sets and provides evidence for the intertwiner. The critic consists of two convolutions that map the features to a larger channel size and reduce the spatial size to one, leaving spatial information out of consideration. A simple l_2 loss can be used to compare the difference between the two sets. The final loss is a combination of the standard detection losses (BID22) and the intertwiner loss across all levels. The detailed network structure of the make-up and critic modules in the feature intertwiner is given in the appendix (Sec. 6.6). Two problems arise when applying the aforementioned pipeline in practice. The first is that the two sets for the same category often do not occur simultaneously in one mini-batch; the second is how to choose the input source for the reliable set, i.e., the feature map P_{m|l} for the large-object set. We address these two points in the following sections. The goal of the feature intertwiner is to bring samples from the less reliable set close to the samples of the same category from the reliable set. In one mini-batch, however, it often happens that samples from the less reliable set are present while samples of the same category from the reliable set are absent (or vice versa). This makes it difficult to calculate the intertwiner loss between the two sets. To address this problem, we use a buffer B to store a representative (prototype) for each category. The representative is essentially the mean feature representation of large instances. Let F^(large)_critic denote the set of critic features of large-region objects across all levels, let f^(j) denote the feature of sample j in this set, and let d be its feature dimension. The buffer is generated as a mapping M from sample features to class representatives:

B = M(F^(large)_critic) = {b_i}, i = 1, ..., N_cls,    with    b_{i*} = (1/Z) Σ_j f^(j) · 1[label(j) = i*],

where the total number of classes is denoted by N_cls. Each entry b_i in the buffer B is referred to as the representative of class i. Every sample, indexed by j in the large-object set, contributes to the class representative i* if its label is i*; here i* denotes the label of sample j, and Z denotes the total number of instances whose label is i*. The representative is deemed a reliable source of feature representation and can be used to guide the learning of the less reliable set.
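To make the buffer mechanics concrete, below is a minimal PyTorch-style sketch (not the authors' code) of the class buffer with an "all-history" running-average update and of the l_2 intertwiner loss against the buffered representatives. The names `ClassBuffer`, `num_classes` and `feat_dim` are illustrative assumptions; note how the buffer is kept detached from back-propagation, matching the design choice discussed below.

```python
# A minimal sketch of the class buffer and intertwiner loss, assuming
# critic features have already been flattened to vectors of size feat_dim.
import torch
import torch.nn.functional as F

class ClassBuffer:
    """Keeps one running-average representative b_i per class,
    accumulated from critic features of large (reliable) instances."""
    def __init__(self, num_classes: int, feat_dim: int):
        self.reps = torch.zeros(num_classes, feat_dim)   # b_i
        self.counts = torch.zeros(num_classes)           # Z per class

    @torch.no_grad()  # buffer is detached from the gradient update
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        for f, y in zip(feats, labels):
            self.counts[y] += 1
            # incremental ("all history") running mean over training
            self.reps[y] += (f - self.reps[y]) / self.counts[y]

def intertwiner_loss(small_feats: torch.Tensor,
                     small_labels: torch.Tensor,
                     buffer: ClassBuffer) -> torch.Tensor:
    """l2 divergence between critic features of small (less reliable)
    samples and the buffered representatives of their classes."""
    targets = buffer.reps[small_labels]  # detached guidance targets
    return F.mse_loss(small_feats, targets)
```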
There are many options for designing the mapping M, e.g., a weighted average of all features from past training iterations within the class, as above, or feature statistics from only a window of past iterations, etc. We empirically discuss the different options in TAB3. Equipped with the class buffer, we define the intertwiner loss between the two sets as

L_intertwiner = Σ_l Σ_j D( f^(small,l,j)_critic, b_{i*(j)} ),

where D is a divergence measurement and f^(small,l,j)_critic denotes the semantic feature after the critic of the j-th sample at level l in the less reliable set (small instances). Note that the feature intertwiner is proposed to optimize the feature learning of the less reliable set at each level. During inference, the green flow shown in FIG2 for obtaining the class buffer is removed. Discussion on the intertwiner. (a) Through such mutual learning, the features of small-region objects gradually encode the affluent details of their large-region counterparts, ensuring that the semantic features within one category are as similar as possible despite the visual appearance variation caused by resolution change. The resolution imperfection of small instances inherited from the RoI interpolation is compensated by mimicking a more reliable set. Such a mechanism can be seen as teacher-student guidance in the self-supervised domain. (b) It is observed that if the representative b_i is detached from the back-propagation process (i.e., no backward gradient update in the buffer), performance gets better. The buffer is used as guidance for less reliable samples. As the contents of the buffer change while training evolves, excluding the buffer from the network update favorably stabilizes the model's convergence. Such a practice shares a similar spirit with the replay-memory update in deep reinforcement learning. (c) The buffer statistics come from all levels. Note that the concepts of "large" and "small" are relative: large proposals at the current level could be deemed "small" at the next level. However, the level-agnostic buffer always receives semantic features of (strictly) large instances. This is why there are improvements across all levels (large or small objects) in the experiments. How do we acquire the input source, denoted P^(large,l), i.e., the feature maps of large proposals, to be fed into the RoI layer at the current level l? The feature maps, denoted P_l or P_m, are the outputs of ResNet at different stages, corresponding to different resolutions. Altogether we use four stages, P_2 to P_5; P_2 corresponds to the feature maps of the highest resolution and P_5 has the lowest resolution. These inputs are crucial since they serve as the guidance targets to be learned by small instances. There are several choices, depicted in Fig. 3. Option (a): P^(large,l) = P_l. The most straightforward manner would be to use features at the current level as the input for the large-object set. This is inappropriate since P_l is trained in the RPN specifically for identifying small objects; adopting it as the source could introduce noisy details of small instances. Option (b): P^(large,l) = P_m. Here m and l denote stage/level indices in ResNet and m > l. One can utilize the higher-level feature map(s), which have the proper resolution for large objects. Compared with P_l, P_m has lower resolution and higher-level semantics. For example, consider the large instances assigned to level l = 2 (how large and small instances are assigned is discussed in the appendix, Sec. 6.2); P_m then comprises the three stages m = 3, 4, 5.
However, among these large instances, some are themselves deemed small objects at the higher level m, implying that their feature maps in P_m might not carry enough information; they would also have to be up-sampled during the RoI operation when updating the buffer at the current level l. Taking TAB6 in the appendix as an example, among the 98 proposals assigned to level 2, there are 31 (11 at level 3 and 20 at level 4) objects of insufficient size (smaller than the RoI output). Hence it might be inappropriate to directly use the high-level feature maps as well. Option (c): P^(large,l) = P_{m|l} = F(P_m). P_m is first up-sampled to match the size of P_l and is then RoI-pooled, with the outcome denoted P_{m|l}. The up-sampling operation aims at optimizing a mapping F: P_m → P_{m|l} that can recover the information of large objects at a shallow level. F can be as simple as a bilinear interpolation or a neural network. These three options are empirically compared in Table 1. The baseline model in (b) corresponds to the default setting in cases 2d and 2e of TAB3, where the feature intertwiner is already adopted. There is a 0.8% AP boost from option (b) to (c), suggesting that P_m for large objects should be converted back to the feature space of P_l. The gain from (a) to (c) is more evident, which verifies that it might not be good to use P_l directly. More analysis is provided in the appendix. Option (c) is a better choice for obtaining the reliable feature set of large-region objects. Furthermore, we build on top of this choice and introduce a better alternative to connect P_l and P_{m|l}, since the intertwiner is designed to guide the feature learning of the less reliable set at the current level. If some constraint is introduced to keep the information better aligned between the two sets, the modified input source P_{m|l} for large instances becomes more proper for the other set to learn from. Option (d): P^(large,l) = OT(P_l, P_{m|l}). The spirit of moving one distribution into another in the most effective manner fits well into the optimal transport (OT) domain (BID19). In this work, we incorporate an OT unit between the feature maps P_l and P_{m|l}, which serve as inputs before the RoI-pooling operation. A discretized version (BID6; BID2) of the OT divergence is employed as an additional regularization to the loss:

W_Q(p_l, p_{m|l}) = min_P <Q, P>,

where the non-negative P serves as a proxy for the coupling and satisfies P^T 1_{C_2} = 1_{C_1} and P 1_{C_1} = 1_{C_2}; <·, ·> indicates the Frobenius dot-product between two matrices and 1_m := (1/m, ..., 1/m) ∈ R^m_+. The problem now boils down to computing P given some ground cost Q. We adopt the Sinkhorn algorithm (BID26) in an iterative manner to compute W_Q, which yields a differentiable loss function. The OT divergence is hence referred to as the Sinkhorn divergence. Given the feature maps P_m from the higher level, the generator network F up-samples them to match the size of P_l and outputs P_{m|l}. The channel dimension of P_l and P_{m|l} is denoted C. The critic unit H (not the proposed critic unit in the feature intertwiner) is designed to reduce the spatial dimensionality of its input to a lower dimension k while keeping the channel dimension unchanged. The number of samples in each distribution is C. The outcomes of the critic unit in the OT module are denoted p_l and p_{m|l}, respectively. We choose the cosine distance to measure the distance between the manifolds. The result is the ground cost Q_{x,y}, where x and y index samples in the two distributions.
The complete workflow for computing the Sinkhorn divergence is summarized in Alg. 1. Note that each level owns its own OT module, W^l_Q(P_l, P_m) = OT(P_l, P_{m|l}). The total loss for the detector is

L = L_standard + Σ_l ( L^l_intertwiner + W^l_Q ),

where L_standard comprises the classification and regression losses defined in most detectors (BID22).

Algorithm 1: Sinkhorn divergence W_Q adapted for object detection (red rectangle in FIG2).
Input: feature maps at the current and higher levels, P_l, P_m; the generator network F and the critic unit H in the OT module.
Output: Sinkhorn loss W^l_Q(P_l, P_m) = OT(P_l, P_{m|l}).
1. Upsample via the generator: P_{m|l} = F(P_m).
2. Feed both inputs into the critic: p_l = H(P_l), p_{m|l} = H(P_{m|l}).
3. Compute the ground cost Q from the cosine distance between p_l and p_{m|l}, and the Gibbs kernel K = exp(-Q/β).
4. for each iteration do: b ← 1_{C_2} ⊘ (K^T a), a ← 1_{C_1} ⊘ (K b) (the Sinkhorn iterates) end for
5. Compute the proxy coupling matrix P = diag(a) K diag(b) and return W_Q = <Q, P>.

Why prefer OT to other alternatives? As proved in BID0, the OT metric converges while other variants (the KL or JS divergence) do not in some scenarios. OT provides sensible cost functions when learning distributions supported by low-dimensional manifolds (in our case, p_l and p_{m|l}), while the alternatives do not. As verified by the experiments in Table 1, such a property facilitates training towards a larger gap between positive and false samples. In essence, the OT metric maps the comparison of two distributions in a high-dimensional feature space onto a lower-dimensional space. Using the Euclidean distance improves AP by around 0.5% (see Table 1, case (d) with l_2), but does not gain as much as OT does. This is probably due to the complexity of feature representations in high-dimensional space, especially those learned by deep models. We evaluate InterNet on the object detection track of the challenging COCO benchmark. For training, we follow common practice (BID22) and use the trainval35k split (the union of the 80k images from train and a random 35k subset of images from the 40k val split). The lesion and sensitivity studies are reported by evaluating on the minival split (the remaining 5k images from val). For all experiments, we use depth-50 or depth-101 ResNet (BID3) with FPN (BID12) constructed on top. We base the framework on Mask-RCNN without the segmentation branch. All ablative analyses adopt austere settings: training and test image scale only at 512; no multi-scale and no data augmentation (except for horizontal flip). Details of the training and test procedures are provided in the appendix (Sec. 6.5). Baseline comparison. TAB3 compares InterNet to the baseline, where both methods share the same settings. On average it improves mAP by 2 points. The gain for small objects is much more evident. Note that our method also enhances the detection of large objects (by 0.8%), since the last level also participates in the intertwiner update by comparing its similarity feature to the history buffer, which requires features of the same category to be closer to each other. The last level does not contribute to the buffer update, though. Assignment strategy (analysis based on Sec. 6.2). Table 2a also investigates the effect of different region-proposal allocations. 'by RoI size' divides proposals whose area is below the RoI threshold in TAB6 as small and above as large; 'more on higher' indicates a smaller base value (=40) in the assignment rule; the default setting follows BID12, where the base is set to 224. At first we expected that putting more proposals on higher levels (the first two cases) would balance the workload of the intertwiner, since the default setting leans towards too many proposals at level 2.
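As a companion to Alg. 1, here is a minimal, self-contained sketch of the Sinkhorn divergence with a cosine ground cost. The entropic regularisation `eps` and the iteration count `n_iters` are illustrative hyper-parameters of my own choosing, not values from the paper, and the function names are assumptions.

```python
# A sketch of the Sinkhorn divergence used as the OT regulariser.
# p_l and p_ml are the critic outputs, each of shape (C, k): C channel
# "samples" of dimension k, treated as two empirical distributions.
import torch

def cosine_cost(p_l: torch.Tensor, p_ml: torch.Tensor) -> torch.Tensor:
    """Ground cost Q: pairwise cosine distance between channel samples."""
    a = torch.nn.functional.normalize(p_l, dim=1)
    b = torch.nn.functional.normalize(p_ml, dim=1)
    return 1.0 - a @ b.t()                          # shape (C1, C2)

def sinkhorn_divergence(p_l, p_ml, eps=0.1, n_iters=50):
    Q = cosine_cost(p_l, p_ml)
    C1, C2 = Q.shape
    mu = torch.full((C1,), 1.0 / C1)                # uniform marginal 1_{C1}
    nu = torch.full((C2,), 1.0 / C2)                # uniform marginal 1_{C2}
    K = torch.exp(-Q / eps)                         # Gibbs kernel
    a = torch.ones_like(mu)
    for _ in range(n_iters):                        # Sinkhorn iterates
        b = nu / (K.t() @ a + 1e-8)
        a = mu / (K @ b + 1e-8)
    P = torch.diag(a) @ K @ torch.diag(b)           # proxy coupling
    return (P * Q).sum()                            # Frobenius dot-product <Q, P>
```

Since every operation above is differentiable, the returned scalar can simply be added to the detector loss and back-propagated through both F and H.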
However, there is no gain, due to misalignment with the RPN training: the distribution of anchor templates in the RPN does not alter accordingly, resulting in inappropriate use of the backbone feature maps. Intertwiner loss. The upper block of TAB3 uses a factor of 1.0 when merging the intertwiner loss into the total loss, whereas the lower block uses a specific factor that achieves better AP than others. The simple l_2 loss performs slightly better than the KL divergence. The l_1 option is around 1 point inferior to these two, yet it still verifies the effectiveness of the intertwiner module compared with the baseline (34.2 vs 32.8), implying the generalization ability of our method across different loss options. How does the intertwiner module affect learning? By measuring the divergence between the two sets (i.e., small proposals in the batch and large references in the buffer), gradients are back-propagated from the critic to the make-up layer. In the end, the make-up layer is optimized to enforce the raw RoI outputs to recover the details lost through the reduced resolution. The naive design, denoted 'separate', achieves 34.0% AP, as shown in TAB3. To make the influence of the intertwiner stronger, we linearly combine the features after the critic with the original detection features (with equal weights, i.e. 0.5; not shown in FIG2) and feed this combination into the final detection heads. This improves AP by 1 point (denoted 'linear' in TAB3). The 'naive add' case with equal weights of 1 does not work (the loss suddenly explodes during training), since the amplitudes of the features from these two sources vary differently when simply added. Does a limited buffer of recent features suffice? TAB3 shows that it does not. A natural thought would be to keep a sliding window of size K holding the most recent features. In general, a larger size improves performance (see case '2000' vs a window of 'one epoch' with batch size 8, 37.3% → 38.8%). In these cases, the statistics of large-object features for one category cannot reflect the whole training set and keep alternating as the network is updated. Using 'all history' data via a running average not only saves memory but also captures the whole picture of the data. Initially, we chose a decayed scheme that weighs recent features more than older ones, expecting the model to be optimized better as training evolves. However, the experiments do not accord with this assumption: AP is better when features are equally averaged (cf. 40.5% and 39.2%) over the course of network evolution. Unified or level-based buffer? Unified. The upper block of TAB3 reports this comparison. In early experiments, we used only one unified buffer so that objects at the last level are also involved in the intertwiner. Besides, the visual features of large objects should be irrelevant to scale variation. This already achieves a satisfying AP. We also tried applying a different buffer to each level; the performance improvement is slight, although the additional memory cost is minor. Other investigations. As discussed at the end of Sec. 3.1, detaching the buffer transaction from the gradient update brings improvement (40.5% vs 40.1% in TAB3). Moreover, we tried imposing stronger supervision on the similarity features of large proposals by branching out a cross-entropy loss, with the purpose of diversifying the critic outputs among different categories. However, it does not work; this additional loss seems to dominate the training process. Performance.
We compare our InterNet with previous state-of-the-art methods in TAB7 in the appendix. Without the multi-scale technique, ours (42.5%) still favorably outperforms other two-stage detectors (e.g., Mask-RCNN, 39.2%) as well as the one-stage detector SSD (31.2%). Moreover, FIG3 shows the per-class improvement between the baseline and the improved model after adopting the feature intertwiner. Regarding the distinct drop for the 'couch' class, we find that on a large couch among the COCO samples, there usually sit a bunch of people, stuff, pets, etc., and yet the annotation in these cases covers the whole scenario including these distractors, making the feature representation of the large couch quite inaccurate. The less accurate features then guide the learning of their small counterparts, resulting in a lower AP for this class. Model complexity and timing. The feature intertwiner only adds three light-weight convolutional layers in the make-up and critic units. The class buffer could take up some GPU memory on the fly; however, since we adopt the 'all-history' strategy, the window size is just 1 instead of a much larger K. Additional model parameters also come from the OT module at each level; however, we find that using just one convolutional layer for the critic H and two convolutional layers with small kernels for the generator F is enough to achieve good results. Training on 8 GPUs with a batch size of 8 takes around 3.4 days; this is slower than Mask-RCNN as reported in its paper. The memory cost on each card is 9.6 GB, compared with the baseline's 8.3 GB. Inference runs at 325 ms per image (input size 800) on a Titan Pascal X, around 5% more time than the baseline (308 ms). We did not intentionally optimize the codebase, however. In this paper, we propose a feature intertwiner module that leverages the features of a more reliable set to guide the feature learning of a less reliable set. This is a better solution for generating a more compact centroid representation in the high-dimensional space. It is assumed that the high-level semantic features within the same category should resemble each other as much as possible among samples with different visual variations. The mutual learning process helps the two sets achieve closer within-cluster distances in each class. The intertwiner is applied to the object detection task, where a historical buffer is proposed to address the sample-missing problem within a mini-batch and optimal transport (OT) theory is introduced to enforce similarity between the two sets. Since the features of the reliable set serve as the teacher in feature learning, careful preparation of these features is required so that they match the information in the small-object set. This is why we design different options for the large set and finally choose OT as the solution. With the aid of the feature intertwiner, we improve detection performance by a large margin compared to previous state-of-the-art methods, especially for small instances. The feature intertwiner is positioned as a general alternative for feature learning. As long as there exists a proper division into one reliable set and another less reliable set, one can apply the idea of utilizing the reliable set to guide the feature learning of the other, based on the hypothesis that the two sets share a similar distribution in some feature space. One direction for future work is to apply the feature intertwiner to other domains, e.g., data classification, when a proper set division is available. Self-supervised learning.
The buffer in the feature intertwiner can be seen as utilizing knowledge aggregated over one set of data to help supervise the feature learning of another set in a high-dimensional space. Such a spirit falls into the self-supervised learning domain. Chen et al. proposed a knowledge distillation framework to learn compact and accurate object detectors: a teacher model with more capacity provides strong information and guides the learning of a lightweight student model. The center loss (BID29) is formulated to learn a class center and to penalize samples that have a larger distance from the centroid; it aims at enlarging inter-class separation via the cross-entropy (CE) loss while narrowing down inner-class divergence, for face recognition. In our work, the feature intertwiner gradually aggregates the statistics of a meta-subset and utilizes them as targets during the feature learning of a less accurate (yet majority) subset. We are inspired by the proposal-split mechanism in the object detection domain, which learns recognition at separate scales in the network. The self-paced learning framework also deals with two sets: easy examples are first introduced to optimize the hidden variable, and the hard examples are involved later in training. There, however, is no interaction between the two sets, and the division is based on splitting different samples; in our framework, the two sets mutually help and interact with each other, and the goal is to optimize a more compact class centroid in the feature space. These are two different branches of work. Optimal transport (OT) has been applied to two important tasks. One is transfer learning in the domain adaptation problem: Lu et al. (BID17) explored prior knowledge in the cost matrix and applied an OT loss as a soft penalty for bridging the gap between target and source predictions. Another is estimating generative models: in BID23, a metric combines OT in primal form with an energy distance defined in a highly discriminative feature space, yielding unbiased gradients; Genevay et al. (BID6) presented the first tractable method to train large-scale generative models using an OT-based loss. We are inspired by these works in the sense that the OT metric is favorably competitive for measuring the divergence between two distributions supported on low-dimensional manifolds. In this paper we adopt the ResNet model (BID3) with the feature pyramid dressings (BID12) constructed on top. It generates five levels of feature maps that serve as inputs for the subsequent RPN and detection branches. Denote the level index l ∈ {1, ..., 5} and the corresponding feature maps P_l. Level l = 1 is the shallowest stage, with more local details for detecting tiny objects, and level l = 5 is the deepest stage, with high-level semantics. Let A = {a_j} denote the whole set of proposals generated by the RPN from l = 2 to l = 6 (level six is generated from l = 5; for details refer to BID12). The region proposals are divided among the levels from l = 2 to l = 5 by

l = min( max( ⌊ a_0 + log_2( sqrt(w · h) / base ) ⌋, 2 ), 5 ),

where w and h are the width and height of proposal a_j, a_0 = 4 as in BID12, and base = 224 is the canonical ImageNet pre-training size. TAB6 shows a detailed breakdown of the proposal allocation based on this rule. We can see that most proposals from the RPN focus on identifying small objects and hence are allocated to the shallow level l = 2. The threshold is set to the ratio of the RoI output's area over the area of the feature map. For example, the threshold at l = 3 is obtained as (14/64)^2, where 14 is the default RoI output size.
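The assignment rule and the per-level small/large split it induces are easy to verify in a few lines. Below is a sketch under the stated values a_0 = 4 and base = 224; the function names and the (x, y, w, h) proposal format are my own illustrative choices.

```python
# A sketch of the FPN-style proposal-to-level assignment and the
# per-level split into small (== level) and large (> level) sets.
import math

def assign_level(w: float, h: float, a0: int = 4, base: int = 224) -> int:
    """Map an RoI of size w x h to a pyramid level clamped to [2, 5]."""
    level = int(a0 + math.log2(math.sqrt(w * h) / base))
    return min(max(level, 2), 5)

def split_sets(rois, level: int):
    """Small set: RoIs assigned exactly to `level`;
    large set: RoIs assigned to any higher level."""
    small = [r for r in rois if assign_level(r[2], r[3]) == level]
    large = [r for r in rois if assign_level(r[2], r[3]) > level]
    return small, large

# rois given as (x, y, w, h) tuples
rois = [(0, 0, 40, 60), (0, 0, 300, 400), (0, 0, 900, 700)]
print(split_sets(rois, level=2))
# -> small: the 40x60 proposal; large: the two bigger proposals
```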
Proposals whose area is below the threshold suffer from the inherent design of the RoI operation: their feature outputs are up-sampled by a simple interpolation. The information of these small regions is already lost, and the RoI layer does not help much to recover it. As shown in the fourth row ("below # / above #"), such cases hold the majority. This observation motivates designing a meta-learner to provide guidance for the feature learning of small objects, given this loophole in the RoI layer. For level l in the network, we define small proposals (or RoIs) to be those assigned exactly to level l by the rule above, and large proposals to be those assigned above l:

A^s_l = { a_j ∈ A : level(a_j) = l },    A^b_l = { a_j ∈ A : level(a_j) > l },

where the superscripts s and b denote the sets of small and large proposals, respectively. The last two rows in TAB6 show an example of the assignment. These RoIs are then fed into the RoI-pooling layer to generate output feature maps for the subsequent detection pipeline to process. One may wonder whether the last level lacks large objects for reference under this definition. In preliminary experiments, leaving the proposals at the last level out of the intertwiner already improves the overall performance; however, if the last level is also involved (since the buffer is shared across all levels), AP for large objects improves as well. See the experiments in Sec. 4.1 for detailed analysis. Let u and u' indicate individual samples after degenerating the high-dimensional features P_{m|l} and P_l from the two spaces into low-dimensional manifolds; u and u' are vectors of dimension k. The numbers of samples in the two distributions are denoted C_1 and C_2, respectively. The OT metric between two joint probability distributions supported on the two spaces (U, U') is defined as the solution of a linear program (BID2). Denoting the data and reference distributions as P_ψ, P_r ∈ Prob(U), respectively, we have the continuous form of the OT divergence:

W_Q(P_ψ, P_r) = inf_{γ ∈ Γ(P_ψ, P_r)} E_{(u, u') ∼ γ} [ Q(u, u') ],

where γ is a coupling and Γ is the set of couplings, consisting of joint distributions. Intuitively, γ(u, u') implies how much "mass" must be transported from u to u' in order to transform the distribution P_ψ into P_r, and Q is the "ground cost" of moving a unit mass. The expression above becomes the p-Wasserstein distance (or loss, divergence) between probability measures when U is equipped with a distance D_U and Q = D_U(u, u')^p, for some exponent p. The biased version of the Sinkhorn divergence used in Table 1 is defined by

W̄_Q(P_ψ, P_r) = 2 W_Q(P_ψ, P_r) − W_Q(P_ψ, P_ψ) − W_Q(P_r, P_r).

More analysis on Table 1. All these options have been discussed explicitly at the beginning of Sec. 3.3. Option (a) is inferior due to the inappropriateness of its feature maps; (b) serves as the baseline and is used as the default setting in TAB3. Option (c) verifies that up-sampling feature maps from the higher level onto the current level is preferable, and F being a neural network ensures a better improvement. Option (d) illustrates the case where a supervision signal is imposed on the pair (P_l, P_{m|l}) to better align them. We observe that OT outperforms the other variants in this setup. Moreover, we tried the biased version (BID6) of the Sinkhorn divergence above. However, it does not bring much gain compared to the previous setup, and it burdens system efficiency during training (although the overhead is minor relative to the total time per iteration). Such a phenomenon could result from an improper update of the critic and generator inside the OT module, since the gradient flow is iterated twice more for the last two terms above. Extending the OT divergence to image classification.
We also test the OT divergence on CIFAR-10, where feature maps between stages are aligned via OT. The test error decreases by around 1.3%. This suggests the potential application of OT in various vision tasks. Different from OT in generative models, we deem the channel dimension as providing the different samples to compare, instead of the batch-wise manner in BID23, and we treat the optimization of F and H as a unified minimization problem, as opposed to adversarial training (BID6). 6.4 COMPARISON TO STATE-OF-THE-ART ON COCO AND PASCAL VOC. To further verify the effectiveness of the feature intertwiner, we conduct experiments on the PASCAL VOC 2007 dataset. The results are shown in Table 5. Two network structures are adopted. For ResNet-101, the division into the four levels is similar to ResNet-101-FPN on COCO; for VGG-16, we take a division similar to that stated in SSD (BID14). Specifically, the outputs of layers 'conv7', 'conv8_2', 'conv9_2' and 'conv10_2' are used for P_2 to P_5, respectively. Our method performs favorably against others with both backbone structures on the PASCAL dataset. We adopt stochastic gradient descent as the optimizer. The initial learning rate is 0.01, with momentum 0.9 and weight decay 0.0001. Altogether there are 13 epochs for most models, and the learning rate is dropped by 90% at epochs 6 and 10. We find that the warm-up strategy (BID8) barely improves performance and hence do not adopt it. Gradient clipping is introduced to prevent the training loss from exploding in the first few iterations, with a maximum gradient norm of 5. The batch size is set to 8 and the system runs on 8 GPUs. The object detector is based on Mask-RCNN (or Faster-RCNN); RoIAlign is adopted for better performance. The model is initialized with the corresponding ResNet model pretrained on ImageNet. The newly proposed feature intertwiner module is trained from scratch with standard initialization. The basic backbone structure for extracting features is the FPN network (BID12), where five ResNet blocks are employed with up-sampling layers. The region proposal network consists of one convolutional layer plus one classification and one regression layer. The classifier structure is similar to the RPN's: one convolution plus an additional classification/regression head. Non-maximum suppression (NMS) is used during RPN generation and in the detection test phase; the threshold for RPN is set to 0.7, while the value is 0.3 during testing. We do not adopt a dense allocation of anchor templates as in some literature (BID14); each pixel at a level has only as many anchors as there are aspect ratios (set to 0.5, 1 and 2). Each of the five levels owns a unique anchor size: 32, 64, 128, 256 and 512. The detailed network architectures of the make-up layer and critic layer are shown below.

Output size | Layers in the make-up module
B × C_l × 14 × 14 | conv2d(C_l, C_l, k = 3, padding = 1)
B × C_l × 14 × 14 | batchnorm2d(C_l)
B × C_l × 14 × 14 | relu(·)

Table 6: Network structure of the make-up unit, which consists of one convolutional layer without altering the spatial size. Input: the RoI output of the small-set feature map P_l; we denote the output of the make-up layer as P̃_l. B is the batch size of one mini-batch; C_l is the number of channels after the feature extractor in the ResNet blocks for each level. For example, when l = 2, C_l = 256, etc.
Output size | Layers in the critic module
B × 512 × 7 × 7 | conv2d(C_l, 512, k = 3, padding = 1, stride = 2)
B × 512 × 7 × 7 | batchnorm2d
B × 512 × 7 × 7 | relu(·)
B × 1024 × 1 × 1 | conv2d(512, 1024, k = 7)
B × 1024 × 1 × 1 | batchnorm1d
B × 1024 × 1 × 1 | relu(·)
B × 1024 × 1 × 1 | sigmoid(·)

Table 7: Network structure of the critic unit. Input: for the large set, the RoI output of the large-set feature map P_{m|l}; for the small set, the output of the make-up layer P̃_l. B is the batch size of one mini-batch; C_l is the number of channels in the ResNet blocks.
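For concreteness, here is a PyTorch sketch of the two units following Tables 6 and 7. The layer choices mirror the tables; everything else (function names, `inplace` flags) is an illustrative assumption, and BatchNorm2d is used where Table 7 lists batchnorm1d, since the tensor at that point is still 4-D.

```python
# A sketch of the make-up and critic units per Tables 6 and 7.
import torch.nn as nn

def make_up_unit(c_l: int) -> nn.Sequential:
    """One 3x3 convolution that keeps the 14x14 spatial size (Table 6)."""
    return nn.Sequential(
        nn.Conv2d(c_l, c_l, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_l),
        nn.ReLU(inplace=True),
    )

def critic_unit(c_l: int) -> nn.Sequential:
    """Two convolutions: widen channels (14x14 -> 7x7), then collapse
    the spatial dimension to 1x1 (Table 7)."""
    return nn.Sequential(
        nn.Conv2d(c_l, 512, kernel_size=3, padding=1, stride=2),
        nn.BatchNorm2d(512),
        nn.ReLU(inplace=True),
        nn.Conv2d(512, 1024, kernel_size=7),
        nn.BatchNorm2d(1024),  # Table 7 lists batchnorm1d; 2d used on the 4-D tensor
        nn.ReLU(inplace=True),
        nn.Sigmoid(),
    )
```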
(Camera-ready version) A feature intertwiner module to leverage features from one accurate set to help the learning of another less reliable set.
1,415
scitldr
Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting, but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach, called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting. Continual learning (CL), sometimes called lifelong or incremental learning, refers to an online framework where the knowledge acquired from learning tasks in the past is kept and accumulated so that it can be reused in the present and future. Data belonging to different tasks could potentially be non-i.i.d. A continual learner must be able to learn a new task, crucially, without forgetting previous tasks. In addition, CL frameworks should continually adapt to any domain shift occurring across tasks. The learning updates must be incremental - i.e., the model is updated at each task only using the new data and the old model, without access to all previous data (from earlier tasks) - due to speed, security and privacy constraints. A compromise must be found between adapting to new tasks and enforcing stability to preserve knowledge from previous tasks. Excessive adaptation could lead to inadvertent forgetting of how to perform earlier tasks. Indeed, catastrophic forgetting is one of the main pathologies in continual learning. Many approaches to continual learning employ an architecture which is divided a priori into (i) a slowly evolving, global part and (ii) a quickly evolving, task-specific, local part. This is one way to enable multi-task transfer whilst mitigating catastrophic forgetting, and it has proven effective, albeit with limitations. Specifying the shared global part and the task-specific local parts of the architecture a priori restricts flexibility. As more complex and heterogeneous tasks are considered, one would like a more flexible, data-driven approach to determine the appropriate amount of sharing across tasks. Here, we aim at automating the architecture adaptation process so that each neuron of the network can either be kept intact, i.e. acting as global, or adapted to the new task locally. Our proposed variational inference framework is flexible enough to learn the range within which the adaptation parameters can vary. We introduce, for each neuron, one binary parameter controlling whether or not to adapt, and two parameters controlling the magnitude of adaptation. All parameters are learnt via variational inference.
We introduce our framework as an expansion of the variational continual learning (VCL) algorithm, whose variational and sequential Bayesian nature makes it convenient for our modelling and architecture adaptation procedure. Our modelling ideas can also be applied to other continual learning frameworks; see the Appendix for a brief discussion. We highlight the following contributions: (i) a modelling framework which flexibly automates the adaptation of the local and global parts of the (multi-task) continual architecture, optimizing the trade-off between mitigating catastrophic forgetting and improving task transfer; (ii) a probabilistic variational inference algorithm which supports incremental updates with adaptively learned parameters; (iii) the ability to combine our modelling and inference approaches without any significant augmentation of the architecture (no new neurons are needed); and (iv) state-of-the-art results in six experiments on five datasets, which demonstrate the effectiveness of our framework in terms of overall accuracy and of reducing catastrophic forgetting. We briefly discuss three related approaches to continual learning: (a) regularisation-based, (b) architecture-based and (c) memory-based. We provide more details of related work in Section A in the Appendix. (a) A complementary approach to CLAW is the regularisation-based approach to balance adaptability with catastrophic forgetting: a level of stability is kept by protecting parameters that greatly influence the prediction against radical changes, while allowing the rest of the parameters to change without restriction. The elastic weight consolidation (EWC) algorithm is a seminal example, where a quadratic penalty is imposed on the difference between parameter values of the old and new tasks. One limitation is the high level of hand-tuning required. (b) The architecture-based approach aims to deal with stability and adaptation issues via a fixed division of the architecture into global and local parts. (c) The memory-based approach relies on episodic memory to store data (or pseudo-data) from previous tasks. Limitations include the overheads of data storage, replay, and the optimisation needed to select (or generate) the stored points. CLAW can be seen as a combination of a regularisation-based approach (the variational inference mechanism) and a modelling approach which automates the architecture-building process in a data-driven manner, avoiding the overhead resulting from either storing or generating data points from previous tasks. CLAW is also orthogonal to (and simple to combine with, if needed) memory-based methods. In this paper, we use Variational Continual Learning as the underlying continual learning framework; however, our methods apply to other frameworks as well (see Appendix, Section A.1). VCL is a variational Bayesian framework where the posterior of the model parameters θ is learnt and updated continually from a sequence of T datasets, D_t = {(x_t^(n), y_t^(n))}_{n=1}^{N_t}, where t = 1, 2, ..., T and N_t is the size of the dataset associated with the t-th task. More specifically, denote by p(y|θ, x) the probability distribution returned by a discriminative classifier with input x, output y and parameters θ. We approximate the intractable posterior p(θ|D_{1:t}) after observing the first t datasets via a tractable variational distribution q_t, updated recursively as

q_t(θ) ≈ (1/Z_t) q_{t−1}(θ) p(D_t|θ),

where q_0(θ) is the prior p(θ) and Z_t is the normalizing constant, which does not depend on θ but only on the data D_t.
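To make the VCL recursion concrete, here is a minimal sketch (my own, not the paper's code) of the resulting objective with a mean-field Gaussian over a flat parameter vector: the posterior after task t-1 becomes the prior for task t, and the new posterior is fit by maximising the ELBO below. The function and argument names are illustrative assumptions.

```python
# A sketch of a mean-field Gaussian VCL update: maximise
# E_q[log p(D_t | theta)] - KL(q_t || q_{t-1}) w.r.t. (mu_q, logvar_q).
import torch

def kl_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over parameters."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * (logvar_p - logvar_q
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1).sum()

def vcl_elbo(log_lik_fn, mu_q, logvar_q, mu_prev, logvar_prev, n_samples=4):
    """log_lik_fn(theta) returns sum_n log p(y_t^(n) | theta, x_t^(n))."""
    expected_ll = 0.0
    for _ in range(n_samples):
        eps = torch.randn_like(mu_q)
        theta = mu_q + (0.5 * logvar_q).exp() * eps  # reparameterisation
        expected_ll = expected_ll + log_lik_fn(theta) / n_samples
    return expected_ll - kl_gaussians(mu_q, logvar_q, mu_prev, logvar_prev)
```

At the end of task t, (mu_q, logvar_q) are frozen and handed over as (mu_prev, logvar_prev) for task t + 1, which is exactly the incremental update described next.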
This framework allows the approximate posterior q_t(θ) to be updated incrementally from the previous approximate posterior q_{t−1}(θ) in an online fashion. In VCL, the approximation above is performed by minimizing the following KL-divergence over a family Q of tractable distributions:

q_t = argmin_{q ∈ Q} KL( q(θ) || (1/Z_t) q_{t−1}(θ) p(D_t|θ) ).

This framework can be enhanced to further mitigate catastrophic forgetting by using a coreset, i.e. a representative set of data from previously observed tasks that can serve as memory and can be revisited before making a decision. As discussed in the Related Work, this leads to overhead costs of memory and optimisation (selecting the most representative data points). Previous work on VCL considered simple models, without automatic architecture building or adaptation. In earlier CL approaches, the parts of the network architecture that are shared among the learnt tasks are designated a priori. To alleviate this rigidity and to effectively balance adaptation and stability, we propose a multi-task, continual model in which the adaptation of the architecture is data-driven: we learn which neurons need to be adapted as well as the maximum adaptation capacity of each. All the model parameters (including those used for adaptation) are estimated via an efficient variational inference algorithm which learns incrementally from the data of successive tasks, without needing to store (or generate) data from previous tasks and with no expansion of the network size. With model parameters θ, the overall variational objective we aim at maximising at task t is equivalent to the following online marginal likelihood bound:

E_{q_t(θ)} [ Σ_{n=1}^{N_t} log p( y_t^(n) | θ, x_t^(n) ) ] − KL( q_t(θ) || q_{t−1}(θ) ).

We propose a framework in which the architecture, with parameters θ, is flexibly adapted based on the available tasks, via the learning procedure described below. With each task, we automate the adaptation of the neuron contributions. Both the adaptation decisions (i.e. whether or not to adapt) and the maximum allowed degree of adaptation of every neuron are learnt. We refer to the binary adaptation variable as α. Another variable, s, is learnt in a multi-task fashion to control the maximum degree of adaptation, such that the expression b = s / (1 + e^{−a}) − 1 limits how far the task-specific weights can differ from the global weights, in case the respective neuron is to be adapted. The parameter a depicts unconstrained adaptation, as described later. We illustrate the proposed model, which performs this adaptation by learning the probabilistic contributions of the different neurons within the network architecture on a task-by-task basis, and then give the inference details. The steps of the proposed modelling are as follows: • For a task T, the classifier we are modelling outputs p(y | x, w_T), computed by the network with task-adapted weights w_T. • The task-specific weights w_T are expressed in terms of their global counterparts w as w_T = w ∘ (1 + b_T ∘ α_T), where the symbol ∘ denotes an element-wise (Hadamard) multiplication. • For each task T and each neuron j at layer i, α^T_{i,j} is a binary variable indicating whether the corresponding weight is adapted (α^T_{i,j} = 1) or unadapted (α^T_{i,j} = 0). Initially assume that the adaptation variable α^T_{i,j} follows a Bernoulli distribution with probability p_{i,j}: α^T_{i,j} ∼ Bernoulli(p_{i,j}). Since this Bernoulli is not straightforward to optimise, and to adopt a scalable inference procedure based on continuous latent variables, we replace it with a Gaussian that has an equivalent mean and variance, from which we draw α^T_{i,j}.
For the sake of attaining higher fidelity than what is granted by a standard Gaussian, we base our inference on a variational Gaussian estimation. Though in a context different from continual learning and with different estimators, the idea of replacing a Bernoulli with an equivalent Gaussian has proven effective with dropout. The approximation of the Bernoulli distribution by the corresponding Gaussian distribution is achieved by matching the mean and variance. The mean and variance of the Bernoulli distribution are p_{i,j} and p_{i,j}(1 − p_{i,j}), respectively, so a Gaussian with the same mean and variance is used to fit α: α^T_{i,j} ∼ N( p_{i,j}, p_{i,j}(1 − p_{i,j}) ). • The variable b_T controls the strength of the adaptation and limits its range via

b = s / (1 + e^{−a}) − 1,

so that the maximum adaptation is s. The variable a_T is an unconstrained adaptation value. The addition of 1 in the multiplier (1 + bα) facilitates the usage of a probability distribution while still keeping an adaptation range that allows for the attenuation or amplification of each neuron's contribution. • Before facing the first dataset and learning task t = 1, the prior on the weights, q_0(w) = p(w), is chosen to be a log-scale prior, which can be expressed as p(log |w|) ∝ c, where c is a constant; equivalently, p(|w|) ∝ 1/|w|. At a high level, adapting neuron contributions can be seen as a generalisation of attention mechanisms in the context of continual learning. Applying this adaptation procedure to the input alone would yield an attention mechanism; our approach is more general, since we do not apply it only at the very bottom (i.e. input) layer but throughout the whole network. We next show how our variational inference mechanism enables us to learn the adaptation parameters. We now describe the details of the proposed variational inference mechanism. The adaptation parameters are included among the variational parameters. The (unadapted) model parameters θ consist of the weight vectors w. To automate adaptation, we perform inference on p_{i,j}, which would otherwise have been a hyperparameter of the prior. Multiplying w by (1 + bα), with α distributed according to the Gaussian above and written via a random noise variable ε ∼ N(0, 1), gives the reparameterisation

w_{i,j} = γ_{i,j} ( 1 + b ( p_{i,j} + sqrt( p_{i,j}(1 − p_{i,j}) ) ε ) ),

i.e. q(w_{i,j}|γ_{i,j}) = N( γ_{i,j}(1 + b p_{i,j}), γ²_{i,j} b² p_{i,j}(1 − p_{i,j}) ). The corresponding KL-divergence between the variational posterior of w, q(w|γ), and the prior p(w) is then (subscripts are removed when q is in turn used as a subscript, for readability; the variational parameters are γ_{i,j} and p_{i,j}):

KL( q(w_{i,j}|γ_{i,j}) || p(w_{i,j}) ) = −H[ q(w_{i,j}|γ_{i,j}) ] − E_{q(w|γ)} [ log p(w_{i,j}) ]
= −(1/2) log( 2πe · γ²_{i,j} b² p_{i,j}(1 − p_{i,j}) ) + E_{q(w|γ)} log |w_{i,j}| + const,

where the first step uses the entropy of the Gaussian q(w_{i,j}|γ_{i,j}) defined above, and the second step follows from the log-scale prior, analogous to derivations in earlier work. E_{q(w|γ)} log |w_{i,j}| is computed via an accurate approximation similar to that in prior work, with slightly different values of k_1, k_2 and k_3: a very close approximation obtained by numerically pre-computing E_{q(w|γ)} log |w_{i,j}| with a third-degree polynomial. This is the form of the KL-divergence between the approximate posterior after the first task and the prior; afterwards, it is straightforward to see how this KL-divergence applies to the subsequent tasks in a similar manner, taking into account the new posterior form and the original prior.
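The following short sketch illustrates the weight adaptation just described: sampling the Gaussian surrogate for the Bernoulli, forming b from s and a, and producing task-adapted weights. All tensor names, shapes and values are illustrative assumptions of mine.

```python
# A sketch of the per-neuron weight adaptation:
# w_T = gamma * (1 + b * alpha), alpha ~ N(p, p(1-p)),
# b = s / (1 + exp(-a)) - 1.
import torch

def sample_adapted_weights(gamma, p, a, s):
    """gamma: unadapted weights; p: adaptation probabilities in (0, 1);
    a: unconstrained adaptation; s: learnt maximum adaptation."""
    b = s / (1.0 + torch.exp(-a)) - 1.0            # adaptation strength, bounded by s
    eps = torch.randn_like(p)
    alpha = p + torch.sqrt(p * (1.0 - p)) * eps    # Gaussian surrogate for Bernoulli(p)
    return gamma * (1.0 + b * alpha)

gamma = torch.randn(256)            # global weights of one layer
p = torch.full((256,), 0.3)         # learnt adaptation probabilities
a = torch.zeros(256)                # unconstrained adaptation values
s = torch.full((256,), 2.0)         # learnt maximum adaptation
w_task = sample_adapted_weights(gamma, p, a, s)
```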
Values of p_{i,j} are constrained between 0 and 1 during training via projected gradient descent. Using the expression above for b_{i,j}, and neglecting the constant term since it does not affect the optimisation, minimising the KL-divergence above is equivalent to minimising

Σ_{i,j} [ E_{q(w|γ)} log |w_{i,j}| − (1/2) log( γ²_{i,j} b²_{i,j} p_{i,j}(1 − p_{i,j}) ) ],   with b_{i,j} = s_{i,j} / (1 + e^{−a_{i,j}}) − 1.

By minimising this objective with respect to p_{i,j}, and then using samples from the respective distributions to assign values to α_{i,j}, the adapted contribution of each neuron j at each layer i of the network is learnt per task. Values of a_{i,j} are learnt straightforwardly by minimising the same objective with respect to a_{i,j}. This subsection explains how to learn the maximum adaptation variable s_{i,j}. Values of the maximum s_{i,j} of the logistic expression defining b are learnt from multiple tasks. For each neuron j at layer i, there is a general value s_{i,j} and another value that is specific to each task t, referred to as s_{i,j,t}. This is similar to a meta-learning procedure proposed in earlier work. The following procedure to learn s is performed for each task t, such that: (i) the optimisation performed to learn a task-specific value s_{i,j,t} benefits from a warm initialisation with the general value s_{i,j}, rather than a random initial condition; and then (ii) the new information obtained from the current task t is ultimately reflected back to update the general value s_{i,j}. • First divide the N_t samples into two halves. For the first half, depart from the general value s_{i,j} as an initial condition and use the assigned data examples from task t to learn the task-specific value s_{i,j,t} for the current task. The set of parameters θ contains s as well as other parameters, but we focus on s in the notation f(x, θ), since the following procedure is developed to optimise s. Referring to the loss of the (classification) function f as Err(f) = CE(f(x, θ), y), where CE stands for cross-entropy, the task-specific value is obtained by a gradient step from the general value on the first half:

s_{i,j,t} = s_{i,j} − ω_1 ∇_{s_{i,j}} Err( f(x, θ) ).

• Now use the second half of the data from task t to update the general learnt value s_{i,j}:

s_{i,j} ← s_{i,j} − ω_2 ∇_{s} Err( f(x, θ) ) |_{s = s_{i,j,t}},

where ω_1 and ω_2 are step-size parameters. When testing on samples from task t after having faced future tasks t + 1, t + 2, ..., the value of s_{i,j} used is the learnt s_{i,j,t}. There is only one such value per neuron, so the overhead resulting from storing these values is negligible. The key steps of the algorithm are listed in Algorithm 1.

Algorithm 1: CLAW.
Input: a sequence of T datasets D_t = {(x_t^(n), y_t^(n))}_{n=1}^{N_t}, t = 1, 2, ..., T, where N_t is the size of the dataset associated with the t-th task.
Output: q_t(θ), where θ are the model parameters.
Initialise all p(|w_{i,j}|) with a log-scale prior, as above.
for each task t = 1 ... T do
    Observe the data D_t of the current task t.
    for i = 1 ... #layers do
        for j = 1 ... #neurons at layer i do
            Compute p_{i,j} using stochastic gradient descent on the objective above.
            Compute s_{i,j,t} using the first-half update.
            Update the corresponding general value s_{i,j} using the second-half update.
        end for
    end for
end for

At task t, the algorithmic complexity of a single joint update of the parameters θ, based on the additive terms above, is O(E M L D²), where L is the number of layers in the network, D is the (largest) number of neurons within a single layer, E is the number of samples taken from the random noise variable, and M is the minibatch size. Each α is obtained by taking one sample from the corresponding p, which does not result in an overhead in terms of complexity.
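A minimal sketch of the two-half procedure for s follows: a warm-started, task-specific gradient step, then a reflected update of the general value evaluated at the task-specific point. The step sizes `omega1`, `omega2` and the `loss_fn` interface are illustrative assumptions, not values from the paper.

```python
# A sketch of learning s: (i) task-specific value from the general one
# on the first half; (ii) general value updated from the second half.
import torch

def update_s(s_general, loss_fn, first_half, second_half,
             omega1=0.1, omega2=0.01):
    """loss_fn(s, data) returns a scalar cross-entropy Err(f)."""
    # (i) warm-started task-specific value s_{i,j,t}
    s = s_general.clone().requires_grad_(True)
    g1 = torch.autograd.grad(loss_fn(s, first_half), s)[0]
    s_task = s - omega1 * g1

    # (ii) reflect the new task information back into the general value,
    # with the gradient evaluated at the task-specific point
    s2 = s_task.detach().clone().requires_grad_(True)
    g2 = torch.autograd.grad(loss_fn(s2, second_half), s2)[0]
    s_general_new = s_general - omega2 * g2
    return s_task.detach(), s_general_new
```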
Our experiments mainly aim at evaluating: (i) the overall performance of the introduced CLAW, depicted by the average classification accuracy over all tasks; (ii) the extent to which catastrophic forgetting can be mitigated when deploying CLAW; and (iii) the achieved degree of positive forward transfer. The experiments demonstrate the effectiveness of CLAW in achieving state-of-the-art continual learning results, measured by classification accuracy and by the achieved reduction in catastrophic forgetting. We also perform ablations in Section D in the Appendix, which exhibit the relevance of each of the proposed adaptation parameters. We perform six experiments on five datasets: MNIST, notMNIST, Fashion-MNIST, Omniglot and CIFAR-100. We compare the results obtained by CLAW to six different state-of-the-art continual learning algorithms: the VCL algorithm (in its original form and with a coreset), the elastic weight consolidation (EWC) algorithm, the progress and compress (P&C) algorithm, the reinforced continual learning (RCL) algorithm, functional regularisation for continual learning (FRCL) using Gaussian processes, and the learn-to-grow (LTG) algorithm. Our main metric is the all-important classification accuracy. We consider six continual learning experiments based on the MNIST, notMNIST, Fashion-MNIST, Omniglot and CIFAR-100 datasets. The introduced CLAW is compared to two VCL versions (VCL with no coreset, and VCL with a 200-point coreset assembled by the K-center method), EWC, P&C, RCL, FRCL (its TR version) and LTG. All the reported classification accuracy values reflect the average classification accuracy over all tasks the learner has trained on so far. More specifically, assume the continual learner has just finished training on task t; then the reported classification accuracy at time t is the average accuracy obtained from testing on equally sized test sets, one for each of the tasks 1, 2, ..., t. For all the classification experiments, the reported statistics are averages over ten repetitions. The statistical significance and standard error of the average classification accuracy obtained after completing the last two tasks of each experiment are given in Section E in the Appendix. As can be seen in Figure 1, CLAW achieves state-of-the-art classification accuracy in all six experiments. The minibatch size is 128 for Split MNIST and 256 for all the other experiments. More detailed descriptions of the results of each experiment are given next. Permuted MNIST. Based on MNIST, Permuted MNIST is a standard continual learning benchmark. For each task t, the corresponding dataset is formed by applying a fixed random permutation to the labeled MNIST images; this random permutation is unique per task, i.e. it differs for each task. For the hyperparameter λ of EWC, which controls the overall contribution from previous data, we experimented with two values, λ = 1 and λ = 100. We report the latter, since it always outperformed EWC with λ = 1 in this experiment; EWC with λ = 100 has also previously produced the best EWC classification results. In this experiment, fully connected single-head networks with two hidden layers are used, with 100 hidden units in each layer and ReLU activations. Adam is the optimiser used in all six experiments, with η = 0.001, β_1 = 0.9 and β_2 = 0.999. Further experimental details are given in Section C in the Appendix. Results of the accumulated classification accuracy, averaged over tasks, on a test set are displayed in Figure 1a. After 10 tasks, CLAW achieves significantly higher classification accuracy than all the competitors (significance tests are in the Appendix). Split MNIST. In this MNIST-based experiment, five binary classification tasks are processed in the following sequence: 0/1, 2/3, 4/5, 6/7, and 8/9. The architecture used consists of fully connected multi-head networks with two hidden layers, each consisting of 256 hidden units with ReLU activations.
As can be seen in Figure 1b, CLAW achieves the highest classification accuracy. Split Fashion-MNIST. Fashion-MNIST is a dataset whose size is the same as MNIST's but which is based on different (and more challenging) 10 classes. The five binary classification tasks here are: T-shirt/Trouser, Pullover/Dress, Coat/Sandals, Shirt/Sneaker, and Bag/Ankle boots. The architecture used is the same as in Split notMNIST. In most of the continual learning tasks (including the more significant, later ones) CLAW achieves a clear classification improvement (Figure 1d). Omniglot. This is a sequential learning task over the handwritten characters of the 50 alphabets of the Omniglot dataset (a total of over 1,600 characters with 20 examples each). We follow the way this task has been used in continual learning before: the handwritten characters of each alphabet constitute a separate task. We thus have 50 tasks, which also allows us to evaluate the scalability of the frameworks in comparison. The model used is a CNN. To deal with the convolutions in CLAW, we use the local reparameterisation trick of Kingma et al., where a single global parameter is employed per neuron activation in the variational distribution, rather than employing parameters for every constituent weight element. Further details about the CNN used are given in Section C. The automatically adaptable CLAW achieves better classification accuracy (Figure 1e). CIFAR-100. This dataset consists of 60,000 colour images of size 32 × 32. It contains 100 classes, with 600 images per class. We use a split version of CIFAR-100: following earlier work, we perform a 20-task experiment with a disjoint subset of five classes per task. CLAW achieves significantly higher classification accuracy (Figure 1f), also higher than the previous state of the art on CIFAR-100. Details of the CNN used are in Section C. A conclusion that can be drawn from Figure 1 (a-f) is that CLAW consistently achieves state-of-the-art results (in all six experiments). It can also be seen that CLAW scales well. For instance, the difference between CLAW and the best competitor is more significant on Split notMNIST than on the first two experiments, which are based on the smaller and less challenging MNIST; CLAW also achieves good results on Omniglot and CIFAR-100. To assess catastrophic forgetting, we show how the accuracy on the initial task varies over the course of training on the remaining tasks. Since Omniglot and CIFAR-100 contain a larger number of tasks (50 tasks, i.e. 49 remaining tasks after the initial one), this setting is most relevant for them; we nonetheless display the results for Split MNIST, Split notMNIST, Split Fashion-MNIST, Omniglot and CIFAR-100. As can be seen in Figure 2, CLAW (at times jointly) achieves state-of-the-art performance retention. Among the competitors, P&C and LTG also achieve high performance retention. An empirical conclusion that can be drawn from this and the previous experiment is that CLAW achieves better overall continual learning results, partially thanks to the way it addresses catastrophic forgetting. The idea of adapting the architecture by adapting the contributions of the neurons of each layer also works well with datasets like Omniglot and CIFAR-100, suggesting directions for imminent future work in which CLAW is extended to other application areas based on CNNs. The purpose of this experiment is to assess the impact of learning previous tasks on the current task.
In other words, we want to evaluate whether an algorithm avoids negative transfer, by evaluating the relative performance achieved on a given task after learning a varying number of previous tasks. From Figure 3, we can see that CLAW achieves state-of-the-art results in 4 out of the 5 experiments (and is at par in the fifth) in terms of avoiding negative transfer. We introduced a continual learning framework which learns how to adapt its architecture from the tasks and data at hand, based on variational inference. Rather than rigidly dividing the architecture into shared and task-specific parts, our approach adapts the contributions of each neuron. We achieve this without having to expand the architecture with new layers or new neurons.

Figure 3: The impact of learning previous tasks on a specific task (the last task) is inspected and used as a proxy for evaluating forward transfer. This is performed by evaluating the relative performance achieved on a unique task after learning a varying number of previous tasks. This means that the value at x-axis = 1 refers to the learning accuracy of the last task after having learnt solely one task (only itself), the value at 2 refers to the learning accuracy of the last task after having learnt two tasks (one additional previous task), etc. Overall, CLAW achieves state-of-the-art results in 4 out of the 5 experiments (at par in the fifth) in terms of avoiding negative transfer. Best viewed in colour.

Results of six different experiments on five datasets demonstrate the strong empirical performance of the introduced framework, in terms of the average overall continual learning accuracy and forward transfer, and also in terms of effectively alleviating catastrophic forgetting. We begin by briefly summarising the contents of the Appendix below:
• Related works are described in Section A, followed by a brief discussion on the potential applicability of CLAW to another continual learning (CL) framework in Section A.1.
• In Section E, we provide the statistical significance and standard error of the average classification accuracy obtained after completing the last two tasks from each experiment.
• Further experimental details are given in Section C.
• In Section D and Figures 4-8, we display the results of the performed ablations, which manifest the relevance of each adaptation parameter.
A complementary approach to CLAW, which could be combined with it, is the regularisation-based approach to balancing adaptability with catastrophic forgetting: a level of stability is kept by protecting parameters that greatly influence the prediction against radical changes, while allowing the rest of the parameters to change without restriction. In one such method, the regulariser is based on synapses: an importance measure is locally computed at each synapse during training, based on its contribution to the change in the global loss. During a task change, the less important synapses are given the freedom to change, whereas catastrophic forgetting is avoided by preventing the important synapses from changing. The elastic weight consolidation (EWC) algorithm is a seminal example of this approach, where a quadratic penalty is imposed on the difference between the parameter values of the old and new tasks.
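As an illustration of the quadratic penalty just described, here is a minimal PyTorch sketch of an EWC-style regulariser. The dictionaries `old_params` and `fisher_diag` are assumed to hold parameter snapshots and diagonal Fisher estimates captured after training on the previous task; this is a generic sketch rather than the exact implementation used in the cited work.

```python
import torch

def ewc_penalty(model, old_params, fisher_diag, lam=100.0):
    """Quadratic penalty (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2 on the
    difference between current parameters and those learned on the old task."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Total loss on the new task would then be, e.g.:
# loss = task_loss(model(x), y) + ewc_penalty(model, old_params, fisher_diag)
```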
One limitation of EWC, which is rather alleviated by using minibatch or stochastic estimates, appears when the output space is not low-dimensional, since the diagonal of the Fisher information matrix over the parameters of the old task must be computed, which requires a summation over all possible output labels. In addition, the regularisation term involves a sum over all previous tasks, with one term per task and a hand-tuned hyperparameter that alters the weight given to it; the accumulation of these terms leads to a lot of hand-tuning. Another regulariser penalises confident fitting to uncertain knowledge via a maximum entropy term. A further seminal regularisation-based algorithm, which can be applied to any model, is variational continual learning (VCL), which formulates CL as a sequential approximate (variational) inference problem. However, VCL has only been applied to simple architectures, not involving any automatic model building or adaptation. A related framework incrementally matches the moments of the posterior of a Bayesian neural network that has been trained on the first task, then the second, and so on. Other algorithms pursue regularisation approaches based on sparsity. For example, one such method encourages sparsity on the neuron activations to alleviate catastrophic forgetting. Another uses the ℓ2 distance between the top hidden activations of the old and new tasks for regularisation; this approach has achieved good results, but is computationally expensive due to the necessity of computing at least one forward pass for every new data point through the network representing the old task. Several other regularisation-based continual learning algorithms have been proposed as well. Another approach is the architecture-based one, whose principal aim is to manage both the stability and adaptation issues by dividing the architecture into reusable parts that are less prone to changes, and other parts especially devoted to individual tasks. To learn a new task in the work by Rusu et al. (2016a), the whole network from the previous task is first copied and then augmented with a new part of the architecture. Although this is effective in eradicating catastrophic forgetting, there is a clear scalability issue, since the architecture growth can be prohibitively high, especially with an increasing number of tasks. One framework bases its continual learning on neural architecture search, whereas another optimises the representation such that online updates minimise the error on all samples while limiting forgetting. A further framework interestingly aims at solving this neural architecture structure learning problem, while balancing the tradeoff between adaptation and stability, via designed reinforcement learning (RL) strategies: when facing a new task, the optimal number of neurons and filters to add to each layer is cast as a combinatorial optimisation problem solved by an RL strategy whose reward signal is a function of validation accuracy and network complexity. In another RL-based framework, catastrophic forgetting is mitigated at multiple time scales via RL agents with a synaptic model inspired by neuroscience. Bottom layers (those near the input) are generally shared among the different tasks, while layers near the output are task-specific.
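The a-priori division into shared and task-specific parts that these architecture-based methods adopt, and that CLAW instead replaces with per-neuron adaptation, can be sketched as a standard multi-head network; the sizes below are illustrative only.

```python
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """A priori division: a shared trunk plus one output head per task."""
    def __init__(self, in_dim=784, hidden=256, classes_per_task=2, num_tasks=5):
        super().__init__()
        self.trunk = nn.Sequential(                # shared across all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(                # task-specific parts
            [nn.Linear(hidden, classes_per_task) for _ in range(num_tasks)])

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))
```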
Since the model structure is usually divided a priori in this way, and no automatic architecture learning or adaptation takes place, alterations to the shared layers can still cause performance loss on earlier tasks due to forgetting. A clipped version of maxout networks has also been developed, in which parameters are partially shared among examples. Another method is based on a dynamic network expansion accomplished by a generative adversarial network. The memory-based approach, which is the third influential approach to addressing the adaptation-catastrophic forgetting tradeoff, relies on episodic memory to store data (or pseudodata) from previous tasks. A major limitation of the memory-based approach is that data from previous tasks may not be available in all real-world problems. Another limitation is the overhead resulting from the memory requirements, e.g. storage, replay, etc. In addition, the optimisation required to select the best observations to replay for future tasks is a source of further overhead. In addition to the explicit replay form, some works have been based on generative replay, e.g. training a deep generative model based on generative adversarial networks to mimic past data. This mitigates the aforementioned problem, albeit at the added cost of training the generative model and sharing its parameters. Alleviating catastrophic forgetting via replay mechanisms has also been adopted in reinforcement learning. A similar approach stores the gradients of the previous tasks (rather than data examples), so that a trust region consisting of the gradients of all previous tasks can be formed to reduce forgetting. Several other algorithms are likewise based on replay mechanisms. Tradeoffs equivalent to the one between adaptation and stability can be traced back to early work on the stability-plasticity dilemma, in which a balance was needed between stability and plasticity, the latter referring to the ability to rapidly adapt to new tasks. More recent works shed light on the tradeoff between adaptation and stability by exploring measures of intransigence and forgetting: the former refers to the inability to adapt to new tasks and data, whereas an increase in the latter clearly signifies an instability problem. Other recent works tackle the same tradeoff by optimising the transfer-interference tradeoff (interference here refers to catastrophic forgetting), maximising transfer and minimising interference via an algorithm based on experience replay and meta-learning. Further recent algorithms include the ORACLE algorithm, which addresses the sensitivity of a continual learner to the order of the tasks it encounters by establishing an order-robust learner that represents the parameters of each task as a sum of task-shared and task-specific parameters. Another algorithm achieves functional regularisation by performing approximate inference over the function (instead of parameter) space, using a Gaussian process obtained by assuming the weights of the last neural network layer to be Gaussian distributed. Our model is also related to the multi-task learning approach. As mentioned in the main document, ideas of the proposed CLAW can be applied to continual learning frameworks other than VCL. The latter is the most relevant for the inference part of CLAW, since both are based on variational inference. As for the modeling ideas, e.g.
the binary adaptation parameter depicting whether or not to adapt, and the maximum allowed degree of adaptation, these can be integrated within other continual learning frameworks. For example, one existing algorithm utilises reinforcement learning to adaptively expand the network, casting the optimal number of nodes and filters to be added as a combinatorial optimisation problem. In CLAW, we do not expand the network. As such, an extension of that work could be inspired by CLAW, where not only is the number of nodes and filters to be added decided for each task, but a softer and more general version also performs an adaptation based on the same network size, so that the required network expansion can be further moderated. In this section, we provide information about the statistical significance and standard error of CLAW and the competing continual learning frameworks. In Table 1, we list the average accuracy values (Figure 1 in the main document) obtained after completing the last two tasks of each of the six experiments. A bold entry in Table 1 denotes that the classification accuracy of an algorithm is significantly higher than that of its competitors. Significant differences are identified using a paired t-test with p = 0.05. Each average accuracy value is followed by the corresponding standard error. The average classification accuracy resulting from CLAW is significantly higher than that of its competitors in the six experiments.

Table 1: Average classification accuracy (± standard error) after completing the last two tasks of each experiment. Columns follow the order in which the methods are introduced above: CLAW, VCL, VCL (coreset), EWC, P&C, RCL, FRCL, LTG. (The leading rows of the table, including the Permuted MNIST and Split MNIST entries, were garbled in extraction; one unlabeled row is retained.)

(row label lost)              98.7 ± 0.3 %  95.8 ± 0.4 %  96.9 ± 0.5 %  92.9 ± 0.4 %  97.8 ± 0.4 %  97.7 ± 0.2 %  96.1 ± 0.6 %  97.8 ± 0.3 %
Split notMNIST (task 5)       98.4 ± 0.2 %  92.1 ± 0.3 %  96.0 ± 0.3 %  92.3 ± 0.4 %  96.9 ± 0.5 %  97.3 ± 0.5 %  95.2 ± 0.7 %  97.4 ± 0.3 %
Split Fashion-MNIST (task 4)  93.2 ± 0.2 %  90.0 ± 0.3 %  90.7 ± 0.2 %  89.4 ± 0.4 %  91.4 ± 0.3 %  91.1 ± 0.3 %  90.4 ± 0.2 %  92.5 ± 0.4 %
Split Fashion-MNIST (task 5)  92.5 ± 0.2 %  88.0 ± 0.2 %  88.5 ± 0.4 %  87.6 ± 0.3 %  90.8 ± 0.2 %  89.7 ± 0.4 %  87.7 ± 0.4 %  91.1 ± 0.3 %
Omniglot (task 49)            84.5 ± 0.2 %  81.1 ± 0.3 %  81.8 ± 0.3 %  78.2 ± 0.3 %  82.8 ± 0.2 %  80.1 ± 0.4 %  79.9 ± 0.3 %  83.6 ± 0.3 %
Omniglot (task 50)            84.6 ± 0.3 %  80.7 ± 0.3 %  81.1 ± 0.4 %  77.3 ± 0.3 %  82.7 ± 0.3 %  80.2 ± 0.4 %  79.8 ± 0.5 %  83.5 ± 0.3 %
CIFAR-100 (task 19)           95.6 ± 0.3 %  78.7 ± 0.4 %  80.8 ± 0.3 %  63.1 ± 0.5 %  68.3 ± 0.6 %  63.7 ± 0.6 %  77.4 ± 0.7 %  86.2 ± 0.4 %
CIFAR-100 (task 20)           95.6 ± 0.3 %  77.2 ± 0.4 %  79.9 ± 0.4 %  62.4 ± 0.4 %  65.5 ± 0.6 %  60.4 ± 0.6 %  76.8 ± 0.6 %  85.6 ± 0.5 %

Here are some additional details about the datasets in use. The MNIST dataset is used in both the Permuted MNIST and Split MNIST experiments. MNIST (Mixed National Institute of Standards and Technology) is a handwritten digit dataset. Each MNIST image consists of 28 × 28 pixels, which is also the pixel size of the notMNIST and Fashion-MNIST images. The MNIST dataset contains a training set of 60,000 instances and a test set of 10,000 instances. As mentioned in the main document, each experiment is repeated ten times. The data are randomly split into three partitions: training, validation and test. A portion of 60% of the data is reserved for training, 20% for validation and 20% for testing. The statistics reported are the averages of these ten repetitions. The number of epochs per task required to reach a saturation level for CLAW (and the bulk of the methods in comparison) was 10 for all experiments except Omniglot and CIFAR-100 (15 epochs). The values used for ω1 and ω2 are 0.05 and 0.02, respectively.
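A minimal sketch of the significance test underlying Table 1 is given below, assuming per-repetition accuracies from the ten runs of two methods on the same splits; the function name and interface are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_methods(acc_a, acc_b, alpha=0.05):
    """acc_a, acc_b: per-repetition accuracies (e.g. 10 runs) of two methods
    on the same task. Returns mean and standard error of the first method and
    whether the difference is significant under a paired t-test at level alpha."""
    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)
    sem_a = acc_a.std(ddof=1) / np.sqrt(len(acc_a))
    t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
    return acc_a.mean(), sem_a, p_value < alpha
```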
For Omniglot, we used a network similar to those used in prior work, consisting of 4 blocks of 3 × 3 convolutions with 64 filters, each followed by a ReLU and a 2 × 2 max-pooling. The same CNN is used for CIFAR-100. CLAW achieves clearly higher classification accuracy on both Omniglot and CIFAR-100 (Figures 1e and 1f). The plots displayed in this section empirically demonstrate how important the main adaptation parameters are in achieving the classification performance levels reached by CLAW. In each of Figures 4-9, the classification performance of CLAW is compared to the following three cases: 1) when the parameter controlling the maximum degree of adaptation is not learnt in a multi-task fashion, i.e. when the general value s_{i,j} is used instead of s_{i,j,t}; 2) when adaptation always happens, i.e. the binary variable denoting the adaptation decision is always activated; and 3) when adaptation never takes place. The differences in classification accuracy between CLAW and each of the other three plots in Figures 4-9 empirically demonstrate the relevance of each adaptation parameter.
A continual learning framework which learns to automatically adapt its architecture based on a proposed variational inference algorithm.
1,416
scitldr
High-dimensional data often lie in or close to low-dimensional subspaces. Sparse subspace clustering methods with sparsity induced by the ℓ0-norm, such as ℓ0-Sparse Subspace Clustering (ℓ0-SSC), have been demonstrated to be more effective than their ℓ1 counterparts, such as Sparse Subspace Clustering (SSC). However, these ℓ0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces; real data often suffer from noise, and they may lie only close to subspaces. We propose noisy ℓ0-SSC to handle noisy data so as to improve robustness. We show that the optimal solution to the optimization problem of noisy ℓ0-SSC achieves the subspace detection property (SDP), a key element with which data from different subspaces are separated, under both deterministic and randomized models. Our results provide a theoretical guarantee on the correctness of noisy ℓ0-SSC in terms of SDP on noisy data. We further propose Noisy-DR-ℓ0-SSC, which provably recovers the subspaces from dimensionality-reduced data: it first projects the data onto a lower-dimensional space by a linear transformation, then performs noisy ℓ0-SSC on the dimensionality-reduced data so as to improve efficiency. Experimental results demonstrate the effectiveness of both noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC. Clustering is an important unsupervised learning procedure for analyzing a broad class of scientific data in biology, medicine, psychology and chemistry. High-dimensional data, such as facial images and gene expression data, often lie in low-dimensional subspaces, and clustering in accordance with the underlying subspace structure is particularly important. For example, the well-known Principal Component Analysis (PCA) works perfectly if the data are distributed around a single subspace. The subspace learning literature develops more general methods that recover multiple subspaces in the original data, and subspace clustering algorithms aim to partition the data such that data belonging to the same subspace are identified as one cluster.
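As a concrete illustration of the union-of-subspaces setting (a toy generator, not the paper's actual experimental data), the following NumPy sketch samples unit-norm points from K random low-dimensional subspaces and adds bounded noise with ‖n_i‖_2 ≤ δ:

```python
import numpy as np

def union_of_subspaces(d=50, K=3, dk=4, n_per=100, delta=0.05, seed=0):
    """Sample n_per unit-norm points from each of K random dk-dimensional
    subspaces of R^d, then add bounded noise with ||n_i||_2 <= delta."""
    rng = np.random.RandomState(seed)
    X, labels = [], []
    for k in range(K):
        basis, _ = np.linalg.qr(rng.randn(d, dk))      # orthonormal basis of S_k
        Y = basis @ rng.randn(dk, n_per)               # points in the subspace
        Y /= np.linalg.norm(Y, axis=0, keepdims=True)  # unit-norm clean points
        noise = rng.randn(d, n_per)
        noise *= delta * rng.rand(n_per) / np.linalg.norm(noise, axis=0)
        X.append(Y + noise)
        labels += [k] * n_per
    return np.hstack(X), np.array(labels)
```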
Among various subspace clustering algorithms, the ones that employ a sparsity prior, such as Sparse Subspace Clustering (SSC) and ℓ0-Sparse Subspace Clustering (ℓ0-SSC), have been proven to be effective in separating the data in accordance with the subspaces that the data lie in, under certain assumptions. Sparse subspace clustering methods construct a sparse similarity matrix via a sparse representation of the data. The subspace detection property (SDP), defined in Section 4.1, ensures that the similarity between data from different subspaces vanishes in the sparse similarity matrix, and applying spectral clustering to such a sparse similarity matrix leads to compelling clustering performance. It has been proven that when the subspaces are independent or disjoint, SDP can be satisfied by solving the canonical sparse linear representation problem using the data as the dictionary, under certain conditions on the rank or singular values of the data matrix and the principal angle between the subspaces. SSC has been successfully applied to a novel deep neural network architecture, leading to the first deep sparse subspace clustering method. Under the independence assumption on the subspaces, low-rank representation (Liu et al., 2010) has also been proposed to recover the subspace structures. Relaxing the assumptions on the subspaces to allow overlapping subspaces, methods such as Low-Rank Sparse Subspace Clustering achieve the subspace detection property with high probability. Geometric analysis further provides theoretical results on subspace recovery by SSC. In the following text, we use the terms SSC and ℓ1-SSC interchangeably to indicate the Sparse Subspace Clustering method. Real data often suffer from noise. Noisy SSC handles noisy data that lie close to disjoint or overlapping subspaces. While ℓ0-SSC has guaranteed clustering correctness via the subspace detection property under much milder assumptions than previous subspace clustering methods, including SSC, it assumes that the observed data lie exactly in the subspaces and does not handle noisy data. In this paper, we present noisy ℓ0-SSC, which enhances ℓ0-SSC with a theoretical guarantee on the correctness of clustering on noisy data. It should be emphasized that, while ℓ0-SSC on clean data empirically adopts a form of the optimization problem that is robust to noise, it lacks a theoretical analysis of its correctness on noisy data. In this paper, the correctness of noisy ℓ0-SSC on noisy data, in terms of the subspace detection property, is established. Our analysis covers both the deterministic model and randomized models, the models also employed in prior geometric analyses of SSC. Our randomized analysis demonstrates a potential advantage of noisy ℓ0-SSC over its ℓ1 counterpart, as a more general assumption on the data distribution can be adopted. Moreover, we present Noisy Dimensionality-Reduced ℓ0-Sparse Subspace Clustering (Noisy-DR-ℓ0-SSC), an efficient version of noisy ℓ0-SSC which also enjoys robustness to noise. Noisy-DR-ℓ0-SSC first projects the data onto a lower-dimensional space by random projection, then performs noisy ℓ0-SSC on the dimensionality-reduced data. Noisy-DR-ℓ0-SSC provably recovers the underlying subspace structure in the original data from the dimensionality-reduced data under the deterministic model. Experimental results demonstrate the effectiveness of both noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC. We use bold letters for matrices and vectors, and regular lower-case letters for scalars throughout this paper.
A bold letter with a superscript indicates the corresponding column of a matrix, e.g. A^i is the i-th column of matrix A, and a bold letter with a subscript indicates the corresponding element of a matrix or vector. ‖·‖_F and ‖·‖_p denote the Frobenius norm and the vector ℓp-norm or the matrix p-norm, and diag(·) indicates the diagonal elements of a matrix. H_T ⊆ R^d indicates the subspace spanned by the columns of T, and A_I denotes the submatrix of A whose columns correspond to the nonzero elements of I (or have indices in I, where no confusion arises). σ_t(·) denotes the t-th largest singular value of a matrix, and σ_min(·) indicates the smallest singular value of a matrix. supp(·) is the support of a vector, and P_S is the operator of projection onto a subspace S. We hereby introduce the notation for subspace clustering on noisy data considered in this paper. The uncorrupted data matrix is denoted by Y = [y_1, ..., y_n] ∈ R^{d×n}, where d is the dimensionality and n is the size of the data. The uncorrupted data Y lie in a union of K distinct subspaces {S_k}_{k=1}^K, and Z = [n_1, ..., n_n] is the additive noise. x_i = y_i + n_i is the noisy data point corrupted by the noise n_i. The data in S_k are denoted by Y^(k), comprising n_k points with Σ_{k=1}^K n_k = n, and the corresponding columns of X are denoted by X^(k). The data X are normalized so that each column has unit ℓ2-norm in our deterministic analysis. We consider a deterministic noise model where the noise Z is fixed and max_i ‖n_i‖_2 ≤ δ. Note that our analysis can be extended to a random noise model, which is common and also considered by noisy SSC; the random noise model assumes that the columns of Z are sampled i.i.d. and that max_i ‖n_i‖_2 ≤ δ with high probability. Such a random noise model does not require spherically symmetric noise, unlike some prior models. ℓ0-SSC solves the following ℓ0 sparse representation problem,

min_Z ‖Z‖_0  s.t.  X = XZ, diag(Z) = 0,

and proves that the subspace detection property defined in Definition 1 is satisfied by the globally optimal solution to this problem. To handle noisy data, we resort to the following ℓ0-regularized sparse approximation problem, which is the optimization problem of noisy ℓ0-SSC:

min_Z ‖X − XZ‖_F² + λ‖Z‖_0  s.t.  diag(Z) = 0.

The definition of the subspace detection property for noisy ℓ0-SSC and noiseless ℓ0-SSC (i.e. ℓ0-SSC on noiseless data) is given in Definition 1 below. Definition 1. (Subspace detection property for noisy and noiseless ℓ0-SSC) Let Z* be the optimal solution to the noisy problem above. The subspaces {S_k}_{k=1}^K and the data X satisfy the subspace detection property for noisy ℓ0-SSC if, for all 1 ≤ i ≤ n, the column Z*^i is a nonzero vector and its nonzero elements correspond to columns of X from the same subspace as y_i. Similarly, in the noiseless setting where X = Y, let Z* be the optimal solution to the noiseless problem. The subspaces {S_k}_{k=1}^K and the data X satisfy the subspace detection property for noiseless ℓ0-SSC if, for all 1 ≤ i ≤ n, Z*^i is a nonzero vector and its nonzero elements correspond to columns of X from the same subspace as y_i. We say that the subspace detection property holds for x_i if the nonzero elements of Z*^i correspond to data that lie in the same subspace as y_i, for either noisy or noiseless ℓ0-SSC. Similar to prior geometric analyses of SSC, we introduce the deterministic, semi-random and fully-random models for the analysis of noisy ℓ0-SSC.
• Deterministic Model: the subspaces and the data in each subspace are fixed.
• Semi-Random Model: the subspaces are fixed but the data are independent and identically distributed in each of the subspaces.
• Fully-Random Model: both the subspaces and the data of each subspace are independent and identically distributed.
The data in the above definitions refer to clean data without noise. We refer to the semi-random and fully-random models as randomized models in this paper. The theoretical results on the subspace detection property for noisy ℓ0-SSC are presented in this section under the deterministic model and the randomized models.

Figure 1: All the data Y are normalized to have unit norm for illustration purposes, so they lie on the surface of the sphere. S1 and S2 are two subspaces in the three-dimensional ambient space. The subspace spanned by y_i ∈ S1 and y_j ∈ S2 is an external subspace, and the intersection of this external subspace with S1 is the dashed line y_iOA.

We introduce the definitions of general position and external subspace before our analysis of noisy ℓ0-SSC. The assumption of general position is rather mild: if the data points in X^(k) are independently distributed according to any continuous distribution, then they are almost surely in general position. Let the distance between a point x ∈ R^d and a subspace S ⊆ R^d be defined as d(x, S) = inf_{y∈S} ‖x − y‖_2. The definition of external subspaces is presented as follows; Figure 1 illustrates an example of an external subspace. A subspace spanned by a set of linearly independent points {y_{i_j}}_{j=1}^L ⊆ Y is defined to be an external subspace of y if y does not lie in it. Let H_{y,d̃} denote the set of all external subspaces of y of dimension no greater than d̃. The point y is said to be away from its external subspaces if its distance to every subspace in H_{y,d̃} is positive, and all the data points in Y^(k) are said to be away from the external subspaces if each of them is away from its associated external subspaces. Remark 1. (Subspace detection property holds for noiseless ℓ0-SSC under the deterministic model) It can be verified that the following statement is true: under the deterministic model, suppose the data are noiseless; if all the data points in Y^(k) are away from the external subspaces for every 1 ≤ k ≤ K, then the subspace detection property for ℓ0-SSC holds with the optimal solution Z* to the noiseless problem. To present our theoretical results on the correctness of noisy ℓ0-SSC, we also need the definitions of the minimum restricted eigenvalue and the subspace separation margin, given as follows. In the following analysis, we employ β to denote the sparse code of a datum x_i, so that a simpler notation than Z^i is dedicated to our analysis. Definition 4. The minimum restricted eigenvalue of the uncorrupted data is defined, for r ≥ 1, as σ_{Y,r} = min_{I: |I| = r} σ_min(Y_I), and the normalized minimum restricted eigenvalue of the uncorrupted data is defined by σ̃_{Y,r} = σ_{Y,r}/√r. We have the following perturbation bound for the distance between a data point and the subspaces spanned by the noisy and noiseless data, which is useful for establishing the conditions under which the subspace detection property holds for noisy ℓ0-SSC. Lemma 1. Let β ∈ R^n and suppose Y_β has full column rank. If δ < σ̃_{Y,r}, where r = ‖β‖_0, then X_β is a full column rank matrix, and the distance from any point to the subspace H_{X_β} differs from its distance to H_{Y_β} by an amount bounded in terms of δ and σ_{Y,r}. The optimization problem of noisy ℓ0-SSC is separable: for each 1 ≤ i ≤ n, the subproblem with respect to the sparse code of the i-th data point is

min_β ‖x_i − Xβ‖_2² + λ‖β‖_0  s.t.  β_i = 0.

Lemma 2 shows that the optimal solution to the noisy ℓ0-SSC subproblem is also the optimal solution to an ℓ0-minimization problem with tolerance to noise. Lemma 2. Let the nonzero vector β* be the optimal solution to the noisy ℓ0-SSC subproblem for point x_i; then β* is the optimal solution to the corresponding sparse approximation problem with noise tolerance c* = ‖x_i − Xβ*‖_2 and the uncorrupted data as the dictionary. Define B(x_i, c_0) = {x : ‖x − x_i‖_2 ≤ c_0} to be the ball centered at x_i with radius c_0. If B(x_i, c_0) is away from the corresponding confusion area, i.e.
all the external subspaces in H_{y_i,d_k}, then the subspace detection property holds with the solution to a proper sparse approximation problem in which x_i is approximated by the uncorrupted data, as shown in the following lemma. Lemma 3. Suppose Y is in general position and B(x_i, c_0) ∩ H = ∅ for every H ∈ H_{y_i,d_k}. Then the subspace detection property holds for x_i with the optimal solution β* to the corresponding sparse approximation problem, i.e. the nonzero elements of β* correspond to the columns of X from the same subspace as y_i. We now use the above results to present the main result on the correctness of noisy ℓ0-SSC. Theorem 1. (Subspace detection property holds for noisy ℓ0-SSC) Let the nonzero vector β* be the optimal solution to the noisy ℓ0-SSC subproblem for point x_i with ‖β*‖_0 = r* > 1, and suppose the ball B(x_i, c*) with c* = ‖x_i − Xβ*‖_2 does not intersect any external subspace in H_{y_i,d_k}. Then the subspace detection property holds for x_i with β*. Here τ_0, τ_1, σ̃*_Y and σ*_X, defined in Lemma 2, are the quantities entering the precise form of this condition. Remark 2. When δ = 0 and there is no noise in the data X, the conditions for the correctness of noisy ℓ0-SSC in Theorem 1 almost reduce to those for noiseless ℓ0-SSC. To see this, the conditions reduce to B(y_i, c*) ∩ H = ∅, which are exactly the conditions required by noiseless ℓ0-SSC, namely that the data are away from the external subspaces, by choosing λ → 0 so that c* = 0. While Theorem 1 establishes geometric conditions under which the subspace detection property holds for noisy ℓ0-SSC, these conditions are often coupled with the optimal solution β* to the noisy ℓ0-SSC problem. In the following theorem, the correctness of noisy ℓ0-SSC is guaranteed in terms of λ, the weight of the ℓ0 regularization term, and geometric conditions independent of the optimal solution. Let M_i > 0 be the minimum distance between y_i ∈ S_k and its external subspaces when y_i is away from its external subspaces, i.e. M_i = min_{H ∈ H_{y_i,d_k}} d(y_i, H). Two further quantities related to the spectra of the clean and noisy data, μ_r and σ_{X,r} (with r > 1), are used in the analysis of Theorem 2; σ_{X,r} is defined analogously to σ_{Y,r}, with the noisy data X in place of Y. Theorem 2. (Subspace detection property holds for noisy ℓ0-SSC under the deterministic model, with conditions in terms of λ) Let the nonzero vector β* be the optimal solution to the noisy ℓ0-SSC subproblem for point x_i with ‖β*‖_0 = r*, let n_k ≥ d_k + 1 for every 1 ≤ k ≤ K, and suppose there exists 1 < r_0 ≤ d such that 1 < r* ≤ r_0. Suppose Y is in general position, y_i ∈ S_k for some 1 ≤ k ≤ K, δ < min_{1≤r<r_0} σ̃_{Y,r}, and two geometric conditions involving M_i, μ_{r_0} and σ_{X,r_0} hold. Then, if λ > λ_0 = max{λ_1, λ_2}, where λ_1, λ_2 ∈ (0, 1) are thresholds determined by these quantities, the subspace detection property holds for x_i with β*. Remark 3. Whenever the two geometric conditions hold, λ_1 and λ_2 can always be chosen accordingly. Remark 4. It can be observed from the conditions that noisy ℓ0-SSC encourages a sparse solution via a relatively large λ so as to guarantee the subspace detection property. This theoretical finding is consistent with the empirical study shown in the experimental results. In this subsection, the correctness of noisy ℓ0-SSC is analyzed when the clean data in each subspace are distributed at random. We assume that the data in subspace S_k are i.i.d.
isotropic samples on the sphere of radius √d_k centered at the origin, according to some continuous distribution, for each 1 ≤ k ≤ K. In addition, for each 1 ≤ k ≤ K, we assume that the following condition holds: (a) there exists a constant M ≥ 1 such that for any t > 0, any y ∈ Y^(k), and any vector v with unit ℓ2-norm, the projection of y onto v is bounded on both sides with probability controlled by M and t. Intuitively, condition (a) requires that the projection of any data point onto an arbitrary unit vector is bounded from both sides with relatively large probability. This condition is also required in prior work to derive a lower bound on the least singular value of a random matrix with independent isotropic columns. In order to meet the conditions of Theorem 2, and thereby guarantee the subspace detection property under the randomized models, the following lemma is presented; it provides a geometric concentration inequality for the distance between a point y ∈ Y^(k) and any of its external subspaces, and yields a lower bound on M_i, namely the minimum distance between y_i ∈ S_k and its external subspaces. Lemma 4. Under the randomized models, given 1 ≤ k ≤ K and y ∈ Y^(k), suppose H ∈ H_{y_i,d_k} is any external subspace of y. Then for any t > 0, the distance d(y, H) is bounded below with high probability. We then have the following result regarding the subspace detection property of noisy ℓ0-SSC under the randomized models. Theorem 3. (Subspace detection property holds for noisy ℓ0-SSC under the randomized models, with conditions in terms of λ) Under the randomized models, let the nonzero vector β* be the optimal solution to the noisy ℓ0-SSC subproblem for point x_i with ‖β*‖_0 = r*, let n_k ≥ d_k + 1 for every 1 ≤ k ≤ K, and suppose there exists 1 < r_0 ≤ d such that 1 < r* ≤ r_0. Suppose the data in each subspace are i.i.d. isotropic samples according to some continuous distribution satisfying condition (a). Then, for suitable t > 0 and λ > λ_0 = max{λ_1, λ_2}, with λ_1 and λ_2 thresholds analogous to those of Theorem 2, the subspace detection property holds for x_i with β* with high probability (the exact probability bound depends on t). Remark 5. Note that there is no assumption on the distribution of the subspaces in Theorem 3, so it is not required that the subspaces have a uniform distribution, as is required in the geometric analysis of ℓ1-SSC and its noisy version.

4 NOISY ℓ0-SSC ON DIMENSIONALITY-REDUCED DATA: NOISY-DR-ℓ0-SSC

Despite the theoretical guarantees and the compelling empirical performance of noisy ℓ0-SSC, to be shown in the experimental results, the computational cost of noisy ℓ0-SSC is high when the dimensionality of the data is high. In this section, we propose Noisy Dimensionality-Reduced ℓ0-SSC (Noisy-DR-ℓ0-SSC), which performs noisy ℓ0-SSC on dimensionality-reduced data. The theoretical guarantee on the correctness of Noisy-DR-ℓ0-SSC under the deterministic model, as well as its empirical performance, are presented. Noisy-DR-ℓ0-SSC performs subspace clustering in the following two steps: 1) obtain the dimensionality-reduced data X̃ = PX with a linear transformation P ∈ R^{p×d} (p < d); 2) perform noisy ℓ0-SSC on the compressed data X̃, i.e. solve the noisy ℓ0-SSC problem with X̃ in place of X. Since p < d, Noisy-DR-ℓ0-SSC operates on the compressed data X̃ rather than on the original data, so the efficiency is improved. High-dimensional data often exhibit low-dimensional structure, which often leads to low-rankness of the data matrix. Intuitively, if the data matrix is low rank, then it can be safe to perform noisy ℓ0-SSC on its dimensionality-reduced version given by the linear projection P, and P is expected to preserve the information about the subspaces contained in the original data as much as possible, while effectively removing uninformative dimensions.
To this end, we propose to choose P as the random projection induced by a randomized low-rank approximation of the data. The key idea is to obtain an approximate low-rank decomposition of the data; using the random projection induced by such a low-rank approximation as the linear transformation P, the clustering correctness holds for Noisy-DR-ℓ0-SSC with high probability. Randomized algorithms are efficient and have been extensively studied in the computer science and numerical linear algebra literature; they have been employed to accelerate various numerical matrix computation and matrix optimization problems, including random projection for matrix decomposition. Formally, a random matrix T ∈ R^{n×p} is generated such that each element T_ij is sampled independently from the standard Gaussian distribution N(0, 1). A QR decomposition is then performed on XT to obtain a basis of its column space, namely XT = QR, where Q ∈ R^{d×p} is an orthogonal matrix of rank p and R ∈ R^{p×p} is an upper triangular matrix. The columns of Q form an orthonormal basis for the sample matrix XT. An approximation of X is then obtained by projecting X onto the column space of XT: QQ^⊤X = QW = X̂, where W = Q^⊤X ∈ R^{p×n}. In this manner, a randomized low-rank decomposition X ≈ QW of X is achieved. We present probabilistic results on the correctness of Noisy-DR-ℓ0-SSC using the random projection induced by this randomized low-rank decomposition of X, namely P = Q^⊤, in Theorem 4. In the sequel, x̃ = Px for any x. To guarantee the subspace detection property on the dimensionality-reduced data X̃, it is crucial to make sure that the conditions of Theorem 2, such as the two geometric conditions, still hold after the linear transformation. We denote by β̃* the optimal solution to the compressed problem. We also define the quantities M̃_i, σ̃_{Ỹ,r}, σ_{X̃,r} and μ̃_r used in the analysis of the subspace detection property, corresponding to M_i, σ̃_{Y,r}, σ_{X,r} and μ_r used in the analysis on the original data, where H_{ỹ_i,d̃_k} is the set of all external subspaces of ỹ_i with dimension no greater than d̃_k in the space transformed by P. Theorem 4. (Subspace detection property holds for Noisy-DR-ℓ0-SSC under the deterministic model) Let the nonzero vector β* be the optimal solution to the noisy ℓ0-SSC subproblem for point x_i with ‖β*‖_0 = r*, let n_k ≥ d_k + 1 for every 1 ≤ k ≤ K, and suppose there exists 1 < r_0 ≤ d such that 1 < r* ≤ r_0. Suppose Y is in general position, δ < min_{1≤r<r_0} σ̃_{Y,r}, and define M̃_{i,δ} = M̃_i − δ. Suppose the transformed analogues of the geometric conditions hold for all y_i ∈ S_k and 1 ≤ k ≤ K. Then, with probability at least 1 − 6e^{−p}, the subspace detection property holds for x̃_i with β̃*. Here M̃_i, μ̃_r and σ_{X̃,r_0} are the transformed quantities defined above. We employ Proximal Gradient Descent (PGD) to optimize the objective functions of noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC. In the k-th iteration of PGD for the subproblem of x_i, the variable β is updated according to

β^(k) = T_{√(2sλ)}( β^(k−1) − s∇g(β^(k−1)) ),

where s is the step size, g(β) = ‖x_i − Xβ‖_2², and T_θ is the element-wise hard thresholding operator: [T_θ(v)]_j = v_j if |v_j| > θ, and 0 otherwise. The sequence {β^(k)} generated by PGD converges to a critical point of the subproblem, denoted by β̂. Let β* be the optimal solution to the subproblem; it can be shown that ‖β* − β̂‖_2 is bounded. Theorem 5 establishes conditions under which β̂ is also the optimal solution: namely, β̂ = β* if λ is bounded on both sides and β̂_min = min_{t: β̂_t ≠ 0} |β̂_t| is sufficiently large. Theorem 5. (Conditions under which the sub-optimal solution found by PGD is also globally optimal) If λ lies in a suitable two-sided range and β̂_min is sufficiently large, then β̂ = β*.
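The two computational pieces just described, the randomized range finder that induces P = Q^⊤ and the PGD iteration with hard thresholding, can be sketched in NumPy as follows. The step size 1/L with L = 2‖X‖_2² and the threshold √(2sλ) follow from the standard proximal operator of the ℓ0 penalty; the exact constants used in the paper may differ.

```python
import numpy as np

def random_range_projection(X, p, seed=0):
    """P = Q^T from a randomized rank-p range finder: XT = QR."""
    rng = np.random.RandomState(seed)
    T = rng.randn(X.shape[1], p)           # n x p standard Gaussian matrix
    Q, _ = np.linalg.qr(X @ T)             # d x p orthonormal basis of range(XT)
    return Q.T                             # p x d projection matrix

def hard_threshold(v, theta):
    return v * (np.abs(v) > theta)

def noisy_l0_ssc_column(X, i, lam=0.7, iters=200):
    """PGD for min_beta ||x_i - X beta||_2^2 + lam ||beta||_0, with beta_i = 0."""
    d, n = X.shape
    x = X[:, i]
    s = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)   # step size 1/L, L = 2||X||_2^2
    beta = np.zeros(n)
    for _ in range(iters):
        grad = 2 * X.T @ (X @ beta - x)
        beta = hard_threshold(beta - s * grad, np.sqrt(2 * s * lam))
        beta[i] = 0.0                           # enforce the constraint
    return beta
```

A full clustering pipeline would then stack the per-column codes into Z, form the similarity matrix W described in the experiments, and apply spectral clustering to W.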
We demonstrate the performance of noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC, with comparison to other competing clustering methods including K-means (KM), Spectral Clustering (SC), noisy SSC, Sparse Manifold Clustering and Embedding (SMCE), and SSC-OMP. With the coefficient matrix Z obtained by the optimization of noisy ℓ0-SSC or Noisy-DR-ℓ0-SSC, a sparse similarity matrix is built by W = (|Z| + |Z^⊤|)/2, and spectral clustering is performed on W to obtain the clustering results. Two measures are used to evaluate the performance of the different clustering methods: Accuracy (AC) and Normalized Mutual Information (NMI). We use a randomized rank-p decomposition of the data matrix in Noisy-DR-ℓ0-SSC with p = min{d, n}/10. It can be observed that noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC always achieve better performance than the other methods in Table 1, including noisy SSC on dimensionality-reduced data (Noisy-DR-SSC). Throughout all the experiments we find that the best clustering accuracy is achieved whenever λ is chosen with 0.5 < λ < 0.95, justifying our theoretical findings in Remark 4 and Theorem 5. More experimental results on the CMU Multi-PIE data are shown in Table 2. For all the methods that involve random projection, we conduct the experiments 30 times and report the average performance. Note that the clustering accuracy of SSC-OMP on the Extended Yale-B dataset is reported according to published results. The time complexities of running PGD for noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC are O(Tnd) and O(Tpd), respectively, where T is the maximum number of iterations. The actual running times of both algorithms confirm these complexities, and we observe that Noisy-DR-ℓ0-SSC is always more than 8.7 times faster than noisy ℓ0-SSC with the same number of iterations. We have presented provable noisy ℓ0-SSC, which recovers subspaces from noisy data through ℓ0-induced sparsity in a robust manner, with theoretical guarantees on its correctness in terms of the subspace detection property under both deterministic and randomized models. Experimental results show the superior performance of noisy ℓ0-SSC. We also propose Noisy-DR-ℓ0-SSC, which performs noisy ℓ0-SSC on dimensionality-reduced data and still provably recovers the subspaces in the original data. Experiments demonstrate the effectiveness of both noisy ℓ0-SSC and Noisy-DR-ℓ0-SSC. Performing the above analysis for all 1 ≤ i ≤ n proves that the subspace detection property holds for all 1 ≤ i ≤ n. The following proposition is used in proving Lemma 1. Lemma B. (Perturbation of distance to subspaces) Let A, B ∈ R^{m×n} be two matrices with rank(A) = r and rank(B) = s. Also, let E = A − B with ‖E‖_2 ≤ C, where ‖·‖_2 indicates the spectral norm. Then for any point x ∈ R^m, the difference of the distances from x to the column spaces of A and B, namely |d(x, H_A) − d(x, H_B)|, is bounded in terms of C. Proof. Note that the projection of x onto the subspace H_A is AA^+x, where A^+ is the Moore-Penrose pseudo-inverse of the matrix A, so d(x, H_A) equals the distance between x and its projection, namely d(x, H_A) = ‖x − AA^+x‖_2, and similarly for B. According to the perturbation bound on orthogonal projections, the difference between the two projection operators is controlled by ‖E‖_2 ≤ C; combining these facts proves the lemma. Proof of Lemma 1. We have y_i = x_i − n_i, and σ_min(X_β) ≥ σ_{Y,r} − √r δ > 0, so X_β has full column rank. Also, ‖X_β − Y_β‖_2 ≤ ‖X_β − Y_β‖_F ≤ √r δ. The distance bound then follows from Lemma B. A.3 PROOF OF LEMMA 2. Proof. Since ‖x_i − Xβ*‖_2² + λ‖β*‖_0 ≤ ‖x_i − X·0‖_2² + λ‖0‖_0 = 1, we have c* = ‖x_i − Xβ*‖_2 < 1. We first prove that β* is the optimal solution to the sparse approximation problem min_β ‖β‖_0 s.t. ‖x_i − Xβ‖_2 ≤ c*, β_i = 0.
To see this, suppose there is a vector β′ such that ‖x_i − Xβ′‖_2 ≤ c* and ‖β′‖_0 < ‖β*‖_0; then β′ would attain a strictly smaller objective value for the noisy ℓ0-SSC subproblem, contradicting the optimality of β*, which proves the claim. For the proof of Lemma 3, note that ‖y − x_i‖_2 ≤ c_0, since c_0 ≥ d(x_i, S_k); also, d_k points in Y^(k) can linearly represent y, since Y^(k) is in general position, and the claimed support property of the optimal solution follows. For the proofs concerning the projected data, suppose A = U_A Σ V_A^⊤, where U_A and V_A have orthonormal columns with U_A^⊤U_A = V_A^⊤V_A = I. Then QA = U_{QA} Σ V_{QA}^⊤ is a singular value decomposition of QA, with U_{QA} = QU_A and V_{QA} = V_A. This is because the columns of U_{QA} are orthonormal, since the columns of Q are orthonormal: U_{QA}^⊤U_{QA} = U_A^⊤Q^⊤QU_A = I, and Σ is a diagonal matrix with nonnegative diagonal elements. It follows that σ_min(QA) = σ_min(A) for any A ∈ R^{p×q}. For a point x_i = y_i + n_i, after projection via P, the projected noise is ñ_i = Pn_i; because the rows of P = Q^⊤ are orthonormal, ‖ñ_i‖_2 ≤ ‖n_i‖_2, so the magnitude of the noise in the projected data is also bounded by δ. Also, let β ∈ R^n and Ỹ_β = PY_β with ‖β‖_0 = r; then σ_min(QỸ_β) = σ_min(Ỹ_β). It follows that, under the stated conditions, Ỹ is also in general position. In addition, since λ ≥ 1/r_0, we have λ‖β*‖_0 ≤ L ≤ 1, and it follows that ‖β*‖_0 ≤ 1/λ ≤ r_0. From the bound |σ̃_{Ỹ,r} − σ̃_{Y,r}| ≤ C_{p,p_0} + 2δ, it follows that δ < min_{1≤r<r_0} σ̃_{Ỹ,r}, because δ < min_{1≤r<r_0} σ̃_{Y,r} − C_{p,p_0} − 2δ√r_0. Again, for β ∈ R^n with ‖β‖_0 = r ≤ r_0, we have |σ_min(X̃_β) − σ_min(X_β)| = |σ_min(QX̃_β) − σ_min(X_β)|, and it can be verified that |σ_{X̃,r} − σ_{X,r}| ≤ C_{p,p_0}. Combining these bounds with Lemma D, and noting the resulting lower bound on σ_{X,r_0} − C_{p,p_0}, we obtain the corresponding bounds on M̃_{i,δ} for y_i ∈ S_k, and likewise on μ̃_r.
We propose Noisy-DR-L0-SSC (Noisy Dimension Reduction L0-Sparse Subspace Clustering) to efficiently partition noisy data in accordance to their underlying subspace structure.
1,417
scitldr
Mode connectivity provides novel geometric insights for analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks. In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. Our experiments cover various types of adversarial attacks applied to different network architectures and datasets. When network models are tampered with by backdoor or error-injection attacks, our results demonstrate that the path connection learned using a limited amount of bonafide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data. Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks. Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness. Recent studies on mode connectivity show that two independently trained deep neural network (DNN) models with the same architecture and loss function can be connected on their loss landscape using a high-accuracy/low-loss path characterized by a simple curve. This insight into loss landscape geometry provides easy access to a large number of similar-performing models on the low-loss path between two given models, and has been used to devise a new model ensembling method. Another line of recent research reveals interesting geometric properties relating to the adversarial robustness of DNNs. An adversarial data sample or model is defined to be one that is close to a bonafide data sample or model in some space, but exhibits unwanted or malicious behavior. Motivated by these geometric perspectives, in this study we propose to employ mode connectivity to study and improve the adversarial robustness of DNNs against different types of threats. A DNN can possibly be tampered with by an adversary during different phases of its life cycle. For example, during the training phase, the training data can be corrupted with a designated trigger pattern associated with a target label, to implant a backdoor for a trojan attack on DNNs. During the inference phase, when a trained model is deployed for task-solving, prediction-evasive attacks are plausible, even when the model's internal details are unknown to an attacker. In this research, we will demonstrate that by using mode connectivity in loss landscapes, we can repair backdoored or error-injected DNNs. We also show that mode connectivity analysis reveals the existence of a robustness loss barrier on the path connecting regular and adversarially-trained models. We motivate the novelty and benefit of using mode connectivity for mitigating training-phase adversarial threats through the following practical scenario: as training DNNs is both time- and resource-consuming, it has become a common trend for users to leverage pre-trained models released in the public domain. Users may then perform model fine-tuning or transfer learning with a small set of bonafide data that they have.
However, publicly available pre-trained models may carry an unknown but significant risk of tampering by an adversary. It can also be challenging to detect this tampering, as in the case of a backdoor attack, since a backdoored model will behave like a regular model in the absence of the embedded trigger. Therefore, it is practically helpful to provide tools to users who wish to utilize pre-trained models while mitigating such adversarial threats. We show that our proposed method, using mode connectivity with a limited amount of bonafide data, can repair backdoored or error-injected DNNs while greatly countering their adversarial effects. Our main contributions are summarized as follows:
• For backdoor and error-injection attacks, we show that the path connecting two tampered models, trained using limited bonafide data, can be used to repair and redeem the attacked models, thereby resulting in high-accuracy and low-risk models. The performance of mode connectivity is significantly better than several baselines, including fine-tuning, training from scratch, pruning, and random weight perturbations. We also provide technical explanations for the effectiveness of our path connection method based on model weight space exploration and similarity analysis of input gradients for clean and tampered data.
• For evasion attacks, we use mode connectivity to study standard and adversarial-robustness loss landscapes. We find that between a regular and an adversarially-trained model, training a path with the standard loss reveals no barrier, whereas the robustness loss on the same path reveals a barrier. This insight provides a geometric interpretation of the "no free lunch" hypothesis in adversarial robustness. We also provide technical explanations for the high correlation observed between the robustness loss and the largest eigenvalue of the input Hessian matrix on the path.
• Our experimental results on different DNN architectures (ResNet and VGG) and datasets (CIFAR-10 and SVHN) corroborate the effectiveness of using mode connectivity in loss landscapes to understand and improve adversarial robustness. We also show that our path connection is resilient to the considered adaptive attacks that are aware of our defense.
To the best of our knowledge, this is the first work that proposes using mode connectivity approaches for adversarial robustness.

2 BACKGROUND AND RELATED WORK

Let w1 and w2 be two sets of model weights corresponding to two neural networks independently trained by minimizing any user-specified loss ℓ(w), such as the cross-entropy loss. Moreover, let φθ(t), with t ∈ [0, 1], be a continuous piecewise-smooth parametric curve with parameters θ, such that its two ends are φθ(0) = w1 and φθ(1) = w2. To find a high-accuracy path between w1 and w2, it is proposed to find the parameters θ that minimize the expectation of the loss over a uniform distribution on the curve,

L(θ) = E_{t∼qθ(t)} [ ℓ(φθ(t)) ],

where qθ(t) is the distribution for sampling the models on the path indexed by t. Since qθ(t) depends on θ, in order to render the training of the high-accuracy path connection more computationally tractable, it was proposed to instead use the loss

L(θ) = E_{t∼U(0,1)} [ ℓ(φθ(t)) ],

where U(0,1) is the uniform distribution on [0, 1]. The following functions are commonly used for characterizing the parametric curve φθ(t). Polygonal chain. The two trained networks w1 and w2 serve as the endpoints of the chain, and the bends of the chain are parameterized by θ.
For instance, the case of a chain with one bend θ is

φθ(t) = 2(tθ + (0.5 − t)w1) for 0 ≤ t ≤ 0.5,  and  φθ(t) = 2((t − 0.5)w2 + (1 − t)θ) for 0.5 ≤ t ≤ 1.

Bezier curve. A Bezier curve provides a convenient parametrization of smoothness on the paths connecting endpoints. For instance, a quadratic Bezier curve with endpoints w1 and w2 is given by

φθ(t) = (1 − t)²w1 + 2t(1 − t)θ + t²w2,  0 ≤ t ≤ 1.

It is worth noting that, while current research on mode connectivity mainly focuses on generalization analysis and has found remarkable applications such as fast model ensembling, our results show that its implications for adversarial robustness, through the lens of loss landscape analysis, are a promising yet largely unexplored research direction. Prior work scratched the surface but focused on interpreting the decision surface in the input space and only considered evasion attacks. Backdoor attack. A backdoor attack on DNNs is often accomplished by designing a designated trigger pattern with a target label implanted into a subset of the training data, which is a specific form of data poisoning. A backdoored model trained on the corrupted data will output the target label for any data input containing the trigger, and it will behave as a normal model when the trigger is absent. For mitigating backdoor attacks, the majority of research focuses on backdoor detection or on filtering anomalous data samples from the training data for re-training, while our aim is to repair backdoored models using mode connectivity and a limited amount of bonafide data. Evasion attack. An evasion attack is a type of inference-phase adversarial threat that generates adversarial examples by making slight modifications to a benign data sample so as to manipulate the model prediction. For image classification models, an evasion attack can be accomplished by adding imperceptible noise to natural images, resulting in misclassification. Different from training-phase attacks, an evasion attack does not assume access to the training data. Moreover, it can be executed even when the model details are unknown to the adversary, via black-box or transfer attacks. Error-injection attack. Different from attacks that modify data inputs, an error-injection attack injects errors into the model weights at the inference phase and aims to cause misclassification of certain input samples. At the hardware level of a deployed machine learning system, this can be made plausible via laser beams and row hammering that change or flip the logic values of the corresponding bits, thus modifying the model parameters saved in memory. Here we report the experimental results, provide technical explanations, and elucidate the effectiveness of using mode connectivity for studying and enhancing adversarial robustness in three representative themes: (i) backdoor attack; (ii) error-injection attack; and (iii) evasion attack. Our experiments were conducted on different network architectures (VGG and ResNet) and datasets (CIFAR-10 and SVHN). The details of the experimental setups are given in Appendix A. When connecting models, we use the cross entropy loss and the quadratic Bezier curve described above. In what follows, we begin by illustrating the problem setups bridging mode connectivity and adversarial robustness, summarizing the results for high-accuracy (low-loss) pathways between untampered models for reference, and then delving into detailed discussions. Depending on the context, we use the terms error rate and accuracy on clean/adversarial samples interchangeably. The error rate of adversarial samples is equivalent to their attack failure rate, i.e. 100% minus the attack accuracy.
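A minimal PyTorch sketch of training the quadratic Bezier connection is given below: at each step a point t is sampled uniformly, the weights φθ(t) are formed from the two endpoints and the trainable bend θ, and the loss at that point is backpropagated into θ. The `forward(weights, x)` function, which runs the network with an explicit list of weight tensors (e.g. via torch.func.functional_call), is an assumption of this sketch, as are the hyperparameters.

```python
import itertools
import torch

def bezier_point(w1, theta, w2, t):
    """Quadratic Bezier: (1 - t)^2 w1 + 2 t (1 - t) theta + t^2 w2,
    applied parameter tensor by parameter tensor."""
    return [(1 - t) ** 2 * a + 2 * t * (1 - t) * b + t ** 2 * c
            for a, b, c in zip(w1, theta, w2)]

def train_path(w1, w2, loader, forward, loss_fn, steps=1000, lr=1e-2):
    # initialise the bend halfway between the endpoints; only theta is trained
    theta = [((a + c) / 2).clone().requires_grad_(True) for a, c in zip(w1, w2)]
    opt = torch.optim.SGD(theta, lr=lr, momentum=0.9)
    data = itertools.cycle(loader)
    for _ in range(steps):
        x, y = next(data)
        t = float(torch.rand(()))             # t ~ U(0, 1): one point on the path
        weights = bezier_point(w1, theta, w2, t)
        loss = loss_fn(forward(weights, x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta
```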
Figure 1: Loss and error rate on the path connecting two untampered VGG models trained on CIFAR-10, with inference on the training set and on the test set. The path connection is trained using different settings, as indicated by the curve colors. The results for SVHN and ResNet are given in Appendix B. The inference results on the test set are evaluated using 5000 samples, which are separate from those used for path connection.

Problem setup for backdoor and error-injection attacks. We consider the practical scenario motivated in Section 1, where a user has two potentially tampered models and a limited number of bonafide data samples at hand. The tampered models behave like untampered ones on non-triggered/non-targeted inputs, so the user aims to fully exploit the models' power while alleviating adversarial effects. The problem setup also applies to the case of one tampered model, where we use the bonafide data to train a fine-tuned model and then connect the given and the fine-tuned models. Problem setup for evasion attack. For gaining a deeper understanding of evasion attacks, we consider the scenario where a user has access to the entire training dataset and aims to study the behavior of the models on the path connecting two independently trained models in terms of the standard and robust loss landscapes, including model pairs selected from regular and adversarially-trained models. Regular path connection between untampered models. Figure 1 shows the cross entropy loss and training/test error rates of models on the path connecting untampered models. The untampered models are independently trained using the entire training data. While prior results have demonstrated high-accuracy path connection using the entire training data, our path connection is trained using different portions of the original test data, corresponding to the scenario of a limited amount of bonafide data. Notably, when connecting two DNNs, a small number of clean data samples is capable of finding models with good performance. For example, path connection using merely 1000/2500 CIFAR-10 samples only reduces the test accuracy (on another 5000 samples) of VGG16 models by at most 10%/5%, respectively, when compared to the well-trained models (those at t = 0 and t = 1). In addition, regardless of the data size used for path connection, the model with the worst performance is usually located around t = 0.5, as it is geometrically the farthest model on the path from the two end models. Attack implementation. We follow the procedures in prior work to implement backdoor attacks and obtain two backdoored models trained on the same poisoned training data. The trigger pattern is placed at the bottom-right of the poisoned images, as shown in Appendix C. Specifically, 10% of the training data are poisoned by inserting the trigger and changing the original correct labels to the target label(s). Here we investigate two kinds of backdoor attacks: (a) the single-target attack, which sets the target label T to a specific label (we choose T = class 1); and (b) the all-targets attack, where the target label T is set to the original label i plus 1 modulo 9, i.e., T = i + 1 (mod 9).

Figure 2: Error rate against single-target and all-targets backdoor attacks on the connection path for CIFAR-10 (VGG). The error rate of clean/backdoored samples means the standard test error/attack failure rate, respectively.
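A sketch of the poisoning procedure just described is given below, assuming `images` is an array of shape (N, H, W, C) with pixel values in [0, 1]; the 4 × 4 trigger size is illustrative, and the all-targets relabeling follows T = i + 1 (mod 9) as above.

```python
import numpy as np

def poison(images, labels, rate=0.1, attack="single", target=1, seed=0):
    """Stamp a small white trigger patch at the bottom-right corner of a
    fraction `rate` of the images and relabel them."""
    rng = np.random.RandomState(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0           # 4x4 trigger patch
    if attack == "single":
        labels[idx] = target                 # single-target: fixed label
    else:
        labels[idx] = (labels[idx] + 1) % 9  # all-targets: T = i + 1 (mod 9)
    return images, labels
```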
Their performance on clean (untriggered) and triggered data samples is given in Table 1, and the prediction errors of triggered images relative to the true labels are given in Appendix D. The backdoored models have similar performance on clean data as untampered models but will indeed misclassify the majority of triggered samples. Compared to the single-target attack, the all-targets attack is more difficult and has a higher attack failure rate, since the target labels vary with the original labels.

Evaluation and analysis. We train a path connecting the two backdoored models with a limited amount of bonafide data. As shown in Figure 2, at both path endpoints (t = {0, 1}) the two tampered models attain low error rates on clean data but are also extremely vulnerable to backdoor attacks (low error rate on backdoored samples means high attack success rate). Nonetheless, we find that path connection with limited bonafide data can effectively mitigate backdoor attacks and redeem model power. For instance, the models at t = 0.1 or t = 0.9 can simultaneously attain similar performance on clean data as the tampered models while greatly reducing the backdoor attack success rate from close to 100% to nearly 0%. Moreover, most models on the path (e.g., when t ∈ [0.1, 0.9]) exhibit high resilience to backdoor attacks, suggesting mode connection with a limited amount of bonafide data can be an effective countermeasure. While having models resilient to backdoor attacks on the path, we also observe that the amount of bonafide data used for training the path has a larger impact on the performance on clean data. Path connection using fewer data samples will yield models with higher error rates on clean data, which is similar to the results of path connection between untampered models discussed in Section 3.1. The advantages of redeeming model power using mode connectivity are consistent when evaluated on different network architectures and datasets (see Appendix E).

Comparison with baselines. We compare the performance of mode connectivity against backdoor attacks with the following baseline methods: (i) fine-tuning backdoored models with bonafide data; (ii) training a new model of the same architecture from scratch with bonafide data; (iii) model weight pruning and then fine-tuning with bonafide data; and (iv) random Gaussian perturbation to the model weights, leading to a noisy model. The results are summarized in Table 2 and their implementation details are given in Appendix E. Evaluated on different network architectures and datasets, the path connection method consistently maintains superior accuracy on clean data while simultaneously attaining lower attack accuracy than the baseline methods, which can be explained by the ability of finding high-accuracy paths between two models using mode connectivity. For CIFAR-10 (VGG), even using as few as 50 bonafide samples for path connection, the resulting model in Table 2 still retains 63% clean accuracy while constraining backdoor accuracy to merely 2.5%. The best baseline method is fine-tuning, which has similar backdoor accuracy as path connection but attains lower clean accuracy (e.g., 17% worse than path connection when using 50 bonafide samples). For SVHN (ResNet), the clean accuracy of fine-tuning can be on par with path connection, but its backdoor accuracy is significantly higher than path connection. For example, when trained with 250 samples, they have the same clean accuracy but the backdoor accuracy of fine-tuning is 58.7% higher than path connection. Training from scratch does not yield competitive results given a limited amount of training data. Noisy models, perturbed by adding zero-mean Gaussian noise to the two models, are not effective against backdoor attacks and may suffer from low clean accuracy. Pruning gives high clean accuracy but has very little effect on mitigating backdoor accuracy.
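A rough sketch of how models along a trained path could be evaluated against these baselines (`set_flat_weights`, which writes a flattened weight vector into a model, is a hypothetical helper, and `quadratic_bezier` is the sketch from above; the data loaders are assumed given):

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

def sweep_path(model, w1, w2, theta, clean_loader, backdoor_loader):
    for t in torch.linspace(0, 1, 21):
        set_flat_weights(model, quadratic_bezier(w1, w2, theta, t.item()))  # hypothetical helper
        clean_acc = accuracy(model, clean_loader)
        attack_acc = accuracy(model, backdoor_loader)  # triggered inputs vs. target labels
        print(f"t={t:.2f} clean={clean_acc:.3f} backdoor={attack_acc:.3f}")
```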
Extensions. Our proposal of using mode connectivity to repair backdoored models can be extended to the case when only one tampered model is given. We propose to fine-tune the model using bonafide data and then connect the given model with the fine-tuned model. Similar to the aforementioned findings, path connection can retain good accuracy on clean data while becoming resilient to backdoor attacks. We refer readers to Appendix G for more details. In addition, we obtain similar results when the two backdoored models are trained with different poisoned datasets.

Technical explanations. To provide technical explanations for the effectiveness of our proposed path connection method in repairing backdoored models, we run two sets of analysis: (i) model weight space exploration and (ii) data similarity comparison. For (i), we generate 1000 noisy versions of a backdoored model via random Gaussian weight perturbations. We find that they suffer from low clean accuracy and high attack success rate, which suggests that a good model with high clean accuracy and low attack accuracy is unlikely to be found by chance. We also report the distinct difference between noisy models and models on the path in the weight space to validate the necessity of using our path connection for attack mitigation and model repairing. More details are given in Appendix H. For (ii), we run similarity analysis of the input gradients between the end (backdoored) models and models on the connection path for both clean data and triggered data. We find that the similarity of triggered data is much lower than that of clean data when the model is further away on the path from the end models, suggesting that our path connection method can neutralize the backdoor effect. More details are given in Appendix I. The advantage of our path connection method over fine-tuning demonstrates the importance of using the knowledge of mode connectivity for model repairing.

Adaptive attack. To justify the robustness of our proposed path connection approach to adaptive attacks, we consider the advanced attack setting where the attacker knows path connection is used for defense but cannot compromise the bonafide data that are private to a user. Furthermore, we allow the attacker to use the same path training loss function as the defender. To attempt breaking path connection, the attacker trains a compromised path such that every model on this path is a backdoored model and then releases the path-aware tampered models. We show that our approach is still resilient to this adaptive attack. More details are given in Appendix J.

Figure 3: Error rate against error-injection attack on the connection path for CIFAR-10 (VGG). The error rate of clean/targeted samples means standard-test-error/attack-failure-rate, respectively.

Attack implementation. We adopt the fault sneaking attack for injecting errors into model weights. Given two untampered and independently trained models, the errors are injected with selected samples as targets such that the tampered models will cause misclassification on targeted inputs and otherwise will behave as untampered models. More details are given in Appendix C.

Evaluation and analysis.
Similar to the setup in Section 3.2, Figure 3 shows the clean accuracy and attack accuracy of the models on the path connecting two error-injected models using a limited amount of bonafide data. For the error-injected models (t = {0, 1}), the attack accuracy is nearly 100%, which corresponds to a 0% attack failure rate on targeted samples. However, using path connection and a limited amount of bonafide data, the injected errors can be removed almost completely. Varying the size of the path training data consistently sanitizes the error-injected models and mainly affects the standard test error. Most of the models on the path can attain nearly 100% fault tolerance (i.e., 100% attack failure rate) to the injected errors. The models on the path near t = 0 or t = 1 have comparable performance on clean data and exhibit strong fault tolerance to injected errors. Similar findings are observed across different network architectures and datasets (see Appendix F).

Comparison with baselines and extensions. In Table 3, we adopt the same baselines as in Section 3.2 to compare with path connection. We find that only path connection and training-from-scratch can successfully sanitize the error-injected models and attain 0% attack accuracy; the other baselines are less effective. Table 3 also shows the clean accuracy of path connection is substantially better than the effective baselines, suggesting novel applications of mode connectivity for finding accurate and adversarially robust models. The extensions to other settings are discussed in Appendix G.

Technical explanations and adaptive attack. Consistent with the results in backdoor attacks, we explore the model weight space to demonstrate the significant difference between the models on our connection path and random noisy models. We also show that the similarity of error-injected images is much lower than that of clean images. In addition, our path connection is resilient to the advanced path-aware error-injection attack. More details are given in Appendices H, I and J.

To gain insights on mode connectivity against evasion attacks, here we investigate the standard and adversarial-robustness loss landscapes on the same path connecting two untampered and independently trained models. The path is trained using the entire training dataset for minimizing equation 2 with standard cross entropy loss. The robustness loss refers to the cross entropy of class predictions on adversarial examples generated by evasion attacks and their original class labels. Higher robustness loss suggests the model is more vulnerable to evasion attacks. In addition, we will investigate the robustness loss landscape connecting regular (non-robust) and adversarially-trained (robust) models, where the path is also trained with standard cross entropy loss. We will also study the behavior of the largest eigenvalue of the Hessian matrix associated with the cross entropy loss and the data input, which we call the input Hessian. As adversarial examples are often generated by using the input gradients, we believe the largest eigenvalue of the input Hessian can offer new insights on the robustness loss landscape, similar to the role of the model-weight Hessian in quantifying generalization performance.

Attack implementation. We uniformly select 9 models (t = {0.1, 0.2, . . ., 0.9}) on the path and run evasion attacks on each of them using the ℓ∞-norm-ball based projected gradient descent (PGD) method.
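A minimal PGD sketch matching the setup described here (ℓ∞ ball of radius ε = 8/255 with 10 iterations, as specified below; the step size α is an assumption, as it is not stated):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Non-targeted L-infinity PGD attack on a batch (x, y)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)  # keep a valid image
    return x_adv
```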
The robustness loss is evaluated using the non-targeted adversarial examples crafted from the entire test set, and the attack perturbation strength is set to ε = 8/255 with 10 iterations. We also use the PGD method for adversarial training to obtain adversarially-trained models that are robust to evasion attacks but pay the price of reduced accuracy on clean data.

Evaluation and analysis. To study the standard and robustness loss landscapes, we scrutinize the models on the path connecting the following pairs of models: (i) independently trained regular (non-robust) models; (ii) regular to adversarially-trained models; and (iii) independently adversarially-trained models. These results are shown in Figure 4. We summarize the major findings as follows.

• No standard loss barrier in all cases: Regardless of the model pairs, all models on the paths have similar standard loss metrics in terms of training and test losses, which is consistent with the previous results on the "flat" standard loss landscape for mode connectivity. The curve of standard loss in case (ii) is skewed toward one end due to the artifact that the adversarially-trained model (t = 1) has a higher training/test error than the regular model (t = 0).

• Robustness loss barrier: Unlike standard loss, we find that the robustness loss on the connection path has a very distinct characteristic. In all cases, there is a robustness loss barrier (a hill) between pairs of regular and adversarially-trained models. The gap (height) of the robustness loss barrier is more apparent in cases (ii) and (iii). For (ii), the existence of a barrier suggests the modes of regular and adversarially-trained models are not connected by the path in terms of robustness loss, which also provides geometrical evidence for the "no free lunch" hypothesis that adversarially robust models cannot be obtained without additional costs. For (iii), robustness loss barriers also exist. The models on the path are less robust than the two adversarially-trained models at the path ends, despite having similar standard losses. The results suggest that there are essentially no better adversarially robust models on the path connected by regular training using standard loss.

• High correlation between the largest eigenvalue of the input Hessian and robustness loss: Inspecting the largest eigenvalue of the input Hessian H_t(x) of a data input x on the path, denoted by λ_max(t), we observe a strong accordance between λ_max(t) and the robustness loss on the path, verified by the high empirical Pearson correlation coefficient (PCC) averaged over the entire test set as reported in Figure 4. As evasion attacks often use input gradients to craft adversarial perturbations to x, the eigenspectrum of the input Hessian indicates its local loss curvature and relates to adversarial robustness. The details of computing λ_max(t) are given in Appendix K.

Below we provide technical explanations for the empirically observed high correlation between λ_max(t) and the oracle robustness loss on the path, defined as max_{‖δ‖≤ε} l(w(t), x + δ).

Proposition 1. Let f_w(·) be a neural network classifier with its model weights denoted by w and let l(w, x) denote the classification loss (e.g., cross entropy of f_w(x) and the true label y of a data sample x). Consider the oracle robustness loss max_{‖δ‖≤ε} l(w(t), x + δ) of the model w(t) on the path, where δ denotes a perturbation to x confined by an ε-ball induced by a vector norm ‖·‖.
Assume (a) the standard loss l(w(t), x) on the path is a constant for all t ∈ [0, 1], and (b) the loss around x is well captured by its second-order Taylor expansion, l(w(t), x + δ) ≈ l(w(t), x) + δᵀ∇_x l(w(t), x) + ½ δᵀ H_t(x) δ, where ∇_x l(w(t), x) is the input gradient and H_t(x) is the input Hessian of l(w(t), x) at x. Let c denote the normalized inner product in absolute value between the largest eigenvector v of H_t(x) and ∇_x l(w(t), x), c = |vᵀ∇_x l(w(t), x)| / ‖∇_x l(w(t), x)‖. Then we have max_{‖δ‖≤ε} l(w(t), x + δ) ∼ λ_max(t) as c → 1.

Proof: The proof is given in Appendix L. Assumption (a) follows from the existence of high-accuracy paths in the standard loss landscape from mode connectivity analysis. Assumption (b) assumes the local landscape with respect to the input x can be well captured by its second-order curvature based on Taylor expansion. The value of c is usually quite large, which has been empirically verified in both regular and adversarially-trained models.

Extensions. Although we find that there is a robustness loss barrier on the path connected by regular training, we conduct additional experiments to show that it is possible to find a robust path connecting two adversarially-trained or regularly-trained model pairs using adversarial training, which we call the "robust connection" method. However, model ensembling using either the regular connection or the robust connection has little gain against evasion attacks, as adversarial examples are known to transfer between similar models. We refer readers to Appendix M for more details.

This paper provides novel insights on the adversarial robustness of deep neural networks through the lens of mode connectivity in loss landscapes. Leveraging mode connectivity between model optima, we show that path connection trained by a limited number of clean data can successfully repair backdoored or error-injected models and significantly outperforms several baseline methods. Moreover, we use mode connectivity to uncover the existence of a robustness loss barrier on the path trained by standard loss against evasion attacks. We also provide technical explanations for the effectiveness of our proposed approach and theoretically justify the empirically observed high correlation between the robustness loss and the largest eigenvalue of the input Hessian. Our findings are consistent and validated on different network architectures and datasets.

The performance of regular path connection of untampered models on SVHN with ResNet is presented in Figure A1. Figure A1: Loss and error rate on the path connecting two untampered ResNet models trained on SVHN (panels show inference on the training set and on the test set). The path connection is trained using different settings as indicated by the curve colors. The inference results on the test set are evaluated using 5000 samples, which are separate from those used for path connection.

Backdoor attack. The backdoor attack is implemented by poisoning the training dataset and then training a backdoored model with this training set. To poison the training set, we randomly pick 10% of the images from the training dataset and add a trigger to each image. The shape and location of the trigger are shown in Figure A2. Meanwhile, we set the labels of the triggered images to the target label(s) as described in Section 3.2.

Error-injection attack. We select 1000 images from the test set and pick 4 images as targeted samples with randomly selected target labels for inducing the attack. The target labels of the 4 selected images are different from their original correct labels.
The goal of the attacker is to change the classification of the 4 images to the target labels while keeping the classification of the remaining 996 images unchanged through modifying the model parameters. To obtain the models with injected errors on CIFAR-10, we first train two models with a clean accuracy of 88% and 86%, respectively. Keeping the classification of a number of images unchanged can help to mitigate the accuracy degradation incurred by the model weight perturbation. After perturbing the model weights, the 4 errors can be injected into the model successfully with 100% accuracy for their target labels. The accuracy for other clean images becomes 78% and 75%, respectively.

We show the prediction error of the triggered data relative to the true labels on all datasets and networks in Figure A3. The error rate means the fraction of triggered images having top-1 predictions different from the original true labels. The prediction error rate of triggered data is high at the path ends (t = 0, 1) since the two end models are tampered. It has a similar trend as the standard test error for models not too close to the path ends, suggesting path connection can find models having good classification accuracy on triggered data. Figure A3: Prediction error rate against backdoor attacks on the connection path.

Implementation Details for Table 2. For our proposed path connection method, we train the connection using different numbers of images as given in Table 2 for 100 epochs and then report the performance of the model associated with a selected index t on the path. For the fine-tuning and training-from-scratch methods, we report the model performance after training for 100 epochs. For the random Gaussian perturbation to model weights, we evaluate the model performance under Gaussian noise perturbations on the model parameters. There are two given models, which are the models at t = 0 and t = 1. The Gaussian noise has zero mean with a standard deviation equal to the absolute value of the difference between the two given models. We then add the Gaussian noise to the two given models respectively and test their accuracy for clean and triggered images. For Gaussian noise, the experiment is performed multiple times (50 times) and we report the average accuracy. We can see that adding Gaussian noise perturbations to the model does not necessarily change the model status from robust to non-robust or from non-robust to robust. The path connection or evolution from the model at t = 0 to that at t = 1 follows a specific path achieving robustness against the backdoor attack rather than random exploration. For pruning, we use a filter pruning method to prune filters from convolutional neural networks (CNNs) that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. We first prune about 60% of the parameters for VGG or 20% of the parameters for ResNet. Then we retrain the network with different numbers of images as given in Table 2 for 100 epochs. The clean accuracy and backdoor accuracy are as reported.

Figure A4 shows the error rates of clean and backdoored samples using CIFAR-10 on the connection path against single-target attacks. Figure A4: Error rate of single-target backdoor attack on the connection path for CIFAR-10. The error rate of clean/backdoored samples means standard-test-error/attack-failure-rate, respectively.
Figure A5 shows the error rates of clean and backdoored samples using SVHN on the connection path against single-target attacks. Figure A5: Error rate of single-target backdoor attack on the connection path for SVHN. The error rate of clean/backdoored samples means standard-test-error/attack-failure-rate, respectively. Table A2 shows the performance comparison of path connection and other baseline methods against the single-target backdoor attack evaluated on CIFAR-10 (ResNet) and SVHN (VGG).

Implementation Details for Table 3. For our proposed path connection method, we train the connection using different numbers of images as given in Table 3 for 100 epochs and then report the performance of the model associated with a selected index on the path. The start model and end model have been injected with the same 4 errors (misclassifying 4 given images), starting from two different unperturbed models obtained with different training hyper-parameters. For the fine-tuning and training-from-scratch methods, we report the model performance after training for 100 epochs. For the random Gaussian perturbation to model weights, we evaluate the model performance under Gaussian noise perturbations on the model parameters. There are two given models, which are the models at t = 0 and t = 1. The Gaussian noise has zero mean with a standard deviation equal to the absolute value of the difference between the two given models. We then add the Gaussian noise to the two given models respectively and test their accuracy for clean and triggered images. The experiment is performed multiple times (50 times) and we report the average accuracy. We can see that adding Gaussian noise perturbations to the model does not necessarily change the model status from robust to non-robust or from non-robust to robust. The path connection or evolution from the model at t = 0 to that at t = 1 follows a specific path achieving robustness against the attack rather than random exploration. The training-from-scratch baseline method usually obtains the lowest clean test accuracy, especially in the case of training with 50 images, where training with so few images does not improve the accuracy.

Figure A6 shows the performance of path connection against error-injection attacks evaluated on CIFAR-10 (ResNet) and SVHN (VGG). Figure A6: Error rate against error-injection attack on the connection path. The error rate of clean/targeted samples means standard-test-error/attack-failure-rate, respectively. Tables 3 and A3 show the performance comparison of path connection and other baseline methods against error-injection attacks evaluated on all combinations of network architectures and datasets.

In Sections 3.2 and 3.3, we consider the scenario where the two given models are tampered in the same way: using the same poisoned dataset for the backdoor attack and the same targeted images for the error-injection attack. Here we discuss how to apply path connection to the case when only one tampered model is given. In addition, we show that the resilient effect of path connection against these attacks still holds when the two given models are tampered in different ways.

Backdoor and error-injection attacks given one tampered model. We propose to first fine-tune the model using the bonafide data and then connect the original model with the fine-tuned model. The fine-tuning process uses 2000 images with 100 epochs. The path connection results are shown in Figure A7. The start model is a backdoored model with high accuracy for triggered images.
The end model is a fine-tuned model where the triggers no longer cause any misclassification. We can see that through path connection, we can eliminate the influence of the triggers quickly in some cases. For example, with 250 images, the error rate of triggered images reaches 100% at t = 0.25, while the clean accuracy at this point is lower than the fine-tuned model at t = 1. Similarly, for the case of the error-injection attack, we first fine-tune the model using the bonafide data and then connect the original model with the fine-tuned model. We follow the same settings as the backdoor attack for the fine-tuning and path connection. The performance in the one-tampered-model case is shown in Figure A8 (a). We can see that the effects of injected errors can be eliminated quickly through path connection while the clean accuracy is kept high.

Backdoor and error-injection attacks given two differently tampered models. If the given two backdoored models are trained with different poisoned datasets (e.g., different numbers of poisoned images), the path connection method works as well in this case. We train two backdoored models by poisoning 30% and 10% of the training dataset, respectively. The performance of the path connection between the two models is shown in Figure A9. We can see that the connection can quickly remove the adversarial effects of backdoor attacks.

Figure A7: Error rate under backdoor attack (single-target) on path connection for CIFAR-10 (VGG). The error rate of clean/triggered samples means the standard-test-error/attack-failure-rate, respectively.

Figure A8: Error rate against error-injection attack on the connection path for CIFAR-10 (VGG), for (a) one tampered model and (b) two tampered models with different injected errors. The error rate of clean/targeted samples means standard-test-error/attack-failure-rate, respectively.

If the two given models with injected errors are trained with different settings (e.g., a different total number of training images used to inject the errors), the path connection method works as well in this case. For the start and end models, the number of injected errors is set to 4. The number of images with the same classification requirement is set to 996 for the start model and 1496 for the end model, respectively. The performance of path connection is shown in Figure A8 (b). We can observe that it is able to obtain a robust model with high clean accuracy.

Figure A9: Error rate against backdoor attack (single-target and all-targets) on the connection path for CIFAR-10 (VGG). The error rate of clean/triggered samples means the standard-test-error/attack-failure-rate, respectively.

To explore the model weight space, we add Gaussian noise to the weights of a backdoored model to generate 1000 noisy versions of the backdoored model at t = 0. The Gaussian noise has zero mean and a standard deviation equal to the absolute difference between the two end models on the path. The distribution of clean accuracy and backdoor accuracy of these noisy models is reported in Figure A10 (a). The results show that the noisy models are not ideal for attack mitigation and model repairing since they suffer from low clean accuracy and a high attack success rate. In other words, it is unlikely, if not impossible, to find good models by chance. We highlight that finding a path robust to backdoor attacks between two backdoored models is highly non-intuitive, considering the high failure rate of adding noise.
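A sketch of the noisy-model generation just described (assuming flattened weight vectors for the two end models; the per-coordinate noise scale follows the description above):

```python
import torch

def make_noisy_models(w0, w1, n=1000, seed=0):
    """Perturb backdoored model w0 with zero-mean Gaussian noise whose standard
    deviation is the coordinate-wise absolute difference between the end models."""
    g = torch.Generator().manual_seed(seed)
    sigma = (w0 - w1).abs()
    return [w0 + torch.randn(w0.shape, generator=g) * sigma for _ in range(n)]
```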
We can observe a similar phenomenon for the error-injection attack in Figure A10 (b). Figure A10: Clean and attack accuracy distributions for 1000 noisy models under (a) the backdoor attack and (b) the error-injection attack.

To provide a technical explanation of why our path connection approach can mitigate backdoor and injection attacks, we compare the similarity of input gradients between the models on the connection path (t ∈ (0, 1)) and the end models (t = 0 or t = 1) in terms of clean data and tampered data. The rationale of inspecting input gradient similarity can be explained using the first-order approximation of the training loss function. Let l(x|w_t) denote the training loss function of a data input x given a model w_t on the connection path, where t ∈ [0, 1]. Then the first-order Taylor series approximation of l(x|w_t) with respect to a data sample x_i is l(x|w_t) ≈ l(x_i|w_t) + (x − x_i)ᵀ∇_x l(x_i|w_t), where ∇_x l(x_i|w_t) is the gradient of l(x|w_t) at x = x_i. Based on the flat loss of mode connectivity, we can further assume l(x_i|w_t) is a constant for any t ∈ [0, 1]. Therefore, for the same data sample x_i, the model w_t (t ∈ (0, 1)) will behave similarly to the end model w_0 (or w_1) if its input gradient ∇_x l(x_i|w_t) is similar to ∇_x l(x_i|w_0) (or ∇_x l(x_i|w_1)).

Figure A11 shows the average cosine similarity distance of the input gradients between the models on the path and the end models for the backdoor attack and the injection attack. The pairwise similarity metric for each data sample is defined as m = |s − 1|/2, where s is the cosine similarity of the input gradients between the models on the path and the end models. Smaller m means higher similarity of the input gradients. Comparing the minimum of the solid and dashed curves of different colors on the path respectively, which corresponds to the similarity to either one of the two end models, we find that the similarity of clean data is consistently higher than that of tampered data. Therefore, the sanitized models on the path can maintain similar clean data accuracy and simultaneously mitigate adversarial effects, as these models are dissimilar to the end models for the tampered data. Figure A11: Similarity distance between the models on the path and the two end models for CIFAR-10 (VGG), for the backdoor attack and the injection attack. Smaller distance value means higher similarity.
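A minimal sketch of this input-gradient similarity metric m = |s − 1|/2 (assuming two models and a batch of inputs; smaller m means more similar gradients):

```python
import torch
import torch.nn.functional as F

def gradient_similarity_distance(model_a, model_b, x, y):
    """m = |cos_sim - 1| / 2 between the input gradients of two models."""
    def input_grad(model):
        x_ = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_), y)
        return torch.autograd.grad(loss, x_)[0].flatten(1)
    s = F.cosine_similarity(input_grad(model_a), input_grad(model_b), dim=1)
    return ((s - 1).abs() / 2).mean()
```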
We consider the advanced attack setting where the attacker knows path connection is used for defense but cannot compromise the bonafide data. Furthermore, we allow the attacker to use the same path training loss function as the defender.

Backdoor attack. To attempt breaking path connection, the attacker first separately trains two backdoored models with one poisoned dataset. Then the attacker uses the same poisoned dataset to connect the two models and hence compromises the models on the path. Note that when training this tampered path, in addition to learning the path parameter θ, the start and end models are not fixed and they are fine-tuned by the poisoned dataset. Next, the adversary releases the start and end models (t = 0, 1) from this tampered path. Finally, the defender trains a path from these two models with bonafide data. We conduct the advanced (path-aware) single-target backdoor attack experiments on CIFAR-10 (VGG) by poisoning 10% of the images in the training set with a trigger. Figure A12 (a) shows the entire path has been successfully compromised due to the attacker's poisoned path training data, yielding less than a 5% attack error rate on 10000 test samples. Figure A12 (b) shows the defense performance with different numbers of clean data used to connect the two specific models released by the attacker. We find that path connection is still resilient to this advanced attack and most models on the path (e.g., t ∈ [0.25, 0.75]) can be repaired. Although this advanced attack indeed decreases the portion of robust models on the path, it still does not break our defense. In Table A4, we also compare the generalization and defense performance and demonstrate that path connection outperforms other approaches. Moreover, in the scenario where two tampered models are close in the parameter space (in the extreme case they are identical), we can leverage the proposed path connection method with one tampered model to alleviate the adversarial effects. The details are discussed in the "Extensions" paragraph and Appendix G.

Table A4: Performance against path-aware single-target backdoor attack on CIFAR-10 (VGG).

                              Clean accuracy                 Backdoor accuracy
Method / Bonafide data size   2500  1000  500   250   50     2500  1000  500   250   50
Path connection (t = 0.27)    90%   87%   83%   81%   75%    3.8%  2.9%  3.6%  4.2%  5.6%
Fine-tune                     88%   84%   82%   80%   69%    4.1%  4.6%  4.4%  3.9%  5.9%
Train from scratch            58%   48%   36%   28%   20%    0.6%  0.5%  0.9%  1.7%  1.9%
Noisy model (t = 0)           38%   38%   38%   38%   38%    91%   91%   91%   91%   91%
Noisy model (t = 1)           35%   35%   35%   35%   35%    86%   86%   86%   86%   86%
Prune                         87%   85%   83%   81%   79%    29%   48%   69%   77%   81%

Error-injection attack. For the advanced attack in the error-injection setting, the attacker first trains two models with injected errors and then connects the two models with clean data. Then, the attacker tries to inject errors into the three models w_1, w_2 and θ. Note that based on the path parametrization (the polygonal chain or the Bezier curve), all the models on the path can be expressed as a combination of these three models. Thus, the whole path is expected to be tampered. Next, the attacker releases the start and end models from the "bad" path. Finally, the defender trains a path from these two models with bonafide data. We conduct the advanced (path-aware) error-injection attack experiments on CIFAR-10 (VGG) by injecting 4 errors. Figure A13 (a) shows that the entire path has been successfully compromised by the advanced path-aware injection. While the clean error rate is stable across the path, at least 3 out of 4 injected errors (corresponding to a 25% attack error rate) yield successful attacks. After training a path to connect the two released models with the bonafide data, the clean error and attack error are shown in Figure A13 (b). We can observe that path connection can effectively eliminate the injected errors on the path (i.e., high attack error rate).

Evasion attack. After training the connection with the training set for 100 epochs, adversarial examples of the whole test set generated by the ℓ∞-norm based PGD method with attack strength ε = 8/255 and 10 attack iterations are used to test the performance of the path connection.

Adversarial training. For adversarial training, at each model weight update stage, we first generate adversarial examples with the PGD method. The attack strength is set to ε = 8/255 with 10 attack iterations. After the adversarial examples are obtained, we then update the model weights based on the training losses induced by these adversarial examples with their correct labels.
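A sketch of one adversarial training step as described (reusing the `pgd_linf` sketch from above; the optimizer and data batches are assumed given):

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y):
    """Inner maximization with PGD, then a weight update on the adversarial batch."""
    model.eval()
    x_adv = pgd_linf(model, x, y, eps=8/255, steps=10)  # sketch defined earlier
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```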
Input Hessian. As adversarial examples apply small perturbations around the clean images to increase the robustness loss function, thus incurring a misclassification, this relates to the Hessian matrix of the loss function with respect to the input images. A Hessian matrix (or Hessian) consists of the second-order partial derivatives of a scalar-valued function, describing the local curvature of a function of many variables. In general, a large Hessian spectrum means the function reaches a sharp minimum, thus leading to a more vulnerable model as the input can leave this minimum with small distortions. By contrast, in the case of a flat minimum with a small Hessian spectrum, it takes more effort for the input to leave the minimum. As we find a robustness loss barrier on the path, we are interested in the evolution of the input Hessian for the models on the path. We uniformly pick the models on the connection and compute the largest eigenvalue of the Hessian with respect to the inputs. To deal with the high-dimensionality difficulties of the Hessian calculation, the power iteration method is adopted to compute the largest eigenvalue of the input Hessian, with the first-order derivatives obtained by back-propagation. Unless otherwise specified, we continue the power iterations until reaching a relative error of 1E-4. For ease of visual comparison, in Figure 4 we plot the log value of the largest eigenvalue of the input Hessian together with the error rate and loss for clean images and adversarial examples. We note that the Pearson correlation coefficient (PCC) is indeed computed using the original largest eigenvalue of the input Hessian and the robustness loss. As demonstrated in Figure 4, the evolution of the input Hessian is very similar to the change of the loss of adversarial examples. As we can see, the largest eigenvalue on the path does not necessarily have a high correlation with the error rate of the clean images in the training set and test set or of the adversarial examples. Instead, it seems to be highly correlated with the loss of adversarial examples, as they share very similar shapes. This inspires us to explore the relationship between the largest eigenvalue of the input Hessian and the robustness loss.

For simplicity, here we drop the notation dependency on the input data sample x. It also suffices to consider two models w := w(t) and w + ∆w := w(t + ∆t) for some small ∆t on the path for our analysis. We begin by proving the following lemma. Lemma 1. Given assumption (a), for any vector norm ‖·‖ and for any data sample x, [...]

Figure A14: Loss, error rate, attack success rate and largest eigenvalue of the input Hessian on the path connecting different model pairs on CIFAR-10 (VGG) using robust connection. The path is obtained with the robust training method. The error rate of training/test data means standard training/test error, respectively. There is still a robustness loss barrier between non-robust and robust models, but there is no robustness loss barrier between robust model pairs and non-robust model pairs (verified by the small loss variance and flat attack success rate on the path). There is also a high correlation between the robustness loss and the largest eigenvalue of the input Hessian, and their Pearson correlation coefficient (PCC) is given in the title of each plot.

To solve the problem, we first sample t̃ from the uniform distribution U(0, 1) and obtain the model φ_θ(t̃).
Based on the model φ_θ(t̃), we find the perturbation maximizing the loss within the range S, max_{δ∈S} l(φ_θ(t̃), x + δ). We can use the projected gradient descent method for the maximization, δ ← Π_S(δ + α · sgn(∇_x l(φ_θ(t̃), x + δ))), where Π_S denotes the projection onto the feasible perturbation space S, and sgn(·) denotes the element-wise sign function taking the value of either 1 or −1. After finding the perturbation δ̃ maximizing the loss, we minimize the expectation over t. At each iteration, we make a gradient step for θ, θ ← θ − η ∇_θ l(φ_θ(t̃), x + δ̃), where η is the learning rate.

We show the robust connection for a pair of non-robust models, a pair of non-robust and robust models, and a pair of robust models in Figure A14. We can observe that with the robust training method, there is still a robustness loss barrier between the non-robust and robust models. However, for the robust connection of robust model pairs and non-robust (regular) model pairs, there is no robustness loss barrier, verified by the small loss variance and flat attack success rate on the path. Our results also suggest that there is always a loss barrier between non-robust and robust models, regardless of whether standard or robust loss functions are used for path connection, as intuitively the two models are indeed not connected in their respective loss landscapes. Moreover, the attack success rates of adversarial examples are relatively small on the whole path compared with the robust connection of non-robust and robust models.

Model ensembling. Here we test the performance of (naive) model ensembling against evasion attacks. Given two untampered and independently trained CIFAR-10 (VGG) models, we first build a regular connection of them. Then we randomly choose models on the path (randomly choosing the value of t) and take the average output of these models as the final output given an input image for classification. The adversarial examples are generated based on the start model (t = 0) or end model (t = 1) and we assume the attacker does not have any knowledge about the connection path or the models on the path. We use these adversarial examples to test the performance of model ensembling with the models on the connection. The attack success rate of adversarial examples can decrease from 85% to 79%. The defense improvement of this naive model ensembling strategy is not very significant, possibly due to the well-known transferability of adversarial examples. Similar findings hold for the robust connection method.

To evaluate the proposed path connection method on more complicated image recognition benchmarks, we demonstrate its performance against backdoor and error-injection attacks on CIFAR-100, as shown in Figure A15. The experimental setting is similar to that of CIFAR-10, where the two end models are backdoored/error-injected, and the connection is trained with various bonafide data sizes. We can observe that our method is still able to remove the adversarial effects of backdooring or error-injection and repair the model. For the evasion attack, we investigate two cases: 1) the connection of two regular models and 2) the connection of regular and adversarially-trained models. The performance of adversarial training on CIFAR-100 is not as significant as that on CIFAR-10, so we do not investigate the connection of two adversarially-trained models. Their loss and eigenvalue on the path are shown in Figure A16. We can observe that there is also a high correlation between the robustness loss (loss of adversarial examples) and the largest eigenvalue of the input Hessian.
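A sketch of the power-iteration computation of λ_max described above, using Hessian-vector products via double back-propagation (the stopping tolerance follows the 1E-4 relative error mentioned earlier; written for a single input x):

```python
import torch
import torch.nn.functional as F

def input_hessian_lambda_max(model, x, y, iters=100, tol=1e-4):
    """Largest eigenvalue of the Hessian of the loss w.r.t. the input x."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    v = torch.randn_like(x)
    v /= v.norm()
    lam = 0.0
    for _ in range(iters):
        # Hessian-vector product: d(grad . v)/dx = H v
        hv = torch.autograd.grad(grad, x, grad_outputs=v, retain_graph=True)[0]
        lam_new = torch.dot(hv.flatten(), v.flatten()).item()  # Rayleigh quotient
        v = hv / (hv.norm() + 1e-12)
        if abs(lam_new - lam) <= tol * max(abs(lam_new), 1e-12):
            return lam_new
        lam = lam_new
    return lam
```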
We demonstrate the performance of fine-tuning with various hyper-parameter configurations in this section. For CIFAR-10 (VGG), we perform fine-tuning with different learning rates and numbers of total epochs, with bonafide data of 2500 images and 1000 images, respectively. The clean accuracy and attack accuracy are shown in Figure A17. We also plot the clean and attack accuracy obtained through our path connection method from Table 2 in Figure A17 as a reference. As observed from Figure A17 (a), a larger learning rate (such as 0.05) can decrease the attack accuracy more rapidly, but the clean accuracy may suffer from a relatively large degradation. A small learning rate (such as 0.01) can achieve high clean accuracy, but the attack accuracy may decrease much more slowly, leading to high attack accuracy when fine-tuning stops. This is more obvious if we use less bonafide data (reducing the data size from 2500 to 1000), as shown in Figure A17 (b): fine-tuning performs worse, with lower clean accuracy and higher attack accuracy. Since fine-tuning is quite sensitive to these hyper-parameters, we conclude that it is not easy to choose an appropriate learning rate and number of fine-tuning epochs, especially considering that the user is not able to observe the attack accuracy in practice. On the other hand, in Figure A17 (a) our path connection method achieves the highest clean accuracy. In Figure A17 (b), although the clean accuracy of lr = 0.01 is higher than that of path connection, its attack accuracy remains high (about 40%), which is much larger than that of path connection (close to 0%).

Figure A16: Loss, error rate, attack success rate and largest eigenvalue of the input Hessian on the path connecting different model pairs on CIFAR-100 (VGG) using standard loss. The error rate of training/test data means standard training/test error, respectively. In all cases, there is no standard loss barrier but there is a robustness loss barrier. There is also a high correlation between the robustness loss and the largest eigenvalue of the input Hessian, and their Pearson correlation coefficient (PCC) is reported in the title.

In Appendix E, we perform multiple runs for the Gaussian noise experiment and only report the average accuracy. The variance is not reported since the average accuracy is sufficient to demonstrate that the Gaussian noise method is not a good choice for removing adversarial effects. Investigating the stability of the path connection method with respect to every possible factor would cost a considerable amount of time and resources, considering the various attack methods, datasets, and model architectures. So here we mainly perform one representative experiment setup with multiple runs and show the mean and standard deviation. Figure A18 shows the error bars of the error rate computed over 10 runs for path connection against the backdoor attack. The dataset is CIFAR-10 and the model architecture is ResNet. For each bonafide data size, we train 10 connections with different hyper-parameter settings, that is, starting from random initializations and using various learning rates (randomly set to 0.005, 0.01 or 0.02). Their average value and standard deviation are shown in Table A5. We can observe that although the connections may start from different initializations and be trained with different learning rates, their performance on the path is close, with a relatively small variance, demonstrating the stability of our proposed method.
Figure A18: Error rate against backdoor attack for CIFAR-10 (ResNet): (a) test error bars and (b) attack error bars.

Q STRATEGY TO CHOOSE THE PARAMETER t

In our proposed method, we need to choose a model on the path as the repaired model, that is, to choose the value of t. For different datasets/models, the best choice of t may vary, so we discuss some general principles for choosing an appropriate t in this section. We note that in practice the user is not able to observe the attack error rate, as the user does not have knowledge about specific attacks. If the user is able to observe the accuracy on the whole clean test set, we suggest the user choose the model (a value of t ∈ [0, 1]) with a test accuracy of a − ∆a, where a is the accuracy of the end model and ∆a represents a threshold. Based on the performance evaluations (Figures 2, 3, A4, A5, and A6), setting ∆a to 6% should be an appropriate choice, which is able to eliminate the effects of all attacks without significantly sacrificing the clean accuracy. If the user is not able to access the accuracy on the whole clean test set, the user has to choose t based only on the bonafide data. In this case, we suggest the user use the k-fold cross-validation method to assess the test accuracy. This method first shuffles the bonafide data randomly and splits it into k groups. Then one group is kept to test the accuracy on the learned path and the remaining k − 1 groups are used to train the path connection. The process is repeated for each group. We perform additional experiments with the 5-fold cross-validation method for CIFAR-10 (VGG) and SVHN (ResNet). The average validation error rate on the hold-out set and the attack error rate against the backdoor attack are shown in Figure A19. The error-injection attack is easier to counter and hence we do not explore the error-injection experiments. We can observe that since the validation data size reduces to a much smaller value (one fifth of the bonafide data size), the test error rate becomes less stable. But it generally follows the trends of the test error rate on the whole clean test set (see Figure A4). So by utilizing the k-fold cross-validation method, we can estimate the test accuracy using the limited validation set. Then the aforementioned threshold method can be used to choose the model on the path. Here we suggest setting ∆a to 10%, a more conservative threshold than in the former case, as the user now does not have access to the accuracy on the whole test set. We also note that the performance of bonafide data with 50 images has a large deviation from the other data size settings, so we suggest using a larger bonafide data size with the k-fold cross-validation method.
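A sketch of this k-fold selection procedure (heavily hypothetical helpers: `train_path` fits the curve parameter θ on a data subset, `path_model` instantiates the model at index t, and `evaluate` returns accuracy on held-out arrays; the preference for mid-path models is a design choice motivated by their distance from the tampered ends):

```python
import numpy as np

def choose_t(x, y, w1, w2, k=5, delta_a=0.10):
    """Pick a t whose k-fold validation accuracy stays within delta_a of the end model,
    preferring models near the middle of the path."""
    grid = np.linspace(0.05, 0.95, 19)
    folds = np.array_split(np.random.permutation(len(x)), k)
    acc = np.zeros((k, len(grid)))
    end_acc = np.zeros(k)
    for i in range(k):
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        va = folds[i]
        theta = train_path(w1, w2, x[tr], y[tr])                              # hypothetical
        end_acc[i] = evaluate(path_model(w1, w2, theta, 0.0), x[va], y[va])   # hypothetical
        for j, t in enumerate(grid):
            acc[i, j] = evaluate(path_model(w1, w2, theta, t), x[va], y[va])  # hypothetical
    mean_acc, a = acc.mean(axis=0), end_acc.mean()
    candidates = [t for t, m in zip(grid, mean_acc) if m >= a - delta_a]
    return min(candidates, key=lambda t: abs(t - 0.5)) if candidates else 0.5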
A novel approach using mode connectivity in loss landscapes to mitigate adversarial effects, repair tampered models and evaluate adversarial robustness
1,418
scitldr
Generative adversarial networks (GANs) learn to map samples from a noise distribution to a chosen data distribution. Recent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution. For example, a single generator struggles to map continuous noise (e.g. a uniform distribution) to discontinuous output (e.g. separate Gaussians) or complex output (e.g. intersecting parabolas). We address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks. We contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network. Thus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator. The resulting Noise Prior GAN (NPGAN) achieves expressivity and flexibility that surpasses both single-generator models and previous multi-generator models.

Learning generative models of high-dimensional data is of perpetual interest, as its wide suite of applications includes synthesizing conversations, creating artwork, and designing biological agents. Deep models, especially generative adversarial networks (GANs), have significantly improved the state of the art at modeling these complex distributions, thus encouraging further research. Whether implicitly or explicitly, works that use GANs make a crucial modeling decision known as the manifold assumption. This is the assumption that high-dimensional data lies on a single low-dimensional manifold which smoothly varies and where local Euclidean distances in the low-dimensional space correspond to complex transformations in the high-dimensional space. While generally true in many applications, this assumption does not always hold. For example, recent work has emphasized situations where the data lies not on one single manifold, but on multiple, disconnected manifolds. In this case, GANs must attempt to learn a continuous cover of the multiple manifolds, which inevitably leads to the generation of off-manifold points which lie in between. The generator tries to minimize the number of these off-manifold points, and thus they are generally just a small fraction of the total generated distribution. As such, they barely affect the typical GAN evaluation measures (like Inception and FID scores for images), which measure the quality of the generated distribution as a whole. Thus, this problem is usually ignored, as other aspects are prioritized. However, in some applications, the presence of these bad outliers is more catastrophic than slight imperfections in modeling the most dense regions of the space. For example, consider the goal of an artificial agent acting indistinguishably from a human: the famous Turing Test. Incorrectly modeling sentence density by using a given sentence structure 60% of the time instead of 40% of the time is relatively harmless. However, generating a single gibberish sentence will give away the identity of the artificial agent. Moreover, there are serious concerns about the implications this has for proofs of GAN convergence.

Figure 1: The Noise-Prior GAN (NPGAN) architecture. Unlike previous work, the NP network learns a prior over the generators conditioned on the noise distribution z.
This allows it to both control the sampling frequency of the generators and shape the input appropriately for each one, in an end-to-end differentiable framework.

Prior works address the problem of disconnected manifolds by simultaneously training multiple generators and using established regularizations to coax them into dividing up the space and learning separate manifolds. Methods for getting multiple generators to generate disconnected manifolds can be divided into two categories: (i) imposing information-theoretic losses to encourage output from different generators to be distinguishable, and (ii) changing the initial noise distribution to be disconnected. Our approach falls into the second category. Previous efforts to change the noise distribution to handle disconnectedness have exclusively taken the form of sampling from a mixture of Gaussians rather than the typical single Gaussian (with sampling fixed and uniform over the mixture). Our approach differs significantly from those. We use multiple generators as before, but instead of dividing up the noise space into factorized Gaussians and sending one to each generator, we let an additional neural network determine how best to divide up the noise space and dispatch it to each generator. This network learns a prior over the generators, conditioned on the noise space. Thus, we call our additional third network a noise-prior (NP) network. Previous methods have modeled the data with noise z and generators G_i sampled with a fixed, uniform prior; we instead propose a framework to incorporate a richer p(G_i|z) into the generator. This framework is entirely differentiable, allowing us to optimize the NP network along with the generators during training. We note that with this strategy, we significantly increase the expressivity of each generator over the previous disconnected-manifold models. By dividing up the space into four slices s_i and sending s_1, s_3 to the first generator and s_2, s_4 to the second generator, we can generate four disconnected manifolds with just two generators. Previous work would have to devote precisely four generators to this task, with degradation in performance if fewer or more generators are chosen for the hyperparameter. Here, the prior network learns to divide the noise space appropriately for whatever number of generators is chosen, and is thus more expressive as well as more robust than previous models. Moreover, much existing work has exclusively framed the problem as, and tailored solutions for, the disconnected manifold problem. Our approach is more general, addressing any misspecification between the noise distribution and the target distribution. This means that our approach does not become redundant or unnecessary in the case of single complex manifolds, for example. Our contributions can be summarized as: 1. We introduce the first multi-generator ensemble to learn a prior over the noise space, using a novel soft, differentiable loss formulation. 2. We present a multi-generator method that can learn to sample generators in proportion to the relative density of multiple manifolds. 3. We show how our model not only improves performance on disconnected manifolds, but also on complex-but-connected manifolds, which are more likely to arise in real situations.

Several previous works have included multiple generators, mixing and matching a few commonly used features. Some use completely distinct generators, while others tie some or all of their weights.
Most sample the generators randomly with equal probability, but one attempts to find (in a non-differentiable way) a sampling scheme that avoids sampling from redundant generators. Another line of work encourages diversity among generator outputs by introducing a classifier that tries to identify the generator a data point came from, or whether it is a real data point (reminiscent of an auxiliary classifier or a mutual information loss). A more theoretical analysis of convergence and equilibrium existence in the loss landscape has motivated a multiple-generator, multiple-discriminator mixture. We discuss in detail the works with the most resemblance to our approach here:

DeLiGAN. The DeLiGAN was designed to handle diverse datasets with a limited number of datapoints. It used a single generator and a Gaussian mixture model latent space. To train, a single random Gaussian i out of the mixture is chosen, and then µ_i is added to the Normal noise, which is multiplied by σ_i, with both µ_i and σ_i as learnable parameters. This differs from our work because while the noise is separated into different components, the probability of selecting each component is cut off from the gradient information in the model and is not differentiable (each Gaussian is selected with equal probability, and this never changes). Also, every component of the noise is parameterized as a Gaussian. Finally, only one component of the noise is trained at a time (a single µ_i and σ_i is randomly selected for each training batch), while our model learns to model the data over the full collection of generators in each minibatch.

MGAN. The MGAN focused on the problem of mode collapse and addressed it by using multiple generators which are really the same network except for the first linear projection layer. It introduced a new loss term into the traditional GAN training: to encourage the generators to learn different parts of the data space, a lower bound on the mutual information between the generated images and the generator they came from is maximized. This is helpful because the generators share almost all weights between them and could otherwise redundantly use multiple generators to cover the same part of the space. Unlike in our work, they use a single noise source and let the single first layer of each generator learn to project it to different parts of the space before going through the same convolutions. In our work, this transformation of the noise before going to each generator is done with a separate network which gets gradient information through the generator, but is not optimized jointly with the generator weights. Moreover, like the DeLiGAN, the probability over the multiple generators was assumed to be fixed and uniform.

DMWGANPL. The DMWGANPL exclusively viewed multi-generator models as a solution for disconnected manifolds. Each generator is given the same single noise sample, and the same mutual information criterion as the MGAN (termed Q(G_i|x), the probability that x came from generator G_i) is used to ensure each generator learns a different part of the space. Unlike the previous works, they do not assume an equal probability of selecting each generator. Instead, they sample each generator G_i with probability r_i. After each step of the generator and discriminator during training, the r_i's are updated to maximize the mutual information between their distribution and Q(G_i|x).
This has the primary effect of not sampling redundant generators whose output is hard to distinguish from another generator's output, and is completely disassociated from the minimax GAN game. Each generator gets the same noise sample that takes a single parametric form (Normal), and the effect this has on the minimax game and the quality of generated images is only indirect and tangential to the objective being minimized. 3 MODEL We seek a generator G that learns to mimic P_X by mapping from a noise distribution Z ∼ P_Z. To do this, we train a discriminator D and pit them against each other in the standard GAN framework: min_G max_D E_{x∼P_X}[log D(x)] + E_{z∼P_Z}[log(1 − D(G(z)))], where G tries to minimize and D tries to maximize this objective. Motivated by the success of ensemble models, our NPGAN represents the generating function G with multiple distinct generators of the same architecture. However, rather than simply averaging equal, independent, randomly initialized models to take advantage of uncorrelated errors, we adopt insights from machine teaching and knowledge distillation. (Algorithm 1 details the calculation of the loss L_GAN optimized during training.) We use a teacher network to select a generator G_i conditioned on the particular input it sees. By learning a prior over the generators conditioned on the noise, this Noise Prior (NP) network delegates each input point to the appropriate generator that is optimally prepared to handle it. Thus, our total generator G can be decomposed into G(z) = G_i(z) with i ∼ NP(·|z). When traditionally training a GAN, the generator and discriminator alternate gradient descent steps, allowing gradient information to flow through the other network while keeping it fixed and only optimizing with respect to the given network's parameters. We extend this to our third noise prior network NP, allowing gradient information to flow through the fixed generators G_i and discriminator D while optimizing the GAN loss with respect to the parameters of NP. During training, we let both the NP network and the generators use a soft version of the GAN loss, weighting each generator output by the learned probabilities NP(G_i|z). Then, during inference, the choice of G_i is sampled from this learned prior. The full details of the training procedure are given in Algorithm 1. The NP network looks at the sample from Z and determines how best to divide it across the generators to achieve the goal of modeling P_X. In the special case where P_X is disconnected, NP can divide Z in multiple ways, such as giving each generator a continuous area of Z, or giving some generators multiple disconnected areas of Z (as we demonstrate later in the experiments). Nothing in our model formulation is specifically designed to model disconnectedness or any other specific property of P_X, however, so NP can learn to divide the sample from Z in whatever way is most appropriate for the given shape of P_X. Thus, we model a distribution over our generators rather than simply sampling them uniformly and concatenating their output. Moreover, we learn this distribution over the generators with another network which is conditioned on the input noise, allowing it to choose the shape of input each generator receives. This network does not optimize a separate loss that is a heuristic indirectly related to the GAN framework, but directly participates in the GAN minimax game. To summarize, we fully and flexibly incorporate the multiple-generator framework into a GAN such that the model can learn for itself how best to use the generators.
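To make the soft training objective concrete, here is a minimal PyTorch sketch of the generator-side step. It assumes `np_net` returns a softmax over generators and the discriminator returns raw logits; the function name and interfaces are hypothetical, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def np_generator_step(np_net, generators, discriminator, z):
    # Hypothetical soft generator-side loss: each generator's
    # non-saturating GAN loss is weighted per-sample by the learned
    # prior NP(G_i | z), so gradients reach both NP and the generators.
    probs = np_net(z)  # assumed shape (batch, n_generators), rows sum to 1
    loss = z.new_zeros(())
    for i, g in enumerate(generators):
        fake = g(z)                      # every generator sees the full batch
        logits = discriminator(fake)     # assumed raw logits, shape (batch, 1)
        per_sample = F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits), reduction='none').mean(dim=1)
        loss = loss + (probs[:, i] * per_sample).mean()
    return loss
```

At inference time one would instead sample i ∼ Categorical(np_net(z)) for each noise vector and emit generators[i](z), matching the hard sampling described above.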
This is achieved by modeling a prior over the generators that is conditioned on the noise input and optimizing it with respect to the GAN loss directly. Our first experimental dataset (Figure 2) consists of a mixture of samples from two-dimensional Gaussians such that the three Gaussians are not sampled with equal probability (7000, 5000, and 3000 points, respectively). We compare our NPGAN's ability to model this distribution to a single generator model, MGAN, DMWGANPL, and DeLiGAN. The noise distribution for each model was a 100-dimensional Uniform(−1, 1), except for the DeLiGAN, which requires samples from a Normal distribution. The generators in each case share the same architecture of three layers with 200-100-20 neurons per layer and Leaky ReLU activations. The discriminator in all cases had three layers of 1000-200-100 neurons and used minibatch discrimination. Immediately obvious is that a single generator cannot model this distribution without generating trailing points that connect the manifolds. By looking at the underlying density plots, we see that most of the generated data lies on the manifolds, in terms of proportions of points. However, when densely sampling the noise distribution, these few off-support outliers still arise. We then evaluate all of the multi-generator models with three generators, which we know to be the true underlying number of disconnected manifolds in this synthetic situation. The MGAN and DeLiGAN fail to model each manifold with a distinct generator and thus cover multiple manifolds with one of their generators and produce a trail of points in between. This failure stems from their sampling the generators with a fixed, equal probability. Since the disconnected manifolds do not have exactly the same sampling probability, their model formulations cannot effectively manage this situation. The DMWGANPL does learn a prior over the generators, but this prior only learned to eliminate redundant generators. Thus, it does learn an unequal sampling of the generators and each generator produces points that are distinct from the other generators, but does so without accurately modeling the data. The NPGAN, however, assigns each manifold to an individual generator and matches the data distribution without generating any off-manifold points. This is confirmed quantitatively in Table 1, where we measure the percentage of points each model generates that are off the manifold, which we define to be any point farther from the center of any Gaussian than the largest distance of any of the true points. There we see that the NPGAN generates no points off the manifold, while the other models are all forced to generate a trail of points connecting two of the Gaussians. We next demonstrate the improved flexibility of the NPGAN over previous models by choosing two generators, imagining ourselves in the realistic case of not knowing the true underlying number of disconnected manifolds in a particular dataset (Figure 3). In this case, all of the other models must inevitably cover two manifolds with a single generator. Since each generator receives a continuous, unaltered noise distribution, this means they produce points off the manifold (Table 1). The NPGAN alone learns to model three disconnected manifolds with two generators without generating off-manifold points in between. To investigate how the NPGAN achieves this, we learn another model with Z ∼ Uniform(−1, 1) in R², so that we can plot the noise in two dimensions.
In Figures 4a and 4c, we see the noise space for two and three generators, respectively. Notably, in both cases there are three partitions, no matter the number of generators. By learning to give one generator disconnected input, the NP network effectively models a third manifold without having a dedicated generator responsible for it. Viewing the latent space also informs us how the NPGAN can easily model non-equally sampled manifolds as well, as the size of each partition of the noise space expands or contracts to match the underlying data distribution. Figure 4: We investigate the NP network by using two-dimensional uniform noise and plotting the learned prior over the generators. The three unequally sampled Gaussians in the data can have their density matched with three generators (c-d) or by creating a discontinuity with two generators (a-b). Table 1: Scores for each model on each artificial dataset and the number of generators used (2Gen or 3Gen). For the Gaussian data, the score is the percentage of generated points off the manifold. For the parabolas, the score represents the percentage of real points without any generated point in its neighborhood. Our next dataset explores the case where the underlying data distribution is complex but not necessarily disconnected. The other models have design choices to specifically target distinct areas of the data space with each generator. While single generator networks have difficulty with disconnected parts of the data space, there are many other ways the data distribution can be difficult for a single generator to model that have nothing to do with disconnectedness. Since the NPGAN gives the NP network full flexibility to shape the input for each generator however it needs to in order to beat the discriminator, it can aid in generating complex shapes of any kind. To investigate this we create a distribution of intersecting parabolas and test it with two and three generators for each model (Figures 5 and 6). As before, this complex shape is too difficult for a single generator to effectively model. In the DeLiGAN, the equal probability Gaussians trained alternatingly are unable to coordinate with each other and capture any of the tails of the two parabolas. The MGAN and DMWGANPL have the mutual information penalty that pushes the generated points for different generators away from each other. This not only keeps them from learning to generate intersecting shapes, but it pushes the optimization away from any solution where it requires a complex function to know which generator a particular point came from. The NPGAN, on the other hand, effectively models the data distribution with just two generators while finding a different but equally effective way of modeling it with three generators. As opposed to the previous Gaussian example, the problem here is not generating points off the manifold but leaving parts of the true manifold unmodeled. Thus, to quantitatively evaluate this dataset, we calculate the percentage of real points that do not have a generated point within an ε-ball centered on it (ε = 0.001). Table 1 confirms that the other models leave significant parts of the tails unmodeled, representing as much as 32% of the data in the most extreme case. The NPGAN's low score with both two and three generators corroborates that it can not only help in modeling complex distributions, but that the flexible formulation makes it robust to the specific number of generators chosen.
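The parabola score just described (the fraction of real points with no generated point inside an ε-ball) is straightforward to compute. The following NumPy/SciPy sketch is a hypothetical helper, since the paper does not publish its exact evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def parabola_score(real_pts, gen_pts, eps=1e-3):
    # Fraction of real points with no generated point inside an
    # epsilon-ball around them (higher = more of the manifold missed).
    dists, _ = cKDTree(gen_pts).query(real_pts, k=1)
    return float(np.mean(dists > eps))
```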
To test the NPGAN's ability to model disconnected, unevenly sampled manifolds on real images, we combine two distinct datasets. We take 1500 images randomly from the CelebA dataset and combine them with a dataset of 6000 photographs. To effectively match this data distribution with two generators, the models will have to either learn to sample generators at a differential rate, or have one generator cover a discontinuity (or both). The images were resized to 32x32, and all models use a DCGAN architecture, with three convolutional transpose layers in the generators and three convolutional layers in the discriminator. Each convolution used stride length two, kernel size three, batch normalization on all layers except the first layer of the discriminator, and ReLU activations in the generator with Leaky ReLU activations in the discriminator. Training was performed on minibatches of 32 with an Adam optimizer with learning rate 0.0001. In the MGAN and DeLiGAN, the generators are all the same network except for the initial linear projection of the noise (or the adding of the generator-specific mean and the multiplying of the generator-specific standard deviation in the DeLiGAN). In our NPGAN and the DMWGANPL, the generators do not share weights. To compensate for the increased capacity that this would otherwise provide, we decrease the number of filters learned per generator to keep the total number of parameters equal across all models (to within 1%). Figure 7 shows randomly selected images from each model, and there we can see the consequences of the MGAN and DeLiGAN sampling generators at a fixed rate and giving each generator the same continuous noise. In each case, one of the generators effectively generates photos, but the other generator gets caught in between generating photos and CelebA images, producing many blurry images. The DMWGANPL samples generators at a different rate, but again did so ineffectively: one generator makes photos, but the other generator makes both CelebA images and some photos that are not being made by the other generator. Even though it is an imperfect measure of capturing outliers, the FID scores reported in Table 2 show that the imbalance affects their ability to model the underlying dataset, too. To add an uncertainty measure to this score, we average the last three model checkpoints and report the mean and standard deviation. In Figure 7, we see the NPGAN learns to sample more from the generator that exclusively makes photos, while also using its ability to create discontinuity in its input to allow the other generator to make both CelebA images and a few realistic photos. Only the NPGAN is able to effectively model this disconnected, unevenly sampled dataset. Table 2: FID scores for all models. In this section, we explore how connectedness affects the results of the models for image datasets. The previous works on multiple generators have emphasized disconnectedness, but we show here that the NPGAN outperforms the alternatives even without disconnected data. The effects of other properties, like class/mode imbalance, dominate the results. We test this notion by modifying the dataset from the previous section to create a connection between the Face and Photo images. To do this, we randomly choose images in each dataset and perform linear interpolation between them with a mixing coefficient α chosen from a Uniform distribution. We add these interpolations to the Face+Photo dataset to make a ConnectedFace+Photo dataset.
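The ConnectedFace+Photo construction can be sketched in a few lines; the array layout and helper name below are assumptions, not the authors' code.

```python
import numpy as np

def make_connected(faces, photos, n_bridge, rng=np.random):
    # Linear interpolations between random face/photo pairs with
    # alpha ~ Uniform(0, 1) form the "barbell" bridge described above.
    # faces, photos: float arrays of shape (n, H, W, 3) in [0, 1].
    fi = rng.randint(0, len(faces), n_bridge)
    pi = rng.randint(0, len(photos), n_bridge)
    alpha = rng.uniform(0, 1, size=(n_bridge, 1, 1, 1))
    bridge = alpha * faces[fi] + (1 - alpha) * photos[pi]
    return np.concatenate([faces, photos, bridge], axis=0)
```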
Conceptually, ConnectedFace+Photo takes the shape of a "barbell" with a narrow trail connecting two areas of density in data space. We then repeat the experiment of the previous section and report the results. Notably, the quantitative results remain essentially the same. This can be explained with a couple of observations. First, as in the artificial cases, the other models have difficulty dealing with density imbalances, and this difficulty dominates the effects of whether the data is disconnected or not. Second, as previously discussed, the FID scores in Table 2 are affected most strongly by model performance where most of the data density is, as opposed to a few bad trailing outlier points. Nonetheless, the presence of wrong off-manifold outliers like those in Figure 7b-d could be severely problematic in contexts with a higher sensitivity to outliers than the FID score captures. Table 3: Outlier manifold distances for all models on FaceBed and CIFAR. Next, we explore the NPGAN's ability to model the canonical CIFAR10 dataset. Unlike in the previous case where the disconnectedness in the data was drastic enough to measurably affect sample quality as measured by FID, here our NPGAN produced essentially the same FID as our code with one generator (26.4 vs. 25.8). However, as previously discussed, FID is not a good measure of whether a model produced outliers or not, since generating 1% bad samples off the manifold will go unnoticed in FID score if coupled with a slight improvement of sample quality on the other 99% of the samples. With that in mind, we introduce a new measure of how bad a model's worst samples are: outlier manifold distance. Unlike FID, our outlier manifold distance is sensitive to a model generating outliers, irrespective of how good its best samples are. We calculate this distance by finding the average distance of the 1% furthest generated points from the real data manifold, as measured by the distance to the closest real point in the last feature map of the Inception network for each generated point. The outlier manifold distance for each model is then the average of the 1% largest distances (the 1% "most anomalous" points). In Table 3, we see that NPGAN has the best outlier manifold distance of all models. As a sanity check, we also calculate it on the previous FaceBed data, and show that it confirms quantitatively what we saw qualitatively and with FID score: other models produce outliers that are worse than NPGAN's worst samples. For space reasons, we defer a more comprehensive investigation into the NPGAN's use of multiple generators on CIFAR to the supplement. We introduced a novel formulation of multiple-generator models with a prior over the generators, conditioned on the noise input. This results in improved expressivity and flexibility by shaping each generator's input specifically to best perform that generator's task. In this section, we elaborate on the CIFAR experiment from the main text. We use a more complicated architecture here with spectral normalization, self-attention, and ResNet connections, per the best-achieving models to date. We experimented using two, three, four, and five generators in the NPGAN architecture. Figure A.1 shows images generated by the NPGAN with each number of generators. With just two generators, each one creates a wide diversity of images. On the other hand, when increasing the number of generators, each one becomes more homogeneous.
For example, in the two generator model, one of them creates dogs, cars, and frogs, while in the five-generator model each generator has specialized to just birds in the sky or just cars. Qualitatively, the noise prior is obviously learning a sensible split of the data across generators and each generator is outputting quality images. However, when comparing the two-generator, three-generator, four-generator, and five-generator versions of NPGAN to the baseline one-generator version of the same model, we do not observe any improvement in FID score. This is unsurprising for the reasons mentioned in the main text. The FID scores treat all points equally across a generated dataset, and thus will be most strongly influenced by where the most points are. A relatively small number of outliers barely register by this metric. Even current state-of-the-art image generation on CIFAR10 is nowhere close to perfectly modeling the data. When GANs are able to perfectly model the dataset except for trailing outliers between modes, we expect the NPGAN's improvements to be visible in FID scores on this dataset. Until then, the detection of a few bad outliers needs to be done with other evaluation techniques on this dataset. With this caveat, we note that we could achieve an FID score of 26.4 with our NPGAN, compared to 25.8 with our code and one generator, which demonstrates that the NPGAN can scale to state-of-the-art architectures without suffering in quality. The NPGAN is robust to a connected dataset while simultaneously being able to automatically solve the problems of a disconnected dataset. Furthermore, this motivated the creation of our new outlier manifold distance metric, designed to be more sensitive to the creation of outliers than the FID score. Using this metric, we see NPGAN outperform all other models. Relation to machine teaching: In the machine teaching literature, an analogous question is posed: if a teacher network knows the function its student network is supposed to learn, what are the optimal training points to teach it as efficiently as possible? For students following a Bayesian learning approach, this is thought of as finding the best data points D to make the desired model θ*, or minimizing −log p(θ*|D) with respect to D. In our framework, the teacher network NP does not know the function its students should learn ahead-of-time, because this target is changing continually as the discriminator improves simultaneously. Nevertheless, the NP network is still learning to form the optimal curriculum for each individual student such that the collection of students best models the target function given the current parameters of the discriminator. Relation to knowledge distillation: Our NP network also has links to the field of knowledge distillation, where a teacher network is trying to compress or distill the knowledge it has about a particular distribution into one or several smaller models. In the case of multiple smaller models, the teacher can be thought of as a generalist whose job it is to find the right specialist for a specific problem.
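Returning to evaluation, the outlier manifold distance defined in the main text (the mean nearest-real-point distance of the 1% most anomalous generated points, in Inception feature space) could be computed as in this sketch. The feature extraction is assumed to have happened upstream; the helper name is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def outlier_manifold_distance(gen_feats, real_feats, frac=0.01):
    # Distance of each generated point to its nearest real point, in a
    # pre-extracted Inception feature space; return the mean over the
    # `frac` most anomalous points (the 1% furthest, by default).
    dists, _ = cKDTree(real_feats).query(gen_feats, k=1)
    k = max(1, int(frac * len(dists)))
    return float(np.sort(dists)[-k:].mean())
```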
A multi-generator GAN framework with an additional network to learn a prior over the input noise.
1,419
scitldr
Recent advances in Generative Adversarial Networks (GANs) – in architectural design, training strategies, and empirical tricks – have led to nearly photorealistic samples on large-scale datasets such as ImageNet. In fact, for one model in particular, BigGAN, metrics such as Inception Score or Frechet Inception Distance nearly match those of the dataset, suggesting that these models are close to matching the distribution of the training set. Given the quality of these models, it is worth understanding to what extent these samples can be used for data augmentation, a task expressed as a long-term goal of the GAN research project. To that end, we train ResNet-50 classifiers using either purely BigGAN images or mixtures of ImageNet and BigGAN images, and test on the ImageNet validation set. Our preliminary results suggest both a measured view of state-of-the-art GAN quality and highlight limitations of current metrics. Using only BigGAN images, we find that Top-1 and Top-5 error increased by 120% and 384%, respectively, and furthermore, adding more BigGAN data to the ImageNet training set at best only marginally improves classifier performance. Finally, we find that neither Inception Score, nor FID, nor combinations thereof are predictive of classification accuracy. These results suggest that as GANs are beginning to be deployed in downstream tasks, we should create metrics that better measure downstream task performance. We propose classification performance as one such metric that, in addition to assessing per-class sample quality, is more suited to such downstream tasks. Recent years have witnessed a marked improvement in sample quality in Deep Generative Models. One model class in particular, Generative Adversarial Networks BID7, has begun to generate nearly photorealistic images. While applications of adversarial training have found their way into image translation BID16 and style transfer BID5, a typically discussed goal for such models, and in particular conditional ones, is data augmentation. Such models have enjoyed limited success in these tasks thus far for large-scale datasets such as ImageNet, likely because existing models did not generate sufficiently high-quality samples. Recently, however, BigGANs BID4 have generated photorealistic images of ImageNet data up to 512×512 resolution, and moreover, achieve Inception Scores and Frechet Inception Distances similar to the dataset on which they were trained. Such results suggest, though do not prove, that BigGANs are indeed capturing the data distribution. If this were true, then it seems plausible that these samples can be used in downstream tasks, especially in situations in which limited labelled data are available. In this work, we test the rather simple hypothesis that BigGANs are indeed useful for data augmentation, or more drastically, data replacement of the original data distribution. To that end, we use BigGANs for two simple experiments. First, we train ImageNet classifiers, replacing the original training set with one produced by BigGAN. Second, we augment the original ImageNet training set with samples from BigGAN. Our working hypothesis is that if BigGANs were indeed capturing the data distribution, then we could use those samples, instead of or in addition to the original training set, to improve performance on classification. That it does not is the central finding of this work: on replacement, Top-5 classification error increases dramatically. Though a negative result, a more positive byproduct of the work is the introduction of a new metric that can better identify issues with GAN and other generative models.
In particular, training a classifier allows us to identify, for conditional generative models, which classes are particularly poor, either due to low quality samples or underrepresentation of dataset diversity. 2.1 SETUP Our experiments are rather simple: we use BigGAN-deep (further denoted as BigGAN) models to either replace or augment the ImageNet training set, train an image classifier, and compare performance on the ImageNet validation set. In the data replacement experiments, we replace the ImageNet training set with one from BigGAN-deep, and each example from the original training set is replaced with a model sample from the same class. In the augmentation experiments, we add to the ImageNet training set 25%, 50%, or 100% more data from BigGAN. Moreover, since the truncation trick - which resamples dimensions that are outside the mean of the distribution - seems to trade off quality for diversity, we perform experiments for a sweep of truncation parameters: 0.2, 0.42, 0.5, 0.8, 1.0, 1.5, and 2.0. In addition, we compare performance on replacement and augmentation to two traditional GAN metrics: Inception Score BID14 and Frechet Inception Distance (FID) BID10, as these metrics are the current gold standard for GAN comparison. Both rely on a feature space from a classifier trained on ImageNet, suggesting that if metrics are useful at predicting performance on a downstream task, it would indeed be this one. We used a ResNet-50 BID9 classifier for our models, with single-crop evaluation. The classifier is trained for 90 epochs using TensorFlow's momentum optimizer, a learning rate schedule linearly increasing from 0.0 to 0.4 for the first 5 epochs, and decreased by a factor of 10 at epochs 30, 60, and 80. It mirrors the 8,192 batch setup of BID8 with gradual warmup. TAB0 shows the performance of classifiers trained on BigGAN datasets compared to the real dataset. At every truncation level, ResNet-50 classifiers trained on BigGAN samples generalize substantially worse to real images than the classifier trained on real data. To better understand why this has occurred, we broke down the performance by class for the best-performing truncation level: 1.5. As shown in the left pane of FIG0, nearly every class suffers a drop in performance compared to the original dataset. Performance on six classes - partridge, red fox, jaguar/panther, squirrel monkey, African elephant, and strawberry - did improve over the original dataset, though the improvement for those classes was marginal. The right pane of FIG0 shows the two best and two worst performing categories, as measured by the difference in classification performance. Notably, for the two worst performing categories and two others - balloon, paddlewheel, pencil sharpener, and spatula - classification accuracy was 0% on the validation set. That said, at the best truncation levels, Top-5 Error is roughly 35%, suggesting that BigGANs are learning nontrivial distributions. Given the performance of BigGAN in the replacement experiments, one should not necessarily expect improved classifier accuracy by augmenting the ImageNet training set with BigGAN samples. Figure 2 illustrates the performance of the classifiers when we increase the amount of BigGAN training data. Perhaps somewhat surprisingly, BigGAN models that sample from lower truncation values, and have lower sample diversity, are able to perform better on data augmentation compared to those models that performed the best on the data replacement experiment.
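The truncation trick referenced throughout this sweep resamples out-of-range noise entries. A minimal sketch follows, assuming the common BigGAN convention of truncating a standard Normal at a magnitude threshold; the helper name is hypothetical.

```python
import torch

def truncated_noise(shape, threshold):
    # Resample standard-Normal entries whose magnitude exceeds the
    # threshold; smaller thresholds trade diversity for sample quality.
    z = torch.randn(shape)
    out_of_range = z.abs() > threshold
    while out_of_range.any():
        z[out_of_range] = torch.randn(int(out_of_range.sum()))
        out_of_range = z.abs() > threshold
    return z
```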
In fact, for some of the lowest truncation values, we found a modest improvement in classification performance: roughly 3% relative improvement on Top-1 Error (but at the cost of 1.5 times the amount of training time). Finally, Inception Score and FID had very little correlation with performance on either the replacement or augmentation experiments, suggesting that alternative metrics will be needed when we turn our attention to downstream tasks. For our replacement experiments, the correlation coefficient between Top-1 error and FID is 0.16, and Inception Score 0.86, the latter incorrectly suggesting that improved Inception Score is highly correlated with increased error. Moreover, the best-performing methods have rather poor Inception Scores and FIDs. That models that perform poorly on Inception Score and Frechet Inception Distance also perform poorly on classification is no surprise; that models that perform well on Inception Score and FID perform poorly on classification suggests that alternative metrics are needed. One can easily diagnose the issue with Inception Score: as BID2 noted, Inception Score does not account for intra-class diversity, and a training set with little intra-class diversity may make the classifier fail to generalize to a more diverse test set. FID should better account for this lack of diversity, at least grossly, as the metric, calculated as FID(P_x, P_y) = ||µ_x − µ_y||² + tr(Σ_x + Σ_y − 2(Σ_x Σ_y)^{1/2}), compares the covariance matrices of the data and model distributions. By comparison, per-class classification error offers a finer measure of model performance, as it provides us a per-class metric to identify which classes have better or worse performance. While in theory one could calculate a per-class FID, FID is known to suffer from high bias BID3 for a low number of samples, likely making the per-class estimates unreliable. The results on augmentation highlight different desiderata for samples that are added to the dataset rather than replaced. Clearly, the samples added should be sufficiently different from the data to allow the classifier to better generalize, and yet, poorer sample quality may lead to poorer generalization compared to the original dataset. This may be the reason why extending the dataset with samples generated from lower truncation value noise - which are higher-quality, but less diverse - performs better on augmentation than replacement. Furthermore, this may also explain why Inception Score, Frechet Inception Distance, and data replacement classification error are not predictive of data augmentation classification performance. This work encompasses two lines of work: GANs for data augmentation and improved evaluation metrics for GANs. For data augmentation, BID0 proposed an image-conditioned model for augmentation, and found improved results on smaller datasets. BID6 used a GAN to generate synthetic training data of size 64×64×1 of images of liver lesions. For evaluation metrics, BID15 noted there is difficulty in designing evaluation metrics that will illustrate the general performance of the model. Despite this finding, those interested in measuring the quality of implicit generative models have proposed practical metrics to compare sample quality from different models, which has led to the introduction of Inception Score and FID. BID12 recommends the use of classifier two-sample tests on GAN samples as a metric. Other measures attempt to determine other properties of generative models.
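For reference, the FID formula quoted above maps directly to a few lines of NumPy/SciPy, given pre-extracted Inception features for real and generated samples. This is a sketch, not the evaluation code used in the paper.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    # FID(P_x, P_y) = ||mu_x - mu_y||^2 + tr(S_x + S_y - 2 (S_x S_y)^{1/2}),
    # computed from two (n, d) arrays of Inception features.
    mu_x, mu_y = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s_x = np.cov(feats_real, rowvar=False)
    s_y = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s_x @ s_y).real  # discard tiny imaginary parts
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(s_x + s_y - 2 * covmean))
```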
BID13 constructs synthetic datasets for which precision and recall can be computed approximately and compares Inception Score and FID to changes in precision and recall. Geometry Score BID11 constructs approximate manifolds from data and samples, and uses them on GAN samples to determine whether there was mode collapse. BID1 attempt to determine the support size of GANs by using a Birthday Paradox test, though it requires a human to identify two nearly-identical samples. In this work, we investigated to what extent BigGAN, the state-of-the-art GAN on ImageNet, captures the data distribution, and to what extent those samples can be used for data augmentation. Our results demonstrate that, despite excellent scores on traditional GAN metrics such as Inception Score and Frechet Inception Distance, current state-of-the-art GAN models do not capture the distribution for large-scale datasets such as ImageNet. Moreover, we found only a modest improvement in classifier performance when the training set was augmented with BigGAN samples. Finally, through the classifier metrics outlined in this work, we can identify on which classes BigGAN performed well, and on which ones researchers should focus their future efforts. An open question in this work is how to create metrics predictive of performance on downstream tasks. Even for the classifier metric, results on data replacement did not necessarily correlate with those on data augmentation. Better evaluation metrics will help us understand to what extent GANs, or any other Deep Generative Models, can be used for downstream tasks.
BigGANs do not capture the ImageNet data distributions and are only modestly successful for data augmentation.
1,420
scitldr
Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day. This wealth of data can help to learn models that can improve the user experience on each device. However, the scale and heterogeneity of federated data presents new challenges in research areas such as federated learning, meta-learning, and multi-task learning. As the machine learning community begins to tackle these challenges, we are at a critical time to ensure that developments made in these areas are grounded with realistic benchmarks. To this end, we propose Leaf, a modular benchmarking framework for learning in federated settings. Leaf includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments. With data increasingly being generated on federated networks of remote devices, there is growing interest in empowering on-device applications with models that make use of such data BID25. Learning on data generated in federated networks, however, introduces several new obstacles: Statistical: Data is generated on each device in a heterogeneous manner, with each device associated with a different (though perhaps related) underlying data generating distribution. Moreover, the number of data points typically varies significantly across devices. Systems: The number of devices in federated scenarios is typically orders of magnitude larger than the number of nodes in a typical distributed setting, such as datacenter computing. In addition, each device may have significant constraints in terms of storage, computational, and communication capacities. Furthermore, these capacities may also differ across devices due to variability in hardware, network connection, and power. Thus, federated settings may suffer from communication bottlenecks that dwarf those encountered in traditional distributed datacenter settings, and may require faster on-device inference. Privacy and Security: Finally, the sensitive nature of personally-generated data requires methods that operate on federated data to balance privacy and security concerns with more traditional considerations such as statistical accuracy, scalability, and efficiency BID3. Recent works have proposed diverse ways of dealing with these challenges, but many of these efforts fall short when it comes to their experimental evaluation. As an example, consider the federated learning paradigm, which focuses on training models directly on federated networks BID25 BID23. Experimental works focused on federated learning broadly utilize three types of datasets: (1) datasets that do not provide a realistic model of a federated scenario and yet are commonly used, e.g., artificial partitions of MNIST, MNIST-fashion or CIFAR-10 (BID10 BID7 BID2 BID9 BID28 BID30); (2) realistic but proprietary federated datasets, e.g., data from an unnamed social network, crowdsourced voice commands in BID15, and proprietary data by Huawei in BID4; and (3) realistic federated datasets that are derived from publicly available data, but which are not straightforward to reproduce, e.g., FaceScrub in BID20, Shakespeare, and Reddit in BID10 BID18 BID2. Along the same lines as federated learning, meta-learning is another learning paradigm that could use more realistic benchmarks.
The paradigm is a natural fit for federated settings, as the different devices can be easily interpreted as meta-learning tasks BID4. However, the artificially generated tasks considered in popular benchmarks such as Omniglot BID12 BID6 BID29 BID26 and miniImageNet BID24 BID6 BID29 BID26 fail to challenge the current approaches in ways that real-world problems would. More recently, BID27 proposed Meta-Dataset as a more realistic meta-learning benchmark, but tasks still have no real-world interpretation. All of these datasets could thus be categorized as the first type mentioned above (unrealistic yet popular). As a final example, LEAF's datasets can allow researchers and practitioners to test multi-task learning (MTL) methods in regimes with large numbers of tasks and samples, contrary to traditional MTL datasets, which have at most 200 tasks each (e.g., the popular Landmine Detection BID33 BID21 BID32 BID25, Computer Survey BID1 BID0 BID11, and Inner London Education Authority School BID21 BID14 BID0 BID1 BID11 datasets). In this work, we aim to bridge the gap between artificial datasets that are popular and accessible for benchmarking, and those that realistically capture the characteristics of a federated scenario but that, so far, have been either proprietary or difficult to process. Moreover, beyond establishing a suite of federated datasets, we propose a clear methodology for evaluating methods and reproducing results. To this end, we present LEAF, a modular benchmarking framework geared towards learning in massively distributed federated networks of remote devices. LEAF is an open-source benchmarking framework for federated settings. It consists of a suite of open-source datasets, an array of statistical and systems metrics, and a set of reference implementations. As shown in Figure 1, LEAF's modular design allows these three components to be easily incorporated into diverse experimental pipelines. We now detail LEAF's core components. Figure 1. LEAF modules and their connections. The Datasets module preprocesses the data and transforms it into a standardized JSON format, which can integrate into an arbitrary ML pipeline. LEAF's Reference Implementations module is a growing repository of common methods used in the federated setting, with each implementation producing a log of various different statistical and systems metrics. This log (or any log generated in an appropriate format) can be used to aggregate and analyze these metrics in various ways. LEAF performs this analysis through its Metrics module. We have curated a suite of realistic federated datasets for LEAF. We focus on datasets where the data has a natural keyed generation process (where each key refers to a particular device); the data is generated from networks of thousands to millions of devices; and the number of data points is skewed across devices. Currently, LEAF consists of three datasets: • Federated Extended MNIST (FEMNIST), which serves as a similar (and yet more challenging) benchmark to the popular MNIST dataset. It is built by partitioning the data in Extended MNIST BID5 based on the writer of the digit/character. • Sentiment140 BID8. We provide statistics on these datasets in TAB1. In LEAF, we provide all necessary pre-processing scripts for each dataset, as well as small/full versions for prototyping and final testing. Moving forward, we plan to add datasets from different domains (e.g. audio, video) and to increase the range of machine learning tasks (e.g.
text to speech, translation, compression, etc.). Metrics: Rigorous evaluation metrics are required to appropriately assess how a learning solution behaves in federated scenarios. Currently, LEAF establishes an initial set of metrics chosen specifically for this purpose. For example, we introduce metrics that better capture the entire distribution of performance across devices: performance at the 10th and 90th percentiles and performance stratified by natural hierarchies in the data (e.g. play in the case of the Shakespeare dataset). We also introduce metrics that account for the amount of computing resources needed from the edge devices in terms of number of FLOPS and number of bytes downloaded/uploaded. Finally, LEAF also recognizes the importance of specifying how the accuracy is weighted across devices, e.g., whether every device is equally important, or every data point equally important (implying that power users/devices get preferential treatment). Notably, considering stratified systems and accuracy metrics is particularly important in order to evaluate whether a method will systematically exclude groups of users (e.g., because they have lower end devices) and/or will underperform for segments of the population (e.g., because they produce less data). Reference implementations: In order to facilitate reproducibility, LEAF also contains a set of reference implementations of algorithms geared towards federated scenarios. Currently, this set is limited to the federated learning paradigm, and in particular includes reference implementations of minibatch SGD, FedAvg and Mocha BID25. Moving forward we aim to equip LEAF with implementations for additional methods and paradigms with the help of the broader research community. We now show a glimpse of LEAF in action. In particular, we highlight three of LEAF's characteristics: LEAF enables reproducible science: To demonstrate the reproducibility enabled via LEAF, we focus on qualitatively reproducing the results obtained on the Shakespeare dataset for a next character prediction task. In particular, it was noted that for this particular dataset, the FedAvg method surprisingly diverges as the number of local epochs increases. This is therefore a critical setting to understand before deploying methods such as FedAvg. To show how LEAF allows for rapid prototyping of this scenario, we use the reference FedAvg implementation and subsample 118 devices (around 5% of the total) in our Shakespeare data (which can be easily done through our framework). Results are shown in Figure 2, where we indeed see similar divergence behavior in terms of the training loss as we increase the number of epochs. LEAF provides granular metrics: As illustrated in FIG0 and FIG1, our proposed systems and statistical metrics are important to consider when serving multiple clients simultaneously. For statistical metrics, in FIG0 we show the effect of varying the minimum number of samples per user in Sentiment140 (which we denote as k). We see that, while median performance degrades only slightly with data-deficient users (i.e., k = 3), the 25th percentile (bottom of box) degrades dramatically. Meanwhile, for systems metrics, we run minibatch SGD and FedAvg for FEMNIST and calculate the systems budget needed to reach an accuracy threshold of 0.75 in FIG1. We characterize the budget in terms of total number of FLOPS across all devices and total number of bytes uploaded to the network. Figure 2. Convergence behavior of FedAvg on a subsample of the Shakespeare dataset.
We use a learning rate of 0.8 and 10 devices per round for all experiments. We are able to achieve test accuracy comparable to the results obtained in prior work. We also qualitatively replicate the divergence in training loss that is observed for large numbers of local epochs (E). Our results demonstrate the improved systems profile of FedAvg when it comes to the communication vs. local computation trade-off, though we note that in general methods may vary across these two dimensions, and it is thus important to consider both aspects depending on the problem at hand. To demonstrate LEAF's modularity, we incorporate its Datasets module into two different experimental pipelines besides FedAvg (which has been our focus so far). In particular, we wish to validate the hypothesis that personalization strategies (be it MTL or meta-learning) outperform competing approaches in statistically heterogeneous scenarios. 1. Our first pipeline explores our hypothesis in regimes where each device holds little data. We use three different kinds of models: • A global SVM which is trained on all of the devices' data at once (Global-SVM). • A local SVM per device that is trained solely on the device's data (Local-SVM). • The same SVM model but trained in the multitask setting presented in BID25 (MTL-SVM). 2. Our second pipeline corroborates the hypothesis in regimes with no restrictions on the amount of data per device. To do this, we run the popular algorithm Reptile BID22 (which can be shown to be a re-weighted, fine-tuned version of FedAvg) over FEMNIST and compare it against FedAvg when trained under similar conditions. Results for both sets of experiments are presented in TAB2. For the first set of experiments, we re-cast FEMNIST as a binary classification task (digits vs. characters) and discard devices with more than 192 samples. For the second set, we run each algorithm for 1,000 rounds, use 5 clients per round, a local learning rate of 10^-3, a training mini-batch size of 10 for 5 mini-batches, and evaluate on an unseen set of test devices. Furthermore, for Reptile we use a linearly decaying meta-learning rate that goes from 2 to 0, and evaluate by fine-tuning each test device for 50 mini-batches of size 5. It is clear that the personalized strategies outperform the competing approaches.
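As an illustration of the device-stratified metrics used above (percentile accuracy across devices, and device- vs. sample-weighted averaging), here is a small sketch. It mirrors the metric definitions rather than LEAF's actual Metrics module; the function name is hypothetical.

```python
import numpy as np

def stratified_accuracy(device_acc, device_n, weight_by="device"):
    # device_acc: accuracy per device; device_n: sample count per device.
    # "device" weights every device equally; "sample" weights every data
    # point equally (favouring power users, as discussed in the text).
    acc = np.asarray(device_acc, dtype=float)
    n = np.asarray(device_n, dtype=float)
    weights = np.ones_like(n) if weight_by == "device" else n
    return {
        "weighted_mean": float(np.average(acc, weights=weights)),
        "p10": float(np.percentile(acc, 10)),
        "median": float(np.percentile(acc, 50)),
        "p90": float(np.percentile(acc, 90)),
    }
```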
We present Leaf, a modular benchmarking framework for learning in federated data, with applications to learning paradigms such as federated learning, meta-learning, and multi-task learning.
1,421
scitldr
Understanding object motion is one of the core problems in computer vision. It requires segmenting and tracking objects over time. Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time. We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation. Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time. Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding. Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset. Explicitly predicting the motion of actors in a dynamic scene is a critical component of intelligent systems. Humans can seamlessly track moving objects in their environment by using cues such as appearance, relative distance, and temporal consistency. The world is rarely experienced in a static way: motion (or its absence) provides essential information to understand a scene. Similarly, incorporating past context through a temporal model is essential to segment and track objects consistently over time and through occlusions. From a computer vision perspective, understanding object motion involves segmenting instances, estimating depth, and tracking instances over time. Instance segmentation, which requires segmenting individual objects at the pixel level, has gained traction with challenging datasets such as COCO, Cityscapes and Mapillary Vistas. Such datasets, which only contain single-frame annotations, do not allow the training of video models with temporally consistent instance segmentation, nor do they allow self-supervised monocular depth estimation, which necessitates consecutive frames. Yet, navigating in the real world involves a three-dimensional understanding of the other agents with consistent instance segmentation and depth over time. More recently, a new dataset containing video instance segmentation annotations was released: the KITTI Multi-Object and Tracking Dataset. This dataset contains pixel-level instance segmentation on more than 8,000 video frames, which effectively enables the training of video instance segmentation models. In this work, we propose a new spatio-temporal embedding loss that learns to map video-pixels to a high-dimensional space. This space encourages video-pixels of the same instance to be close together and distinct from other instances. We show that this spatio-temporal embedding loss, jointly with a deep temporal convolutional neural network and self-supervised depth loss, produces consistent instance segmentations over time. The embedding accumulates temporal context thanks to the temporal model, as otherwise, the loss would only be based on appearance. The temporal model is a causal 3D convolutional network, which is only conditioned on past frames to predict the current embedding and is capable of real-time operation. Finally, we show that predicting depth improves the quality of the embedding as knowing the distance to an instance constrains its future location, given that objects move smoothly in space.
To summarise our novel contributions, we: • introduce a new spatio-temporal embedding loss for video instance segmentation, • show that having a temporal model improves embedding consistency over time, • improve how the embedding disambiguates objects with a self-supervised monocular depth loss, • handle occlusions, contrary to previous IoU-based instance correspondence. We demonstrate the efficacy of our method by advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset. An example of our model's output is given in Figure 1. Two main approaches exist for single-image instance segmentation: region-proposal based and embedding based. The former method relies on a region of interest proposal network that first predicts bounding boxes then estimates the mask of the object inside that bounding box. With such a strategy, one pixel can belong to the overlap of many bounding boxes, and it is largely unclear how correspondence between pixels can be learned. We instead favour the embedding based method and extend it to space and time. Capturing the inter-relations of objects using multi-modal cues (appearance, motion, interaction) is difficult, as showcased by the Multi-Object Tracking (MOT) challenge. MOT's goal is to infer the trajectories of objects and covers a wide range of applications such as biology (birds, fish), robot navigation, and autonomous driving. Prior works learned a representation of objects that follows the "tracking-by-detection" paradigm, where the goal is to connect detections across video frames by finding the optimal assignment of a graph-based tracking formulation (i.e. each detection is a node, and an edge is the similarity score between two detections). Collecting large-scale tracking datasets is necessary to train deep networks, but that process is expensive and time-consuming. Video colourisation has also been introduced as a self-supervised method to learn visual tracking: the colourisation problem of a grayscale image is constrained by learning to copy colors from a reference frame, with the pointing mechanism of the model acting as a tracker once it is fully trained. The colourisation model is more robust than optical flow based models, especially in complex natural scenes with fast motion, occlusion and dynamic backgrounds. The task of multi-object tracking was later extended to multi-object tracking and segmentation (MOTS), by considering instance segmentations as opposed to 2D bounding boxes. Motivated by the saturation of bounding box level tracking evaluations, the authors introduced the KITTI MOTS dataset, which contains pixel-level instance segmentation on more than 8,000 video frames. They also trained a model which extends Mask R-CNN by incorporating 3D convolutions to integrate temporal information, and the addition of an association head that produces an association vector for each detection, inspired by person re-identification. The temporal component of their model, however, is fairly shallow (one or two layers), and is not causal, as future frames are used to segment past frames. More recently, a large-scale dataset with video instance segmentation labels was collected from short YouTube videos (3-6 seconds), and a densely annotated synthetic dataset with complex occlusions was introduced to learn how to estimate the spatial extent of objects beyond what is visible. Contrary to methods relying on region proposals, embedding-based instance segmentation methods map all pixels of a given instance to a high dimensional space with desirable properties.
This overcomes several limitations of region-proposal methods. Firstly, two objects may share the same bounding box, and in that situation it is ambiguous which object mask the model should segment. Secondly, pixels can belong to two separate objects, as each prediction is done independently. Finally, the number of detected objects is limited by the fixed number of proposals of the network. We propose a spatio-temporal embedding loss that extends previous single-image instance embedding losses to video: each pixel belonging to a given instance in space and time is transformed into a unique location in a high dimensional space, using cues such as appearance, context and motion. More concretely, three terms are used in the loss: the attraction loss to ensure pixels from the same instance are close to each other, the repulsion loss to ensure two separate instances are far from each other, and a regularisation term so that instance centers do not diverge too much from the origin. Let us denote the number of instances, K, and the subset of indices, S_k, corresponding to all the pixels belonging to instance k in the video. ∀i ∈ S_k, y_i is the embedding for pixel position i and µ_k = (1/|S_k|) Σ_{i∈S_k} y_i is the mean embedding of instance k. The three terms are: L_attraction = (1/K) Σ_k (1/|S_k|) Σ_{i∈S_k} max(0, ||µ_k − y_i||_2 − ρ_a)², L_repulsion = (1/(K(K−1))) Σ_{k1≠k2} max(0, 2ρ_r − ||µ_{k1} − µ_{k2}||_2)², and L_reg = (1/K) Σ_k ||µ_k||_2. Here ρ_a denotes the attraction radius within a cluster: we want the embedding to be within ρ_a of the centroid. 2ρ_r denotes the repulsion radius: we want the centroids of two different clusters to be at least 2ρ_r apart. Therefore, if we set ρ_r > 2ρ_a, a given pixel embedding of a cluster will be closer to all the pixel embeddings of its cluster than to any other pixel embedding. The spatio-temporal embedding loss is the weighted sum of the attraction, repulsion and regularisation losses: L_embedding = λ_1 L_attraction + λ_2 L_repulsion + λ_3 L_reg. During inference, each pixel of the considered frame is assigned to an instance by randomly picking an unassigned point and aggregating close-by pixels with the mean-shift algorithm. In the ideal case, with a test loss of zero, this will result in perfect clustering if the repulsion radius, ρ_r, is twice as large as the attraction radius, ρ_a. The relative distance of objects is a strong cue to segment instances in space and time, as the motion of objects is temporally smooth. Knowing the previous distance of an object relative to the camera assists tracking, as the future position will be constrained by the object's current location. Depth estimation with supervised methods requires a vast quantity of high quality annotated data, which is challenging to acquire in a range of environments, as laser measurements can be imprecise in natural scenes with motion and reflections. Because we have access to video in our instance segmentation dataset, we can leverage self-supervised depth losses from monocular videos, where the supervision comes from consecutive temporal frames. In addition to predicting the depth map, ego-motion also has to be inferred, but only during training, to constrain the depth network. Following prior self-supervised approaches, we train a depth network with a separate pose estimation network, under the hypothesis during training that scenes are mostly rigid, therefore assuming appearance change is mostly due to camera motion. The pixels that violate this assumption are masked from the view synthesis loss, as they otherwise create infinite holes during inference for objects that are typically seen in motion during training - more details in Appendix A.1. The training signal comes from novel view synthesis: generation of new images of the scene from a different camera pose.
Let us denote by (I₁, I₂, ..., I_T) a sequence of images, with target view I_t and source view I_s. The view synthesis loss is given by:

L_vs = Σ_s e(I_t, Î_{s→t})

with Î_{s→t} the synthesised view of I_t from source image I_s, using the predicted depth D̂_t and the predicted 4 × 4 camera transformation T̂_{t→s} from the separate pose network. The projection error function e is a weighted sum of L1, Structural Similarity Index (SSIM) and a smoothness regularisation term, following prior work. Let us denote by p_t the coordinate of a pixel in the target image I_t in homogeneous coordinates. Given the camera intrinsic matrix K and the mapping ϕ from the image plane to camera coordinates, the corresponding pixel in the source image is provided by:

p_s ∼ K T̂_{t→s} D̂_t(p_t) ϕ(p_t)

The projected coordinates p_s are continuous values, so we use the Spatial Transformer Network sampling mechanism to bilinearly interpolate the four neighbouring pixels to populate the reconstructed image Î_{s→t}. Some pixels are visible in the target image but not in the source images, leading to a large projection error. As advocated by prior work, taking the minimum projection error instead of the sum greatly reduces artifacts due to occlusion and results in sharper predictions. The resulting view synthesis loss is:

L_vs = min_s e(I_t, Î_{s→t})

The resulting video instance embedding loss is the weighted sum of the attraction, repulsion, regularisation and geometric view synthesis losses: L = λ₁L_attraction + λ₂L_repulsion + λ₃L_regularisation + λ₄L_vs. Our model contains three components: an encoder, a temporal model and the decoders. Each frame is first encoded to a more powerful and compact representation, then the temporal model learns the dynamics of the scene, and finally the decoders output the instance embedding and depth prediction, as illustrated by Figure 2 (our video instance segmentation and depth model architecture: the embedding z_t is trained to explicitly encode appearance, motion and geometry cues in order to predict an instance embedding and a monocular depth prediction). Encoder. We use a ResNet-18 with 14.6M parameters as our encoder, which allows the network to run in real-time on sequences of images. Temporal Model. The scene dynamics are learned with a causal 3D convolutional network composed of blocks of 3D residual convolutions (convolving in both space and time). For a given time index t, the network only convolves over images from indices s ≤ t to compute the temporal representation z_t. It therefore does not use future frames and is completely causal. The temporal model does not decimate the spatial dimension of the encoding, but slowly accumulates information over time from the previous encodings x_s with s ≤ t. The temporal model is trained efficiently with convolutions, as all input images are available during training, enabling parallel computation on GPUs. However, during inference, the model inherently has to be sequential, but it can be made significantly faster by caching the convolutional features over time and eliminating redundant operations, as proposed for WaveNet. Decoders. The decoders then map the temporal encoding z_t to its instance embedding y_t of dimension p × height × width, with p the embedding dimension, and depth d_t of dimension 1 × height × width. The embedding values belonging to the same instance are pushed together in the high-dimensional space R^p, and pulled away from the other instances, over the whole video. Therefore, tracking instances simply requires comparing the mean embedding of a newly segmented instance with previously segmented instances. A distance lower than ρ_r indicates a match.
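To make the view synthesis step above concrete, here is a hedged PyTorch sketch of the projection p_s ∼ K T̂_{t→s} D̂_t(p_t) ϕ(p_t) followed by bilinear sampling; the batching conventions and clamping are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def inverse_warp(img_s, depth_t, T_t2s, K, K_inv):
    """Synthesise the target view from a source image.
    img_s: (B,3,H,W) source image; depth_t: (B,1,H,W) predicted target depth;
    T_t2s: (B,4,4) predicted camera transform; K, K_inv: (B,3,3) intrinsics."""
    B, _, H, W = img_s.shape
    # homogeneous pixel grid p_t
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    p_t = torch.stack([u, v, torch.ones_like(u)], dim=0)
    p_t = p_t.view(1, 3, -1).repeat(B, 1, 1).to(img_s.device)
    # back-project: D_t(p_t) * phi(p_t), with phi(p_t) = K^-1 p_t
    cam = depth_t.view(B, 1, -1) * torch.bmm(K_inv, p_t)
    cam_h = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)   # homogeneous
    # transform into the source frame and project: p_s ~ K T_{t->s} D_t phi(p_t)
    p_s = torch.bmm(K, torch.bmm(T_t2s, cam_h)[:, :3])
    p_s = p_s[:, :2] / p_s[:, 2:3].clamp(min=1e-6)
    # normalise to [-1, 1] and sample bilinearly (Spatial Transformer mechanism)
    grid = torch.stack([2 * p_s[:, 0] / (W - 1) - 1,
                        2 * p_s[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_s, grid, align_corners=True)
```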
To segment the instances, we first run a mask network (trained separately) and then cluster the segmented embeddings using mean shift to discover dense regions of embeddings. Over time, the embeddings are accumulated over a window matching the sequence length used during training, to constrain instances spatio-temporally. This creates increasingly dense regions over time, resulting in better clustering. To ensure that embeddings of a particular instance can smoothly vary over time, the embeddings have a life span corresponding to the sequence length of the model. Pose and Mask Model. For the pose network we use a ResNet, and for the mask network we use an encoder-decoder model, also based on a ResNet. Further details are in Appendix A.1. Next we describe experimental evidence which demonstrates the performance of our method by advancing the state-of-the-art on the KITTI Multi-Object Tracking and Segmentation dataset. The KITTI Multi-Object Tracking and Segmentation (MOTS) dataset contains 8,008 frames with instance segmentation labels, resulting in a total of 26,899 annotated cars. It is composed of 21 scenes with a resolution of 375 × 1242 and consistent instance ID labels across time, allowing the training of video instance segmentation models. The frames are annotated at 10 frames per second, which is suitable for self-supervised monocular depth prediction. The ApolloScape dataset also contains video instance segmentation labels for 49,287 frames, but the annotations are not consistent in time, rendering the training of a temporal model impossible. NuScenes features 1,000 scenes of 20 seconds with annotations at 2Hz in a diverse range of environments (different weather, daytime, city), but it only contains bounding box labels, failing to represent the fine-grained details of instance segmentation. Temporal instance segmentation is also available on short snippets of the DAVIS dataset, but each snippet is recorded by a different camera and is too short to effectively learn a depth model. For this reason, we focus on the KITTI MOTS dataset: it is the only dataset that contains consistent video instance segmentation in a sufficient quantity to train deep models. We halve the input images to our encoder to use an input RGB resolution of 3 × 192 × 640. The resulting encoding is 128 × 24 × 80. The decoders then map the temporal encoding z_t to its instance embedding y_t of dimension p × 192 × 640, with p = 8 the embedding dimension, and depth d_t of dimension 1 × 192 × 640. Except for the experiments in Table 4, we train with a sequence length of 5, which corresponds to 0.5 seconds of temporal context since the videos are 10Hz. In the loss function, we set the attraction radius ρ_a = 0.5 and the repulsion radius ρ_r = 1.5. We weight the losses with attraction and repulsion loss weights λ₁ = λ₂ = 1.0, regularisation loss weight λ₃ = 0.001 and depth loss weight λ₄ = 1.0. Let us define the multi-object tracking and segmentation metrics, which measure the quality of the segmentation as well as the consistency of the predictions over time. Contrary to bounding box detection, where a ground truth box may overlap with several predicted boxes, in instance segmentation, since each pixel is assigned to at most one instance, only one predicted mask can have an Intersection over Union (IoU) larger than 0.5 with a given ground truth mask. Let us denote by H the set of predicted ids, by M the set of ground truth ids, and by g the mapping from hypothesis masks to ground truth masks.
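As an illustration of the inference procedure described above (mean-shift clustering followed by matching on mean embeddings), here is a small sketch using scikit-learn's MeanShift; the data layout and the running-mean update are our own simplifications.

```python
import numpy as np
from sklearn.cluster import MeanShift

def segment_and_track(embeddings, fg_mask, prev_instances, rho_r=1.5, next_id=0):
    """Cluster the foreground embeddings of one frame and match each cluster to a
    previously seen instance by mean-embedding distance.
    embeddings: (H, W, p) array; fg_mask: (H, W) boolean mask from the mask network;
    prev_instances: dict id -> mean embedding of instances still within their life span."""
    ys, xs = np.where(fg_mask)
    pts = embeddings[ys, xs]                          # (N, p) foreground embeddings
    labels = MeanShift(bandwidth=rho_r).fit_predict(pts)
    out = np.full(fg_mask.shape, -1, dtype=np.int64)  # -1 marks background
    for c in np.unique(labels):
        sel = labels == c
        mu = pts[sel].mean(axis=0)
        # match to an existing instance if its mean embedding is within rho_r
        best_id, best_d = None, rho_r
        for inst_id, prev_mu in prev_instances.items():
            d = np.linalg.norm(mu - prev_mu)
            if d < best_d:
                best_id, best_d = inst_id, d
        if best_id is None:                           # unseen object: open a new track
            best_id, next_id = next_id, next_id + 1
        prev_instances[best_id] = mu                  # refresh the stored embedding
        out[ys[sel], xs[sel]] = best_id
    return out, prev_instances, next_id
```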
g: H → M ∪ {∅} is defined as g(h) = m if IoU(h, m) > 0.5, and g(h) = ∅ otherwise. We thus define:
• True positives as TP = {h ∈ H | g(h) ≠ ∅}, correctly assigned predicted masks.
• False positives as FP = {h ∈ H | g(h) = ∅}, predicted masks not assigned to any ground truth mask.
• False negatives as FN = {m ∈ M | g⁻¹(m) = ∅}, ground truth masks not covered by any hypothesis mask.
• Soft number of true positives: T̃P = Σ_{h∈TP} IoU(h, g(h)).
Let the function pred: M → M ∪ {∅} map a ground truth mask to its latest tracked predecessor (∅ if the ground truth mask is seen for the first time). The set IDS of ID switches is defined as the set of ground truth masks whose predecessor was tracked by a different ID. Following the MOTS benchmark, we define the following metrics: multi-object tracking and segmentation precision MOTSP = T̃P/|TP|, multi-object tracking and segmentation accuracy MOTSA = (|TP| − |FP| − |IDS|)/|M|, and finally the soft multi-object tracking and segmentation accuracy sMOTSA = (T̃P − |FP| − |IDS|)/|M|, which measures segmentation as well as detection and tracking quality. We also measure the average precision (AP), i.e. the normalised area under the precision/recall curve. We compare our model to the following baselines for video instance segmentation and report the results in Table 2.
• Single-frame embedding loss: the previous state-of-the-art method, where instance segmentations are propagated in time using intersection-over-union association.
• Without temporal model: spatio-temporal embedding loss, without the temporal model.
• Without depth: temporal model and spatio-temporal embedding loss, without the depth loss.
We also report the results of Mask R-CNN and Track R-CNN, even though direct comparison with the latter is not possible, as their model was pretrained on Cityscapes and Mapillary Vistas and is not causal, since future frames are used to predict the current frame's instance segmentations. Table 2: KITTI MOTS validation set results comparing our model with baseline approaches. The static detection metrics (average precision, recall, precision) are evaluated image by image, without taking into account the temporal consistency of instance segmentations. As the compared models (Without temporal model, Without depth, Ours) all use the same mask network, they show similar performance in terms of detection. However, when evaluating performance on metrics that measure temporal consistency (MOTSA and sMOTSA), our best model shows significant improvement over the baselines. The variant without the temporal model performs poorly, as it does not have any temporal context to learn a spatio-temporal embedding and therefore relies only on spatial appearance. The temporal model, on the other hand, learns with temporal context and local motion, which results in a better embedding. Our model, which learns to predict both a spatio-temporal embedding and monocular depth, achieves the best performance. In addition to using cues from appearance and temporal context, estimating depth allows the network to use information from the relative distance of objects to disambiguate them. Finally, we observe that our model outperforms Mask R-CNN on the temporal metrics (MOTSA and sMOTSA) even though the latter exhibits a higher detection accuracy, further demonstrating the temporal consistency quality of our spatio-temporal embedding. Our model first relies on the mask segmentation to isolate which pixel locations to consider for instance clustering. We evaluate the impact of using the ground truth mask against our predicted mask in Table 3.
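The metrics above can be computed directly from a per-frame matching; the following sketch assumes a hypothetical precomputed matching structure and is only meant to make the definitions operational.

```python
def mots_metrics(frames):
    """frames: list of dicts, one per frame, of the (hypothetical) form
    {"num_gt": int, "hyps": {hyp_id: {"gt": gt_id or None, "iou": float}}}
    where "gt" is the ground truth mask matched with IoU > 0.5, if any.
    Assumes at least one ground truth mask and one true positive overall."""
    tp = fp = ids = 0
    soft_tp = 0.0
    n_gt = 0
    last_tracker = {}                  # gt track id -> hypothesis id of latest match
    for frame in frames:
        n_gt += frame["num_gt"]
        for h, match in frame["hyps"].items():
            if match["gt"] is None:
                fp += 1                # predicted mask with no ground truth partner
                continue
            tp += 1
            soft_tp += match["iou"]
            prev = last_tracker.get(match["gt"])
            if prev is not None and prev != h:
                ids += 1               # the ground truth switched tracker identity
            last_tracker[match["gt"]] = h
    motsp = soft_tp / tp
    motsa = (tp - fp - ids) / n_gt
    smotsa = (soft_tp - fp - ids) / n_gt
    return motsp, motsa, smotsa
```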
The performance gain is significant, hinting that a better instance segmentation result would be possible by improving the mask network. Next, we evaluate the effect of clustering. In the best scenario, the validation loss would be zero, and the clustering with the mean shift algorithm would be perfect. However, this scenario is unlikely, and the clustering algorithm is affected by noisy embeddings. We evaluate the effect of this noise by clustering with the ground-truth mean for each instance, i.e. by thresholding with ρ_r around the ground truth instance embedding mean. This also results in a boost in the evaluation metrics, but most interestingly, a model that uses both ground truth instance embedding mean clustering and the ground truth mask performs worse than a model segmented with the ground truth mask and our clustering algorithm. This is because our clustering algorithm accumulates embeddings from past frames and therefore creates an attraction force for the mean shift algorithm that enables the instances to be matched more consistently. Table 3: Comparing the effect of noisy against ground-truth clustering and mask segmentation on the KITTI MOTS dataset. Our model learns a spatio-temporal embedding that clusters video-pixels from a given instance. Correspondence of instances between frames is achieved by matching detected instances to previous instances if the embedding distance is below the repulsion radius ρ_r. Instance tracking can occur for an arbitrarily long sequence of time, as long as the embedding changes smoothly over time, which is likely the case as temporal context and depth must evolve gradually. However, when the spatio-temporal embedding is trained over sequences which are too long, the embedding learning collapses. This is because the attractive loss term is detrimental between distant frames: it pressures pixels from the same instance to have corresponding embeddings even when their appearance and depth are no longer similar. It also suggests our model is able to reason over lower order motion cues more effectively than longer term dynamics. This is seen experimentally in Table 4. Table 4: Influence of the sequence length on model performance. These results indicate that our model can learn short-term motion features effectively, but not long-term cues. We reason that this is because over longer sequences, the loss prevents the embedding from smoothly shifting, which naturally occurs due to changing pose, appearance, context and lighting in the scene. We find the optimum sequence length on this dataset to be five. The instance segmentation of our model is consistent across frames, as instances are clustered in both space and time. This provides more robust clustering compared to per-frame approaches. We demonstrate this with the following scenarios showing tracking through partial (Figure 3) and full occlusion (Figure 5), as well as continuous tracking through noisy detections (Figure 4). Additional examples and failure cases of our model are shown in Appendix A.2, and a video demo can be viewed here: https://youtu.be/pqRPXRUlQ2I. In each example, we show from left to right: RGB input image, ground truth instance segmentation, predicted instance segmentation, embedding visualised in 2D, embedding visualised in RGB, and predicted monocular depth. The embedding is visualised in 2D with the corresponding mean shift clustering. Each color represents a different instance, the inner circle is the attraction radius of the instance mean embedding, and the outer circle is the repulsion radius of each instance.
Additionally, we visualise the embedding spatially in 3D, by projecting its three principal components to an RGB image. We show in Appendix A.2 that incorporating depth context greatly improves the quality of the embedding, especially in complex scenarios such as partial or total occlusion. We also observe that the embedding is much more structured when incorporating 3D geometry information. We proposed a new spatio-temporal embedding loss that generates consistent instance segmentation over time. The temporal network models the past temporal context, and the depth network constrains the embedding to aid disambiguation between objects. We demonstrated that our model can effectively track occluded instances or instances with missed detections by leveraging the temporal context. Our method advanced the state-of-the-art at video instance segmentation on the KITTI Multi-Object Tracking and Segmentation dataset. Encoder. The encoder is a ResNet-18 convolutional backbone with 128 output channels. Temporal model. The temporal model contains 12 residual 3D convolutional blocks, with only the first and last block convolving over time. Each residual block is the succession of: a projection layer of kernel size 1×1×1 to halve the number of channels, a 3D causal convolutional layer t×3×3, and a projection layer 1×1×1 to double the number of channels. We set the temporal kernel size to t = 2, and the number of output channels to 128. Decoders. The decoders for instance embedding and depth estimation are identical and consist of 7 convolutional layers and 3 upsampling layers. The final convolutional layer contains p channels for the instance embedding and 1 channel for depth. Depth Masking. During training, we remove from the photometric reprojection loss the pixels that violate the rigid scene assumption, i.e. the pixels whose appearance does not change between adjacent frames. We set the mask M to only include pixels where the reprojection error is lower with the warped image Î_{s→t} than with the unwarped source image I_s: M = [min_s e(I_t, Î_{s→t}) < min_s e(I_t, I_s)]. Pose Network. The pose network is the succession of a ResNet-18 model followed by 4 convolutional layers. The last feature map is averaged to output a single 6-DoF transformation matrix. Mask Network. The mask network is trained separately to mask the background, and is the succession of the Encoder and Decoder described above. The following examples show qualitative results and failure cases of our video instance segmentation model on the KITTI Multi-Object Tracking and Segmentation dataset. From left to right: RGB input image, ground truth instance segmentation, predicted instance segmentation, embedding visualised in 2D, embedding visualised in RGB, and predicted monocular depth. We show that our model greatly benefits from depth estimation, with the learned embedding being more structured, and correctly tracks objects in difficult scenarios such as partial or total occlusion. (a) Without depth estimation. (b) With depth estimation. Figure 10: Without depth, the car circled in red is wrongly tracked in frames 5 and 9, while our model correctly tracks it, as the network has learned a consistent embedding based not only on appearance, but also on 3D geometry. Also, the RGB projection of the embedding from our model is considerably better and much more structured.
We introduce a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation, even with occlusions and missed detections, using appearance, geometry, and temporal context.
1,422
scitldr
As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, the ineffectiveness of existing practices in sparse reward environments severely limits further applications in a broader range of advanced fields. Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environments, this paper presents Hindsight Trust Region Policy Optimization (HTRPO), a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within a trust region and, in the meantime, maintains learning stability. Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from alternative goals. Then, we apply Monte Carlo estimation with importance sampling to estimate the KL-divergence between two policies, taking the hindsight data as input. Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence. Such approximation results in a decrease of variance and alleviates the instability during policy updates. Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods. It achieves effective performance and high data-efficiency for training policies in sparse reward environments. Reinforcement learning has been an effective approach for confronting a great many real-world problems, from playing complex strategic games to the precise control of robots, in which policy gradient methods play very important roles. Among them, the ones based on trust regions, including Trust Region Policy Optimization and Proximal Policy Optimization, have achieved stable and effective performance on several benchmark tasks. Later on, they have been verified in a variety of applications including skill learning, multi-agent control and imitation learning, and have been investigated further in combination with more advanced techniques. One unresolved core issue in reinforcement learning is efficiently training the agent in sparse reward environments, in which the agent is given a distinctively high feedback only upon reaching the desired final goal state. On one hand, generalizing reinforcement learning methods to sparse reward scenarios obviates designing a delicate reward mechanism, which is known as reward shaping; on the other hand, receiving rewards only when precisely reaching the final goal states also guarantees that the agent can focus on the intended task itself without any deviation. Despite the extensive use of policy gradient methods, they tend to be vulnerable when dealing with sparse reward scenarios. Admittedly, policy gradient may work in simple and sufficiently rewarding environments through massive random exploration. However, since it relies heavily on the expected return, the chances in complex and sparsely rewarding scenarios become rather slim, which often makes it infeasible to converge to a good policy by exploring randomly. Recently, several works have been devoted to solving the problem of sparse rewards, mainly applying either hierarchical reinforcement learning or a hindsight methodology, including Hindsight Experience Replay, Hindsight Policy Gradient and their extensions.
The idea of Hindsight Experience Replay (HER) is to regard the ending states obtained through interaction under the current policy as alternative goals, and therefore generate more effective training data compared to that with only real goals. Such augmentation overcomes the defects of random exploration and allows the agent to progressively move towards the intended goals. It has proven to be promising when dealing with sparse reward reinforcement learning problems. Hindsight Policy Gradient (HPG) introduces hindsight to the policy gradient approach and improves sample efficiency in sparse reward environments. Yet, its learning curve for policy updates still oscillates considerably, because it inherits the intrinsic high variance of policy gradient methods, which has been widely studied in Schulman et al. (2015b). Furthermore, introducing hindsight to policy gradient methods leads to even greater variance. Consequently, such exacerbation causes obstructive instability during the optimization process. To design an advanced and efficient on-policy reinforcement learning algorithm with hindsight experience, the main problem is the contradiction between the on-policy data needed by the training process and the severely off-policy hindsight experience we can get. Moreover, for TRPO, one of the most significant properties is the approximately monotonic converging process. Therefore, how these advantages can be preserved when the agent is trained with hindsight data also remains unsolved. In this paper, we propose a methodology called Hindsight Trust Region Policy Optimization (HTRPO). Starting from TRPO, a hindsight form of the policy optimization problem within a trust region is theoretically derived, which can be approximately solved with a Monte Carlo estimator using severely off-policy hindsight experience data. HTRPO extends the effective and monotonically iterative policy optimization procedure within a trust region to accommodate sparse reward environments. In HTRPO, both the objective function and the expectation of the KL divergence between policies are estimated using generated hindsight data instead of on-policy data. To overcome the high variance and instability in KL divergence estimation, another f-divergence is applied to approximate the KL divergence, and both theoretically and practically, it is proved to be more efficient and stable. We demonstrate that on several benchmark tasks, HTRPO can significantly improve the performance and sample efficiency in sparse reward scenarios while maintaining learning stability. From the experiments, we illustrate that HTRPO can be neatly applied not only to simple discrete tasks but to continuous environments as well. Besides, it is verified that HTRPO generalizes to different hyperparameter settings with little impact on performance. Consider the standard infinite-horizon reinforcement learning formulation defined by the tuple (S, A, π, ρ₀, r, γ). S represents the set of states and A denotes the set of actions. π: S → P(A) is a policy that represents an agent's behavior by mapping states to a probability distribution over actions. ρ₀ denotes the distribution of the initial state s₀. The reward function r: S → R defines the reward obtained from the environment, and γ ∈ (0, 1) is a discount factor. In this paper, the policy is a differentiable function with respect to its parameter θ.
We follow the standard formalism of the state-action value function Q(s, a), state value function V(s) and advantage function A(s, a). We also adopt the definition of the γ-discounted state visitation distribution ρ_θ(s) = (1 − γ) Σ_{t=0}^∞ γ^t P(s_t = s), in which the coefficient 1 − γ is added to keep the integral of ρ_θ(s) equal to 1. Correspondingly, the γ-discounted state-action visitation distribution, also known as the occupancy measure, is defined as ρ_θ(s, a) = ρ_θ(s) × π_θ(a|s), in which π_θ(a|s) stands for the policy under parameter θ. Trust Region Policy Optimization (TRPO). Schulman et al. (2015a) propose an iterative trust region method that effectively optimizes a policy by maximizing the per-iteration policy improvement. The optimization problem proposed in TRPO can be formalized as follows:

max_θ L_θ̃(θ)   s.t.   E_{s∼ρ_θ̃(s)}[D_KL(π_θ̃(·|s) ‖ π_θ(·|s))] ≤ δ

in which ρ_θ̃(s) = Σ_{t=0}^∞ γ^t P(s_t = s). θ denotes the parameter of the new policy while θ̃ is that of the old one. A trajectory is represented by τ = s₁, a₁, s₂, a₂, .... The objective function L_TRPO(θ) can be given in the form of expected return:

L_TRPO(θ) = E_{s∼ρ_θ̃(s), a∼π_θ̃(a|s)}[(π_θ(a|s)/π_θ̃(a|s)) A_θ̃(s, a)]

Hindsight Policy Gradient (HPG). After generalizing the concept of hindsight, HPG combines the idea with policy gradient methods. Though goal-conditioned reinforcement learning has been explored for a long time and actively investigated in recent works, HPG first extends the idea of hindsight to the goal-conditioned policy gradient and shows that the policy gradient can be computed in expectation over all goals. Then, by applying the hindsight formulation, it rewrites the goal-conditioned policy gradient with trajectories conditioned on some other goal g′ using importance sampling, to improve sample efficiency in sparse-reward scenarios. In this paper, we propose an approach that introduces the idea of hindsight to TRPO, called Hindsight Trust Region Policy Optimization (HTRPO), aiming to further improve policy performance and sample efficiency for reinforcement learning with sparse rewards. In Section 3 and Section 4, we demonstrate how to redesign the objective function and the constraints starting from TRPO, respectively. In order to apply the hindsight methodology, this section presents the main steps in the derivation of the HTRPO objective function. Starting from the original optimization problem in TRPO, the objective function can be written in the following variant form:

L_θ̃(θ) = E_{τ∼p_θ̃(τ)}[Σ_{t=0}^∞ γ^t (π_θ(a_t|s_t)/π_θ̃(a_t|s_t)) A_θ̃(s_t, a_t)]

The derivation process of this variant form is shown explicitly in Appendix A.1 and in Schulman et al. (2015a). Given the expression above, we consider the goal-conditioned objective function of TRPO as a premise for the hindsight formulation. Similar to equation 4, L_θ̃(θ) can correspondingly be given in the following form:

L_θ̃(θ) = E_{τ∼p_θ̃(τ|g)}[Σ_{t=0}^∞ γ^t (π_θ(a_t|s_t, g)/π_θ̃(a_t|s_t, g)) A_θ̃(s_t, a_t, g)]

For the record, though it seems that equation 6 makes off-policy learning possible, it can be used as the objective only when policy π_θ is close to the old policy π_θ̃, i.e. within the trust region. Using severely off-policy data like hindsight experience will make the learning process diverge. Therefore, importance sampling needs to be integrated to correct the difference of the trajectory distribution caused by changing the goal. Based on the goal-conditioned form of the objective function, the following theorem gives the hindsight objective function conditioned on some goal g′ with the distribution correction derived from importance sampling. Theorem 3.1 (HTRPO Objective Function).
For the original goal g and an alternative goal g′, the objective function of HTRPO, L_θ̃(θ), is given by:

L_θ̃(θ) = E_{g′, τ∼p_θ̃(τ|g)}[Σ_{t=0}^∞ (Π_{k=1}^t π_θ̃(a_k|s_k, g′)/π_θ̃(a_k|s_k, g)) γ^t (π_θ(a_t|s_t, g′)/π_θ̃(a_t|s_t, g′)) A_θ̃(s_t, a_t, g′)]

in which τ = s₁, a₁, s₂, a₂, ..., s_t, a_t. Appendix A.2 presents an explicit proof of how the hindsight-form objective function derives from equation 6. It will be solved under a KL divergence expectation constraint, which will be discussed in detail in Section 4. Intuitively, equation 7 provides a way to compute the expected return in terms of the advantage with new-goal-conditioned hindsight experiences, which are generated from interactions directed by old goals. Naturally, Theorem 3.2 gives the gradient of the HTRPO objective function that will be applied to solve the optimization problem. Detailed steps for computing the gradient are presented in Appendix A.3. Theorem 3.2 (Gradient of HTRPO Objective Function). For the original goal g and an alternative goal g′, the gradient ∇_θ L_θ̃(θ) of the HTRPO objective function with respect to θ is given by the following expression:

∇_θ L_θ̃(θ) = E_{g′, τ∼p_θ̃(τ|g)}[Σ_{t=0}^∞ (Π_{k=1}^t π_θ̃(a_k|s_k, g′)/π_θ̃(a_k|s_k, g)) γ^t (∇_θ π_θ(a_t|s_t, g′)/π_θ̃(a_t|s_t, g′)) A_θ̃(s_t, a_t, g′)]

in which τ = s₁, a₁, s₂, a₂, ..., s_t, a_t. This section firstly demonstrates some techniques, with rigorous proofs, that can be used to estimate the expectation of the KL-divergence and further reduce the variance, and then presents how hindsight is applied to the constraint function of TRPO. In TRPO, the expectation of the KL divergence under ρ_θ̃(s) is estimated by averaging the KL divergence values conditioned on all states collected using the old policy; this is exactly Monte Carlo estimation, which is unbiased. However, when we only have access to hindsight experience data, the state distribution inevitably changes, and the previous method for estimating the expectation of the KL divergence is no longer valid. To solve this problem, we firstly transform the KL divergence to an expectation under the occupancy measure ρ_θ̃(s, a) = ρ_θ̃(s) × π_θ̃(a|s). It can then be estimated using collected state-action pairs (s, a), whose changed distribution can be corrected by importance sampling. Then, by making use of another f-divergence, the variance of the estimation is theoretically proved to be reduced, so as to facilitate more stable training. The constraint function in KL-divergence can be naturally converted to a logarithmic form; Appendix B.1 provides a more explicit version of this conversion. Theorem 4.1 (Logarithmic Form of Constraint Function). Given two policies π_θ̃(a|s) and π_θ(a|s), the expectation of their KL-divergence over states s ∼ ρ_θ̃(s) can be written as:

E_{s∼ρ_θ̃(s)}[D_KL(π_θ̃(·|s) ‖ π_θ(·|s))] = E_{s,a∼ρ_θ̃(s,a)}[log π_θ̃(a|s) − log π_θ(a|s)]

However, simply expanding the KL-divergence into logarithmic form still leaves several problems unhandled. Firstly, the variance remains excessively high, which would cause considerable instability during the learning process. Secondly, the current estimate of the KL-divergence can be negative. If a negative estimate of the KL-divergence expectation is encountered, the learning process suffers fatal instability. The following Theorem 4.2 describes a technique to reduce the variance, and Theorem 4.3 gives a strict proof of the decrease in variance. Theorem 4.2 (Approximation of Constraint Function). For policies π_θ̃(a|s) and π_θ(a|s), and for η = π_θ(a|s) − π_θ̃(a|s),

E_{s,a∼ρ_θ̃(s,a)}[log π_θ̃(a|s) − log π_θ(a|s)] = E_{s,a∼ρ_θ̃(s,a)}[½(log π_θ̃(a|s) − log π_θ(a|s))²] + O(η³)

Theorem 4.2 demonstrates that when θ and θ̃ differ only slightly, the expectation of log π_θ̃(a|s) − log π_θ(a|s) can be sufficiently estimated by the expectation of half its square. The proof is provided in Appendix B.2. In fact, E_{s,a∼ρ_θ̃(s,a)}[½(log π_θ̃(a|s) − log π_θ(a|s))²] is the expectation of an f-divergence, where f(x) = ½x(log x)².
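The two estimators of the KL expectation discussed above can be contrasted in a few lines; this is a minimal sketch operating on precomputed log-probabilities, not the authors' implementation.

```python
import torch

def kl_surrogate(logp_old, logp_new):
    """Estimate E[D_KL(pi_old || pi_new)] over sampled (s, a) pairs using the
    f-divergence surrogate 0.5 * (log pi_old - log pi_new)^2 of Theorem 4.2.
    Every sample contributes a non-negative value, so the estimate can never
    be negative, unlike the plain log-ratio estimator."""
    return 0.5 * (logp_old - logp_new).pow(2).mean()

def kl_naive(logp_old, logp_new):
    """Plain Monte Carlo estimator of the same expectation ("KL1" in the
    ablations): unbiased, but it can be negative on small batches, which
    destabilises the trust-region update."""
    return (logp_old - logp_new).mean()
```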
Noticeably, f(x) is a strictly convex function when x ∈ (1/e, ∞), and f(1) = 0. Moreover, it is noteworthy that this kind of estimation brings two corresponding major improvements. Firstly, it is guaranteed to reduce the variance, which leads to more stable performance; this merit is explained in detail in Theorem 4.3. Another significant improvement is manifested in the elimination of negative KL-divergence estimates, since the estimate presents itself in the form of a square, which is always non-negative. Theorem 4.3 (Variance of Constraint Function). For policies π_θ̃(a|s) and π_θ(a|s),

Var_{s,a∼ρ_θ̃(s,a)}[½(log π_θ̃(a|s) − log π_θ(a|s))²] ≤ Var_{s,a∼ρ_θ̃(s,a)}[log π_θ̃(a|s) − log π_θ(a|s)]

Theorem 4.3 illustrates that there is a decrease from the variance of log π_θ̃(a|s) − log π_θ(a|s) to the variance of its square, and furthermore indicates that the variance is effectively reduced. The proof is given in detail in Appendix B.3. In fact, the closer θ̃ and θ are, the more the variance decreases. Based on Theorem 4.1 to Theorem 4.3, in this paper, we adopt the following form of constraint condition:

E_{s,a∼ρ_θ̃(s,a)}[½(log π_θ̃(a|s) − log π_θ(a|s))²] ≤ δ

In Theorem 4.4, we demonstrate that hindsight can also be introduced to the constraint function. The proof follows a methodology similar to that in Section 3 and is deducted explicitly in Appendix B.4. Theorem 4.4 (HTRPO Constraint Function). For the original goal g and an alternative goal g′, the constraint between policy π_θ̃(a|s) and policy π_θ(a|s) is given by:

E_{g′, τ∼p_θ̃(τ|g)}[Σ_{t=0}^∞ (Π_{k=1}^t π_θ̃(a_k|s_k, g′)/π_θ̃(a_k|s_k, g)) γ^t ½(log π_θ̃(a_t|s_t, g′) − log π_θ(a_t|s_t, g′))²] ≤ δ/ε

in which ε = 1 − γ. Theorem 4.4 implies the practicality of using hindsight data under alternative goals g′ to estimate the expectation. From all the illustrations above, the final form of the HTRPO optimization problem is to maximize the hindsight objective of Theorem 3.1 subject to the hindsight constraint of Theorem 4.4. The solving process for the HTRPO optimization problem is explicitly demonstrated in Appendix C, and the complete algorithm procedure is included in Appendix D. This section demonstrates the validation of HTRPO on several sparse reward benchmark tasks. The design of our experiments aims to conduct an in-depth investigation of the following aspects:
• How effective is HTRPO?
• How does each component of HTRPO contribute to its effectiveness?
• How well do policy gradient methods trained with hindsight data perform in continuous environments?
• How sensitive is HTRPO to network architecture and some key parameters?
We implement HTRPO on a variety of reinforcement learning environments, including Bit Flipping, Grid World and Fetch. Among them, Bit Flipping, Grid World, Fetch Reach and Fetch Push are implemented as discrete-action environments, while we also conduct continuous versions of the experiments in Fetch Reach, Fetch Push and Fetch Slide. A glimpse of these environments is given in Figure 1, while detailed introductions are included in Appendix F.1. The reward mechanisms are intentionally modified to be sparse. Besides, for the continuous versions of the Fetch experiments, we apply an additional policy entropy bonus to encourage more exploration. For each trial of interaction, the reward for the agent is set as the remaining number of time steps plus one, and all goals during exploration are chosen uniformly at random for both training and evaluation. During the training process, we terminate an episode either when the maximum number of time steps has elapsed or when the goal state is reached. We evaluate agents' performance by documenting 10 learning trials in the form of average return and their corresponding standard deviation.
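Before turning to the experimental details, the following is a hedged sketch of how the per-trajectory term of the hindsight objective (Theorem 3.1) can be estimated; the interface, with log-probabilities precomputed for both the original and the alternative goal, is our assumption for illustration.

```python
import torch

def htrpo_objective(logp_new, logp_old_gp, logp_old_g, advantages, gamma=0.98):
    """Monte Carlo estimate of the hindsight objective for one trajectory.
    logp_new:     log pi_theta(a_t | s_t, g') for each step t
    logp_old_gp:  log pi_theta_old(a_t | s_t, g')
    logp_old_g:   log pi_theta_old(a_t | s_t, g)  (the goal used when acting)
    advantages:   A_theta_old(s_t, a_t, g') for each step t."""
    T = logp_new.shape[0]
    # hindsight weights: prod_{k<=t} pi_old(a_k|s_k, g') / pi_old(a_k|s_k, g)
    log_w = torch.cumsum(logp_old_gp - logp_old_g, dim=0)
    ratio = torch.exp(logp_new - logp_old_gp)      # pi_theta / pi_theta_old on g'
    discount = gamma ** torch.arange(T, dtype=torch.float32)
    return (torch.exp(log_w) * discount * ratio * advantages).sum()
```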
In the Bit Flipping and Grid World environments, the network architecture has two hidden layers, each with 64 hyperbolic tangent units; in the Fetch environment, for both discrete and continuous implementations, the network contains two 256-unit hidden layers. For all environments mentioned above, we compare HTRPO with HPG and TRPO, which are chosen as the baseline algorithms. Since HPG was never applied to continuous environments in the original work, we implement an HPG variant adapted to continuous environments. Note that the way we scale the time axis is significantly different from that of the original HPG evaluation: instead of regarding a certain number of training batches as the interval between evaluation steps, we directly use the accumulated time steps the agent takes while interacting with the environments throughout episodes and batches. Besides comparing with the baselines, we also ablate each component of HTRPO to investigate how significant it is for the final performance. To be specific, we adopt the "vanilla" estimation of the KL-divergence, which we call "HTRPO with KL1", instead of the one proposed in Section 4; we also observe the performance of our algorithm without weighted importance sampling, which is denoted as "HTRPO without WIS" in this paper. In discrete environments, we test both the officially released version of HPG and our HPG implementation, while for the continuous Fetch environments, we only test our HPG due to the lack of support for continuous tasks in the official release. We apply input normalization as in the HPG paper. The officially released version of HPG eventually converges to performances similar to those of HTRPO in discrete environments, but sometimes it is still far from converging under this time-step evaluation setting. This kind of distinction in converging speed between our HPG and the official HPG may be caused by the reduction of noise, since we use the TD-error to update policies instead of the return corrected by importance sampling, which is adopted in HPG. Thus, for the fairness of comparison, in the following analysis, we mainly compare the properties of HTRPO and our HPG. From the results we can see that in both discrete and continuous environments, HTRPO outperforms HPG significantly. Aside from assuring a good converging property, the sample efficiency of HTRPO also exceeds that of HPG, for it reaches a higher average return within less time in most environments. As for TRPO, though it can converge in several simple tasks like Bit Flipping, Grid World and continuous Fetch Reach, it remains incompetent in dealing with complex control tasks including Fetch Push and Fetch Slide, in all of which HTRPO can learn a good policy. The reason is that for TRPO, it is basically impossible to acquire a positive reward at the beginning of training in such environments, which makes the policy updates meaningless. How does each component of HTRPO contribute to its effectiveness? In both Figure 2 and Figure 3, "HTRPO with KL1" and "HTRPO without WIS" perform much worse than the complete version of HTRPO. When we estimate the KL-divergence using the "vanilla" KL-divergence defined in equation 9, it causes severe instability, meaning that the estimated KL-divergence can be negative with an unacceptable probability. In practice, the corresponding iteration is skipped without any update of the policy in this scenario. Given the phenomenon stated above, the final performance of "HTRPO with KL1" is much worse and more unstable in all environments.
As for the study of weighted importance sampling, it is widely known to significantly reduce variance, which is once again proved by the results of "HTRPO without WIS". Admittedly, we can see that the performance of "HTRPO without WIS" matches the full version of HTRPO in several simple environments in Figure 2 (a)-(d) and Figure 3 (a). However, for more complex environments like Fetch Push and Fetch Slide, the variance is detrimentally larger than in simple environments. In short, the performance of "HTRPO without WIS" shows a severe degradation compared to the full version of HTRPO. How well do policy gradient methods trained with hindsight data perform in continuous environments? As mentioned in the HPG paper, it remains unexplored to what extent policy gradient methods trained with hindsight data can solve continuous control tasks. In this section, we provide the answer. We implement HTRPO in continuous control tasks including Fetch Reach, Fetch Push and Fetch Slide. HPG is tested as well for comparison. From the results, we can see that with the help of input normalization, HPG can learn a valid policy in continuous control tasks. Still, HTRPO performs much better than HPG in all three environments, benefiting from faster and more stable convergence. As illustrated in Figure 3, HTRPO eventually achieves an average success rate of 92% for Fetch Push and 82.5% for Fetch Slide. How sensitive is HTRPO to network architecture and some key parameters? To study the sensitivity of HTRPO to different network architectures, we observe the performance of HTRPO with different network settings. From the results demonstrated in Appendix F.2.1, HTRPO achieves commendable performance with all three different network architectures, while HPG only converges under certain settings. As for the sensitivity of HTRPO to key parameters, we mainly observe the impact of different numbers of alternative goals. Based on the learning curves in Appendix F.2.2, we can see that HTRPO with more alternative goals achieves better converging speed. We have extended the monotonically converging on-policy algorithm TRPO to accommodate sparse reward environments by adopting the hindsight methodology. The optimization problem in TRPO is rigorously derived into its hindsight formulation and, when the KL-divergence in the constraint function is small enough, it can be tactfully approximated by another f-divergence in order to reduce estimation variance and improve learning stability. Experimental results on a variety of environments demonstrate the effective performance of HTRPO and validate its sample efficiency and stable policy update quality in both discrete and continuous scenarios. Therefore, this work reveals HTRPO's vast potential in solving sparse reward reinforcement learning problems. We greatly acknowledge all the funding in support of this work. Without influence on the optimal solution, we can multiply equation 3 by a constant. A.2 THEOREM 3.1. Theorem 3.1 (HTRPO Objective Function). For the original goal g and an alternative goal g′, the objective function of HTRPO, L_θ̃(θ), is given by equation 7, in which τ = s₁, a₁, s₂, a₂, ..., s_t, a_t. Proof.
Starting from equation 6, for every time step t in the expectation, denote R_t = γ^t (π_θ(a_t|s_t, g)/π_θ̃(a_t|s_t, g)) A_θ̃(s_t, a_t, g), so that L_θ̃(θ) = E_{τ∼p_θ̃(τ|g)}[Σ_t R_t]. Split every trajectory τ into τ₁ and τ₂, where τ₁ = s₁, a₁, s₂, a₂, ..., s_t, a_t and τ₂ = s_{t+1}, a_{t+1}, .... Since R_t is independent of τ₂ conditioned on τ₁, the expectation of R_t over full trajectories reduces to an expectation over s_{1:t}, a_{1:t} ∼ p_θ̃(s_{1:t}, a_{1:t}|g). Thus, each summand can be evaluated under p_θ̃(s_{1:t}, a_{1:t}|g). Following the techniques of importance sampling, the objective function can be rewritten in the form of a new goal g′: because the transition dynamics do not depend on the goal, after expanding the objective function and cancelling terms, the density ratio p_θ̃(s_{1:t}, a_{1:t}|g′)/p_θ̃(s_{1:t}, a_{1:t}|g) reduces to Π_{k=1}^t π_θ̃(a_k|s_k, g′)/π_θ̃(a_k|s_k, g), which yields equation 7. A.3 THEOREM 3.2. Theorem 3.2 (Gradient of HTRPO Objective Function). For the original goal g and an alternative goal g′, the gradient ∇_θ L_θ̃(θ) of the HTRPO objective function with respect to θ is given by equation 8, in which τ = s₁, a₁, s₂, a₂, ..., s_t, a_t. Proof. Starting from equation 24, since π_θ(a_t|s_t, g′) is the only term relevant to θ, the corresponding gradient of the objective function is obtained by replacing the ratio π_θ(a_t|s_t, g′)/π_θ̃(a_t|s_t, g′) with its gradient with respect to θ. B.1 THEOREM 4.1. Proof. Expand the expectation in equation 2 by the definition of the KL-divergence; since ρ_θ̃(s, a) = ρ_θ̃(s) π_θ̃(a|s), this yields E_{s,a∼ρ_θ̃(s,a)}[log π_θ̃(a|s) − log π_θ(a|s)]. B.2 THEOREM 4.2. Lemma B.1. Given two distributions p(x) and q(x) with q(x) = p(x) + η(x), in which η(x) is the variation of q(x) at p(x), the expectation of log p(x) − log q(x) under p(x) equals the expectation of half its square up to O(η³) terms. Proof. Consider the second-order Taylor expansion of log q(x) at p(x); substituting it into the left side of equation 27 and into the first term on the right side of equation 27, and comparing the remaining terms, proves the lemma. Theorem 4.2 (Approximation of Constraint Function) then follows for policies π_θ̃(a|s) and π_θ(a|s), with η = π_θ(a|s) − π_θ̃(a|s), by applying Lemma B.1 state-wise. B.3 THEOREM 4.3. Theorem 4.3 bounds the variance of the squared estimator by that of the plain estimator, in which Var(Y) denotes the variance of Y. Proof. Denote by X₁ and X₂ the two estimators viewed as functions of Y. There always exists a value Y₀ at which X₁ = µ₁ and X₂ = E(X₂), in which µ₁ is a constant. The difference of the variances can then be converted by the steps of equations 33-34. Thus, when Y = Y₀, the two factors in equation 34, (X₁ − µ₁) and (X₂ − E(X₂)), equal 0 simultaneously. Also, it is easy to notice that when Y ∈ [0, 0.5], X₁ and X₂ are strictly increasing with the increase of Y. Thus, (X₁ − µ₁) and (X₂ − E(X₂)) are either both positive or both negative, if not zero. Therefore, their product has a non-negative expectation. Lemma B.3. For any random variable Y, a corresponding variance inequality holds, in which Var(Y) denotes the variance of Y; the proof follows directly from equations 38-39. Consequently, for Y = log π_θ̃(a|s) − log π_θ(a|s), combining equation 40 and equation 41 with the transitivity of inequality, we obtain the variance reduction claimed for ½(log π_θ̃(a|s) − log π_θ(a|s))². B.4 THEOREM 4.4. Theorem 4.4 (HTRPO Constraint Function). For the original goal g and an alternative goal g′, the constraint between policy π_θ̃(a|s) and policy π_θ(a|s) is given by equation 12, in which ε = 1 − γ. Proof. Starting from equation 9, the constraint condition is first multiplied by a constant, and the constraint function is denoted as f_θ̃(θ). To write the constraint function in goal-conditioned form, and in a similar way to the proof of Theorem 3.1, denote every time step of f_θ̃(θ) as f_θ̃(θ, t); in other words, split each trajectory into τ₁ = s₁, a₁, s₂, a₂, ..., s_t, a_t and τ₂ = s_{t+1}, a_{t+1}, .... Since ½(log π_θ̃(a_t|s_t, g) − log π_θ(a_t|s_t, g))² is independent of τ₂ conditioned on τ₁, each f_θ̃(θ, t) can be evaluated under p_θ̃(s_{1:t}, a_{1:t}|g). Accordingly, and furthermore by importance sampling, for a new goal g′, the constraint can be converted to its hindsight form, in which τ = s₁, a₁, s₂, a₂, ..., s_t, a_t. Denote ε = 1 − γ. Based on equation 23, by expanding and canceling terms, the constraint condition can be written as in Theorem 4.4. C SOLVING PROCESS FOR HTRPO. Based on the final form of the HTRPO optimization problem, this section completes the feasibility of the algorithm with estimators for the objective function and the KL-divergence constraint. Suppose a dataset D = {(τ^(i), g^(i))}_{i=1}^N is available, where each trajectory τ^(i) is obtained from interacting with the environment under a goal g^(i).
In order to generate hindsight experience, we also need to sample a set of alternative goals G = {g^(j)}. The Monte Carlo estimation of the HTRPO optimization problem with dataset D can then be derived by averaging the per-trajectory hindsight terms over trajectories and alternative goals, in which the normalisation constant λ is determined by N and N_g, and g′ is supposed to follow a uniform distribution. However, in the experiments, we follow the alternative goal sampling method of HPG. As a result, the goals of the training data actually follow the distribution of alternative goals instead of a uniform distribution, and the objective and KL expectation are estimated w.r.t. the alternative goal distribution. Therefore, during the learning process, our algorithm encourages the agent to achieve the alternative goals. Such a mechanism serves as a mutual approach for all hindsight methods, which can be seen as a merit, for the intention is to guide the agent to achieve the alternative goals and then generalize to the original goals. However, as discussed in prior work, this kind of estimation may result in excessive variance, which leads to an unstable learning curve. In order to avoid instability, we adopt the technique of weighted importance sampling and further convert the optimization problem to the form of expressions 55 and 56. We provide an explicit solution method for this optimization problem in Appendix C.2. While introducing weighted importance sampling may cause a certain level of bias, which is identical to that of HPG, the bias theoretically decreases in inverse proportion to the amount of data. Given limited resources, we need to trade off between reducing bias and enlarging the batch size. By picking an appropriate batch size, the improvement from weighted importance sampling is well demonstrated in the experiments. For the HTRPO optimization problem, briefly denote the optimization problem in expressions 55 and 56 as:

max_θ f(θ)   s.t.   g(θ) ≤ δ

For any policy parameter θ in the neighborhood of the parameter θ̃, approximate the optimization problem with a linear objective function and a quadratic constraint:

max_θ ∇_θ f(θ̃)ᵀ(θ − θ̃)   s.t.   g(θ̃) + ∇_θ g(θ̃)ᵀ(θ − θ̃) + ½(θ − θ̃)ᵀ∇²_θ g(θ̃)(θ − θ̃) ≤ δ

Noticeably, g(θ̃) = 0 and ∇_θ g(θ̃) = 0, which further simplifies the optimization problem to the following form:

max_θ ∇_θ f(θ̃)ᵀ(θ − θ̃)   s.t.   ½(θ − θ̃)ᵀ∇²_θ g(θ̃)(θ − θ̃) ≤ δ

Given a convex optimization problem with a linear objective function under a quadratic constraint, many well-practiced approaches can be taken to solve the problem analytically, among which we adopt the Karush-Kuhn-Tucker (KKT) conditions. For a Lagrangian multiplier λ, the stationarity, feasibility and complementary slackness conditions in expression 60 form the KKT conditions of the optimization problem. Solving the KKT conditions yields

θ = θ̃ + √(2δ / (∇_θ f(θ̃)ᵀ[∇²_θ g(θ̃)]⁻¹∇_θ f(θ̃))) · [∇²_θ g(θ̃)]⁻¹∇_θ f(θ̃)

The policies in this paper, however, are in the form of a neural network, which makes it extremely time-consuming to compute the Hessian matrix. Thus, we compute [∇²_θ g(θ̃)]⁻¹∇_θ f(θ̃) with the conjugate gradient algorithm by solving the equation

[∇²_θ g(θ̃)] x = ∇_θ f(θ̃)

in which the Hessian-vector product [∇²_θ g(θ̃)]x can be practically calculated through the expansion [∇²_θ g(θ̃)]x = ∇_θ(∇_θ g(θ̃)ᵀ x). The complete procedure (Appendix D) can be summarized as follows:

while not converged:
    sample trajectories τ = {(s_t, a_t, r_t, s_{t+1}, g, π_θ(a_t|s_t, g))}_{t=1}^T using the current policy θ and store them in B_origin
    sample alternative goals G = {g^(i)} from achieved goals in B_origin
    for each τ in B_origin:
        for t = 0 to T:
            compute π_θ(a_t|s_t, g^(i))
            modify the reward r_t|g → r_t|g^(i)
    update the policy by solving the HTRPO optimization problem

In this section, we provide a more comprehensive demonstration of the experiments on HTRPO.
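A minimal sketch of the conjugate gradient computation and the resulting trust-region step (the KKT solution above) follows; the hvp callback is assumed to implement the Hessian-vector product ∇_θ(∇_θ g(θ̃)ᵀx), e.g. via automatic differentiation.

```python
import torch

def conjugate_gradient(hvp, b, iters=10, tol=1e-10):
    """Solve H x = b given only a Hessian-vector product function hvp(v)."""
    x = torch.zeros_like(b)
    r = b.clone()                    # residual b - H x, with x = 0 initially
    p = b.clone()
    rs_old = r.dot(r)
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs_old / p.dot(Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r.dot(r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def trust_region_step(grad_f, hvp, delta):
    """KKT solution: step = sqrt(2*delta / (g^T H^-1 g)) * H^-1 g."""
    x = conjugate_gradient(hvp, grad_f)       # x approximates H^-1 grad_f
    step_size = torch.sqrt(2 * delta / grad_f.dot(x))
    return step_size * x
```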
In detail, Section F.1 provides a full introduction to each environment; Section F.2 gives the sensitivity analysis of HTRPO, including the performance under different network architectures and different numbers of alternative goals, in which we strictly adopt the control variable method and only the studied parameter is altered; Section F.3 shows supplementary experimental data, including learning curves and success rates during the training process. We fine-tune the hyperparameters according to experience, without hyperparameter search, due to limited computing resources. k-Bit Flipping. In each episode of this experiment, two arrays of length k are generated. The first array is initialized with all 0's, while the second one, usually regarded as the target array, is generated randomly. At each time step, the agent is able to flip one bit of the first array from 0 to 1 or from 1 to 0. Once the first array is exactly the same as the target array, the agent reaches the goal state and is then rewarded. The maximum number of time steps is k. In this experiment, we observe the performance of HTRPO under the conditions k = 8 and k = 16 respectively. The general process of an 8-Bit Flipping task is demonstrated in Figure 1 (a). Grid World. In this experiment, the agent starts at a position in an 11 × 11 grid with impassable obstacles, and tries to reach another randomly chosen position in this grid. The agent is allowed to move up, down, left and right at each time step. Moving into obstacles makes the agent remain in its current position. States of this environment are represented by 2-dimensional integer coordinates, and the maximum number of time steps is 32. In the Empty Maze environment, there are no impassable obstacles other than the outer boundary, and the agent starts at the upper-left corner of the grid. In the Four Rooms environment, walls separate the grid into 4 rooms, each with access to its adjacent rooms through single openings. Example cases of the Empty Maze and Four Rooms environments adopted in this paper are demonstrated in Figure 1 (b) and (c). Fetch. The Fetch environment contains a 7-DoF Fetch robotic arm with a two-fingered parallel gripper in simulation. In the Fetch Reach environment, a target position is randomly chosen and the gripper of the Fetch robotic arm needs to be moved onto it. In Fetch Push, the task for the robotic arm is to push a randomly placed block towards the goal state, another randomly picked position, which is represented by a 3-dimensional Cartesian coordinate. In Fetch Slide, the robotic arm needs to exert a force on the block for it to slide towards a chosen goal at a certain distance. A pictorial demonstration of this environment is shown in Figure 1. In this experiment, we observe the performance of HTRPO with different network architectures. Specifically, we implement the proposed algorithm under 3 different network settings, i.e. networks with a single 16-unit layer, two 64-unit layers and two 256-unit layers respectively. For the record, all parameters and other settings remain the same aside from the network architecture. As demonstrated in Figure 4, each row shows the performance under different network architecture settings for each environment. A general conclusion can be drawn that networks with more hidden layers and more neurons help to speed up convergence. However, one difference is that HTRPO converges quickly in all the settings, while HPG converges much more slowly, especially when the network architecture is simple.
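For completeness, here is a minimal sketch of the k-Bit Flipping environment described in Appendix F.1; the observation encoding (state concatenated with the goal) is our assumption, while the sparse reward follows the "remaining time steps plus one" rule stated above.

```python
import numpy as np

class BitFlippingEnv:
    """Minimal sketch of the k-Bit Flipping task; bookkeeping details are
    our simplifications, not the authors' released environment."""
    def __init__(self, k=8):
        self.k = k

    def reset(self):
        self.state = np.zeros(self.k, dtype=np.int64)   # array of all 0's
        self.goal = np.random.randint(0, 2, self.k)     # random target array
        self.t = 0
        return np.concatenate([self.state, self.goal])

    def step(self, action):
        self.state[action] ^= 1                          # flip one chosen bit
        self.t += 1
        solved = np.array_equal(self.state, self.goal)
        done = solved or self.t >= self.k                # at most k time steps
        # sparse reward: remaining time steps + 1 on success, otherwise 0
        reward = (self.k - self.t + 1) if solved else 0
        return np.concatenate([self.state, self.goal]), reward, done, {}
```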
We believe that the iterative search for the optimal solution within the trust region helps the network converge rapidly and makes it more robust to different network architectures. In this experiment, we study how the number of alternative goals, as a key parameter, affects the performance of HTRPO. We conduct all the experiments, both discrete and continuous, with different numbers of alternative goals. For discrete environments, we set the number of alternative goals to 10, 30, 100 and ∞ in turn. For continuous environments, we compare the performance under 10, 30 and 100 alternative goals respectively. The evaluation curves are shown in Figure 5. From the results, we can see that in simple discrete environments, ∞ alternative goals produce the fastest convergence. In complex and continuous environments, 30 and 100 alternative goals lead to comparatively good performance. It is not hard to see that HTRPO with more alternative goals achieves better converging speed, which may be credited to the corresponding increase in training samples. This is, to some extent, similar to data augmentation. In this section, we demonstrate the success rates of HTRPO during both evaluation and training. For the record, the actions during the training process are sampled from the distribution output by the network, while during the evaluation process, we adopt a greedy strategy and choose the action as the mean value of the distribution. Table 3 lists the success rates of Fetch Push and Fetch Slide during evaluation, in which the ultimate values reflect the mean computed over 1000 test episodes in each iteration. They are the only two environments listed, for they are the most complex ones. Figure 7 illustrates the success rate curves during the training process. Figure 9 demonstrates the estimation of the KL divergence expectation during the training process. From these data, we can see that in the experiments, the approximation of equation 13 significantly reduces the variance of the KL expectation estimation. Besides, the comparison of performance between HTRPO and HTRPO with KL1 also shows the efficiency of this approximation, which helps improve the final performance significantly. Both "HTRPO" and "HTRPO without WIS" use the estimation method in equation 13, with one difference being that "HTRPO without WIS" does not adopt weighted importance sampling. Thus, from Figure 9, we can see that "HTRPO" demonstrates the least variance. The curves for KL1 are comparatively lower than those of equation 13. Note that in TRPO, the line search mechanism adjusts the updating step size according to the estimation of the KL divergence expectation. It sets a threshold to constrain the KL divergence. For updates above the threshold, the step size will be reduced to ensure that the estimated KL divergence falls within the threshold. This explains why the curves for KL1 are comparatively lower. However, since the estimation of the KL divergence expectation in HTRPO falls near the expected value, such step size adjustment is rarely triggered. This benefits from the much lower variance of equation 13.
This paper proposes an advanced policy optimization method with hindsight experience for sparse reward reinforcement learning.
1,423
scitldr
Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task, despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query reformulation, knowledge, and textual entailment, our system is able to outperform several strong baselines on the ARC dataset. The recently released AI2 Reasoning Challenge (ARC) and accompanying ARC Corpus constitute an ambitious test for AI systems that perform open-domain question answering (QA). This dataset consists of 2,590 multiple choice questions authored for grade-school science exams; the questions are partitioned into an Easy set and a Challenge set. The Challenge set comprises questions that cannot be answered correctly by either a Pointwise Mutual Information (PMI-based) solver or by an Information Retrieval (IR-based) solver. The ARC authors also note that the simple information retrieval (IR) methodology (Elasticsearch) that they use is a key weakness of current systems, and conjecture that 95% of the questions can be answered using ARC corpus sentences. ARC has proved to be a difficult dataset to perform well on, particularly its Challenge partition: existing systems like KG² achieve 31.70% accuracy on the test partition. Older models such as DecompAttn BID27 and BiDAF that have shown good performance on other datasets (e.g. SQuAD BID29) perform only 1-2% above random chance. The seeming intractability of the ARC Challenge dataset has only very recently shown signs of yielding, with the newest techniques attaining an accuracy of 42.32% on the Challenge set BID35. An important avenue of attack on ARC was identified in Boratko et al. [2018a,b], which examined the knowledge and reasoning requirements for answering questions in the ARC dataset. The authors note that "simple reformulations to the query can greatly increase the quality of the retrieved sentences". They quantitatively measure the effectiveness of such an approach by demonstrating a 42% increase in score on ARC-Easy using a pre-trained version of the DrQA model BID7. Another recent tack that many top-performing systems for ARC have taken is the use of natural language inference (NLI) models to answer the questions. The NLI task, also sometimes known as recognizing textual entailment, is to determine whether a given natural language hypothesis h can be inferred from a natural language premise p.
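Putting the retrieval and entailment pieces described above together, the end-to-end decision methodology can be sketched as follows; the retrieve and entail_prob callables are hypothetical stand-ins for the Elasticsearch retrieval and the SciTail-trained entailment model, and the hypothesis construction is our simplification.

```python
def select_answer(question, candidates, retrieve, entail_prob):
    """Score each answer candidate by the strongest entailment support found in
    retrieved evidence, then pick the best-supported candidate.
    retrieve(query) -> list of evidence sentences (e.g. from a search index);
    entail_prob(premise, hypothesis) -> P(entailment) from an NLI classifier."""
    scores = {}
    for cand in candidates:
        hypothesis = f"{question} {cand}"      # question restated with the candidate
        evidence = retrieve(hypothesis)
        # max-pooling over evidence: one strongly entailing sentence suffices
        scores[cand] = max((entail_prob(p, hypothesis) for p in evidence),
                           default=0.0)
    return max(scores, key=scores.get)
```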
The NLI problem is often cast as a classification problem: given a hypothesis and premise, classify their relationship as either entailment, contradiction, or neutral. NLI models have improved state-of-the-art performance on a number of important NLP tasks BID27 and have gained recent popularity due to the release of large datasets BID4 BID46 BID43. In addition to the NLI models, other techniques applied to ARC include using pre-trained graph embeddings to capture commonsense relations between concepts BID51, as well as the current state-of-the-art approach that recasts multiple choice question answering as a reading comprehension problem that can also be used to fine-tune a pre-trained language model BID35.

ARC Challenge represents a unique obstacle in the open domain QA world, as the questions are specifically selected not to be answerable by merely using basic techniques augmented with a high quality corpus. Our approach combines current best practices: it retrieves highly salient evidence, and then judges this evidence using a general NLI model. While other recent systems for ARC have taken a similar approach BID26 BID25, our extensive analysis of both the rewriter module and our decision rules sheds new light on this unique dataset. In order to overcome some of the limitations of existing retrieval-based systems on ARC and other similar corpora, we present an approach that uses the original question to produce a set of reformulations. These reformulations are then used to retrieve additional supporting text which can then be used to arrive at the correct answer. We couple this with a textual entailment model and a robust decision rule to achieve good performance on the ARC dataset. We discuss important lessons learned in the construction of this system, and key issues that need to be addressed in order to move forward on the ARC dataset.

Teaching machines how to read, reason, and answer questions over natural language is a long-standing area of research; doing this well has been a very important mission of both the NLP and AI communities. The Watson project BID16 -- also known as DeepQA -- is perhaps the most famous example of a question answering system to date. That project involved largely factoid-based questions, and much of its success can be attributed to the quality of the corpus and the NLP tools employed for question understanding. In this section, we look at the most relevant prior work in improving open-domain question answering. A number of datasets have been proposed for reading comprehension and question answering. BID18 manually created a dataset of 3rd and 6th grade reading comprehension questions with short answers. The techniques that were explored for this dataset included pattern matching, rules, and logistic regression. MCTest BID31 is a crowdsourced dataset comprising 660 elementary-level children's fictional stories, which are the source of questions and multiple choice answers. Questions and answers were constructed with a restricted vocabulary that a 7-year-old could understand. Half of the questions required the answer to be derived from two sentences, with the motivation being to encourage research in multi-hop (as opposed to one-hop) reasoning. Recent techniques such as those presented by BID39 and BID48 have performed well on this dataset. The original SQuAD dataset BID29 quickly became one of the most popular datasets for reading comprehension: it uses Wikipedia passages as its source, and question-answer pairs are created using crowdsourcing.
While it is stated that SQuAD requires logical reasoning, the complexity of reasoning required is far less than that required by the AI2 standardized tests dataset BID9. Some approaches have already attained human-level performance on the first version of SQuAD. More recently, an extended version of SQuAD was released that includes over 50,000 additional questions where the answer cannot be found in the source passages BID30. While unanswerable questions in SQuAD 2.0 add a significant challenge, the answerable questions are the same (and have the same reasoning complexity) as the questions in the first version of SQuAD. NewsQA BID37 is another dataset that was created using crowdsourcing; it utilizes passages from 10,000 news articles to create questions. Most of the datasets mentioned above are primarily closed world/domain: the answer exists in a given snippet of text that is provided to the system along with the question. On the other hand, in the open domain setting, question-answer datasets are constructed to encompass the whole pipeline for question answering, starting with the retrieval of relevant documents. SearchQA BID14 is an effort to create such a dataset; it contains 140K question-answer (QA) pairs. While the motivation was to create an open domain dataset, SearchQA provides text that contains 'evidence' (a set of annotated search results) and hence falls short of being a complete open domain QA dataset. TriviaQA BID19 is another reading comprehension dataset that contains 650K QA pairs with evidence.

Datasets created from standardized science tests are particularly important because they include questions that require complex reasoning techniques to solve. A survey of the knowledge base requirements for answering questions from early science exams was performed by BID11. The authors concluded that advanced inference methods were necessary for many of the questions, as they could not be answered by simple fact-based retrieval. Partially resulting from that analysis, a number of science-question focused datasets have been released over the past few years. The AI2 Science Questions dataset was introduced by BID9 along with the Aristo Framework, which we build off of. This dataset contains over 1,000 multiple choice questions from state and federal science exams for elementary and middle school students. The SciQ dataset BID45 contains 13,679 crowdsourced multiple choice science questions. To construct this dataset, workers were shown a passage and asked to construct a question along with correct and incorrect answer options. The dataset contains both the source passage as well as the question and answer options.

Query expansion and reformulation -- particularly in the area of information retrieval (IR) -- is well studied BID0. The primary motivation for query expansion and reformulation in IR is that a query may be too short, ambiguous, or ill-formed to retrieve results that are relevant enough to satisfy the information needs of users. In such scenarios, query expansion and reformulation have played a crucial role by generating queries with (possibly) new terms and weights to retrieve relevant results from the IR engine. While there is a long history of research on query expansion BID24, Rocchio's relevance feedback gave it a new beginning BID32. Query expansion has since been applied to many applications, such as Web Personalization, Information Filtering, and Multimedia IR.
In this work, we focus on query expansion as applied to question answering systems, where paraphrase-based approaches using an induced semantic lexicon BID15 and machine translation techniques BID13 have performed well for both structured query generation and answer selection. Open-vocabulary reformulation using reinforcement learning has also been demonstrated to improve performance on factoid-based datasets like SearchQA, though increasing the fluency and variety of reformulated queries remains an ongoing effort BID6.

Retrieving relevant documents/passages is one of the primary components of open domain question answering systems BID40. Errors in this initial module are propagated down the line and have a significant impact on the ultimate accuracy of QA systems. For example, the latest sentence corpus released by AI2 (i.e. the ARC corpus) is estimated to contain the answers to 95% of the questions in the ARC dataset. However, even state-of-the-art systems that are not completely IR-based (but use neural or structured representations) perform only slightly above chance on the Challenge set. This is at least partially due to early errors in passage retrieval. Recent work by BID5 and BID40 has identified improving the retrieval modules as the key component in improving state-of-the-art QA systems.

Our overall pipeline is illustrated in Figure 1 and comprises three modules: the Rewriter reformulates a question into a set of queries; the Retriever uses those queries to obtain relevant passages from a text corpus; and the Resolver uses the question and the retrieved passages to select the final answer(s).

3. http://data.allenai.org/ai2-science-questions

Figure 1: Our overall system architecture. The Rewriter module reformulates a natural-language question into queries by selecting salient terms. The Retriever module executes these queries to obtain a set of relevant passages. Using the passages as evidence, the Resolver module computes entailment probabilities for each answer and applies a decision function to determine the final answer set.

More formally, a pair (Q, A) composed of a question Q with a set of answers a_i ∈ A is passed into the Rewriter module. This module uses a term selector which (optionally) incorporates knowledge in the form of embeddings trained using Knowledge Graphs such as ConceptNet to generate a set of reformulated queries Q = {q_1, ..., q_n}. In our system, as with most other systems for ARC Challenge, for each question Q we generate a set of queries where each query uses the same set of terms with one of the answers a_i ∈ A appended to the end. This set of queries is then passed to the Retriever, which issues the search over a corpus to retrieve a set of k relevant passages per query, creating a set of passages P = {q_1p_1, ..., q_1p_k, q_2p_1, ..., q_np_k} that is passed to the Resolver. The Resolver contains two components: the entailment model and the decision function. We use match-LSTM BID41 trained on SciTail for our entailment model, and for each passage passed in we compute the probability that each answer is entailed from the given passage and question. This information is passed to the decision function, which selects a non-empty set of answers to return. For the Rewriter module, we investigate and evaluate two different approaches to reformulate queries by retaining only their most salient terms: a sequence-to-sequence model similar to BID36, and models based on the recent work by BID47 on Neural Sequence Labeling.
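The following is a minimal sketch of how these three modules fit together, under the assumptions stated above; the helper functions rewrite_question, retrieve_passages and entail_prob are hypothetical stand-ins, not part of the authors' released code.

```python
# Hypothetical end-to-end pipeline: Rewriter -> Retriever -> Resolver.
def answer_question(question, answers, k=30):
    terms = rewrite_question(question)            # Rewriter: keep salient terms
    scores = {}
    for a in answers:
        query = " ".join(terms) + " " + a         # each query is Q-terms + a_i
        passages = retrieve_passages(query, k=k)  # Retriever: top-k passages
        # Resolver: score a_i by its maximum entailment probability over
        # the passages retrieved for its own query.
        scores[a] = max(entail_prob(premise=p, question=question, answer=a)
                        for p in passages)
    return max(scores, key=scores.get)            # decision function: argmax
```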
FIG0 provides examples of the queries that are obtained by selecting terms from the original question using each of the models described in this section. We first consider a simple sequence-to-sequence model, shown in FIG3, that translates a sequence of terms in an input query into a sequence of 0s and 1s of the same length. The input terms are passed to an encoder layer through an embedding layer initialized with pre-trained embeddings (e.g., GloVe BID28). The outputs of the encoder layer are decoded, using an attention mechanism BID1, into the resulting sequence of 0s and 1s, which is used as a mask to select the most salient terms in the input query. Both the encoder and decoder layers are implemented with a single hidden bidirectional GRU layer (h = 128).

Figure 2: Seq2Seq query reformulation model. A sequence of terms from the original query is translated into a sequence of 0s and 1s which serves as a mask used to select the most salient terms.

Our second approach to identifying salient terms comprises four models implemented with the NCRF++ sequence-labeling toolkit of BID47. Our basic NCRF++ model uses a bi-directional LSTM with a single hidden layer (h = 200), where the input at each token is its 300-dimensional pre-trained GloVe embedding BID28. Additional models incorporate knowledge in the form of graph embeddings derived from the ConceptNet knowledge base BID34, using three knowledge graph embedding approaches: TransH BID44, ComplEx BID38, and the PPMI embeddings released with ConceptNet BID34. Entities are linked with the text by matching their surface form with phrases of up to three words. For each token in the question, we concatenate its word embedding with a 10-dimensional vector indicating whether the token is part of the surface form of a ConceptNet entity. We then append either the 300-dimensional vector corresponding to the embedding of that entity in ConceptNet, or a single randomly initialized UNK vector when a token is not linked to an entity. The final prediction is performed left-to-right using a CRF layer that takes into account the preceding label. We train the models for 50 iterations using SGD with a learning rate of 0.015 and learning rate decay of 0.05.

Before integrating the rewriter module into our overall system (Figure 1), the two rewriter models (seq2seq and NCRF++) are first trained and tested on the Essential Terms dataset introduced by BID22. This dataset consists of 2,223 crowd-sourced questions. Each word in a question is annotated with a numerical rating on the scale 1-5 that indicates the importance of the word. Table 1 presents the results of our models evaluated on the Essential Terms dataset, along with those of two state-of-the-art systems: ET Classifier BID22 and ET Net BID26. ET Classifier trains an SVM using over 120 features based on the dependency parse, semantic features of the sentences, cluster representations of the words, and properties of question words.

4. https://github.com/jiesutd/NCRFpp
5. https://github.com/allenai/essential-terms

Table 1: Accuracy (Acc), precision (Pr), recall (Re) and F1 on the Essential Terms dataset for ET Classifier BID22, ET Net and our models. We follow ET Net in using a random 80/10/10 train/dev/test split, performed after filtering out questions that appear in the ARC dev/test sets.

While ET Classifier was evaluated using a 79/9/21 train/dev/test split, we follow BID26 in using an 80/10/10 split and remove questions from the Essential Terms dataset that appear in the ARC dev/test partitions.
The key insights from this experimental evaluation are as follows:
• NCRF++ significantly outperforms the seq2seq model with respect to all evaluation metrics (see the results with GloVe 840B.300d).
• NCRF++ is competitive with respect to ET Net and ET Classifier (without the heavy feature engineering of the latter system). It has significantly better accuracy and recall than ET Classifier, although its F1-score is 3% inferior. When used with ComplEx graph embeddings BID38, it has the same precision as ET Net, but its F1-score is 4% less.
• Finally, while the results in Table 1 do not seem to support the need for using ConceptNet embeddings, we will see in the next section that, on the ARC Challenge dev set, incorporating outside knowledge significantly increases the quality of passages that are available for downstream reasoning.

Retrieving and providing high quality passages to the Resolver module is an important step in ensuring the accuracy of the system. In our system, a set of queries Q = {q_1, ..., q_n} is sent to the Retriever, which then passes these queries, along with a number of passages, to the Resolver module. We use Elasticsearch BID17, a state-of-the-art text indexing system. We index the ARC Corpus that is provided as part of the ARC Dataset. Its creators claim that this 14M-sentence corpus covers 95% of the questions in the ARC Challenge, while Boratko et al. [2018a,b] observe that the ARC corpus contains many relevant facts that are useful for solving the annotated questions from the ARC training set. An important direction for future work is augmenting the corpus with other search indices and sources of knowledge from which passages can be retrieved.

Given the retrieved passages, the system still needs to select a particular answer out of the answer set A. In our system we divide this process into two components: the entailment module and the decision rule. In previous systems, both of these components have been wrapped into one. Separating them allows us to study each of them individually and make more informed design choices. While reading comprehension models like BiDAF have been adapted to the multiple-choice QA task by selecting a span in the passage obtained by concatenating several IR results into a larger passage, recent high-scoring systems on the ARC Leaderboard have relied on textual entailment models. In the approach pioneered by earlier entailment-based ARC systems, a multiple choice question is converted into an entailment problem wherein each IR result is a premise. The question is turned into a fill-in-the-blank statement using a set of handcrafted heuristics (e.g. replacing wh-words). For each candidate answer, a hypothesis is generated by inserting the answer into the blank, and the model's probability that the premise entails this hypothesis becomes the answer's score.

We use match-LSTM BID41 trained on SciTail as our textual entailment model. We chose match-LSTM because (a) multiple reading comprehension techniques have used match-LSTM as an important module in their overall architecture, and (b) match-LSTM models trained on SciTail achieve an accuracy of 84% on test (88% on dev), outperforming other recent entailment models such as DeIsTe and DGEM. Match-LSTM consists of an attention layer and a matching layer. Given a premise P = (t^p_1, t^p_2, ..., t^p_K) and a hypothesis H = (t^h_1, t^h_2, ..., t^h_N), where t^p_i and t^h_j are the embedding vectors of the corresponding words in the premise and hypothesis, a contextual representation of the premise and hypothesis is generated by encoding their embedding vectors using bi-directional LSTMs.
Let p_i and h_j be the contextual representations of the i-th word in the premise and the j-th word in the hypothesis, computed using BiLSTMs over their embedding vectors. An attention mechanism is then used to determine the attention-weighted representation of the j-th word in the hypothesis as follows:

ā_j = Σ_{i=1..K} α_ij · p_i,  with  α_ij = exp(e_ij) / Σ_{k=1..K} exp(e_kj),

where e_ij = p_i · h_j. The matcher layer is an LSTM(m) whose input at position j is m_j = [ā_j; h_j] (where [·;·] is the concatenation operator). Finally, max-pooling over the hidden states {h^m_j}_{j=1:N} of the matcher is used for softmax classification.

In the initial study of the ARC Dataset, the authors converted many existing question answering and entailment systems to work with the particular format of the ARC dataset. One of the choices made during this conversion was deciding how the outputs of the entailment systems, which consist of a probability that a given hypothesis is entailed from a premise, are aggregated to arrive at a final answer selection. The rule used, which we call the AI2 Rule for comparison, is to take the top-8 passages by Elasticsearch score after pooling all queries for a given question. Each one of these queries has a specific a_i associated with it, owing to the fact that all queries are of the format Q + a_i. For each of these top-8 passages, the entailment score of a_i is recorded, and the top entailment score is used to select an answer. In our system we decided to make this decision rule not part of the particular entailment system, but rather a completely separate module. The entailment system is responsible for measuring the entailment of each answer option for each of the retrieved passages and passing this information to the decision rule. One can compute a number of statistics and filters over this information and then arrive at one or more answer selections for the overall system. In addition to the AI2 Rule described above, we also experiment with filtering by Elasticsearch score per individual Q + a_i query (rather than pooling scores across queries). Referred to in the next section as the MaxEntail Top-k rule, this decision function selects the answer(s) that have the greatest entailment probability when considering the top-k passages retained per query.

Our goal in this section is to evaluate the effect that our query reformulation and filtering techniques have on overcoming the IR bottleneck in the open-domain question answering pipeline. In order to isolate those effects on the Retriever module, it is imperative to avoid overfitting to the ARC Challenge training set. Thus, for all experiments the Resolver module uses the same match-LSTM model trained on SciTail as described in Section 5.1. In the Rewriter module, all query reformulation models are trained on the same Essential Terms data described in Section 3.1 and differ only in architecture (seq2seq vs. NCRF++) and in the embedding technique used to encode knowledge. Finally, we tune the hyperparameters of our decision rules (i.e. the number of Elasticsearch results considered by the Resolver module) on the ARC Challenge dev set. Our results on the dev set for 12 of our models and two different decision rules are summarized in FIG1 and FIG2. The final results for the test set are provided in Table 2.

6. Note that combining Elasticsearch scores across passages is not typically considered a good idea; according to the Elasticsearch Best Practices FAQ, "... the only purpose of the relevance score is to sort the results of the current query in the correct order. You should not try to compare the relevance scores from different queries."
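As a concrete illustration, here is a minimal sketch of the MaxEntail Top-k rule under the assumptions above; ranked is a hypothetical mapping from each answer to its (retrieval_score, entailment_probability) pairs, already sorted by retrieval score.

```python
# MaxEntail Top-k: filter to the top-k passages per answer's own query,
# then pick the answer(s) with the greatest entailment probability.
def max_entail_top_k(ranked, k):
    best = {}
    for answer, passages in ranked.items():
        top_k = passages[:k]                         # per-query filter, not pooled
        best[answer] = max(prob for _, prob in top_k)
    top = max(best.values())
    return {a for a, p in best.items() if p == top}  # may return several answers
```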
We first consider the important question of how many passages to investigate per query: we can compare and contrast FIG1 (AI2 Rule) and FIG2 (max entailment of top-k per answer), which vary the number of passages k that are considered. The most obvious difference is that the results show that max entailment of top-k is strictly a better rule overall, for both the original and split hypotheses. In addition to the overall score, keeping the top-k per answer results in a smoother curve that is more amenable to calibration on the dev partition. Comparing sub-figures (a) and (b) in FIG1, we find more empirical support for our decision to investigate splitting the hypothesis. The questions in the Challenge and Easy sets average 21.8 vs. 19.1 words in length, respectively; for the answers, the lengths are 4.9 versus 3.8 words. One possible cause for poor performance on ARC Challenge is that entailment models are easily confused by very long, story-based questions. Working off the annotations of Boratko et al. [2018a,b], many of the questions of type "Question Logic" are of this nature. To address this, we "split" multi-sentence questions by (a) forming the hypothesis from only the final sentence and (b) pre-pending the earlier sentences to each premise. Comparing across the figures we see that, in general, the modified hypothesis splitting leads to a small improvement in scores.

We also see the effect of including knowledge via ConceptNet embeddings on the performance of the downstream reasoning task; this is particularly evident in FIG2. All of the rewritten queries are superior to using the original question. Additionally, in both FIG2 (a) and (b), the ComplEx and PPMI embeddings perform better than the base rewriter. This is a strong indication that using the knowledge in specific ways can aid downstream reasoning tasks; this is contrary to the results of BID25.

Considering the results on the dev set, we use the test set to evaluate the following decision rules for all of our systems: the AI2 Rule, Top-2 per query, and Top-30 per query. We selected Top-2 as it is the closest analog to the AI2 Rule, and Top-30 because there is a clear and long peak in our initial testing on the dev set (per FIG2). The results of our run on the test set can be found in Table 2. For the most direct comparison between the two methods (i.e., without splitting), all models using the Top-2 rule outperform the AI2 rule at at least 99.9% confidence using a paired t-test. We note that for the dev set, the split treatments nearly uniformly dominate the non-split treatments, while for the test set this is almost completely reversed (except for Original Question and PPMI, for which splitting outperforms non-splitting at 95% confidence). Perhaps more surprisingly, the more sophisticated ConceptNet embeddings are almost uniformly better on the dev set, while on the test set they are nearly uniformly worse. For context, we also provide the state of the ARC leaderboard at the time of submission, with the addition of our top-performing system, in Table 3.

System | Challenge (%) | Easy (%)
Reading Strategies BID35 | 42.32 | 68.9
ET-RR BID26 | 36.36 | -
BiLSTM Max-Out BID25 | 33.87 | -
TriAN + f(dir)(cs) + f(ind)(cs) BID51 | 33.39 | -
NCRF++/match-LSTM (ours) | 33.20 | 52.22
KG^2 | 31.70 | -
DGEM | 27.11 | 58.97
TableILP BID21 | 26.97 | 36.15
BiDAF | 26.54 | 50.11
DecompAttn BID27 | 24.34 | 58.27

Table 3: Comparison of our system with state-of-the-art systems for the ARC dataset. Numbers taken from the ARC Leaderboard as of Nov. 18, 2018.
Of the systems above ours on the leaderboard, only BID26 report their accuracy on both the dev set (43.29%) and the test set (36.36%). We suffer a similar loss in performance, from 36.37% to 33.20%, demonstrating the risk of overfitting to a (relatively small) development set in the multiple-choice setting, even when a model has few learnable parameters. As in this paper, BID26 pursue the approach suggested by Boratko et al. [2018a,b] in learning how to transform a natural-language question into a query for which an IR system can return a higher-quality selection of results. Both of these systems use entailment models similar to our match-LSTM BID41 model, but also incorporate additional co-attention between questions, candidate answers, and the retrieved evidence. BID35 present an encouraging result for combating the IR bottleneck in open-domain QA. By concatenating the top-50 results of a single (joint) query and feeding the results into a neural reader optimized by several lightly-supervised 'reading strategies', they achieve an accuracy of 37.4% on the test set even without optimizing for single-answer selection. Integrating this approach with our query rewriting module is left for future work.

In this paper, we present a system that answers science exam questions by retrieving supporting evidence from a large, noisy corpus on the basis of keywords extracted from the original query. By combining query rewriting, knowledge, and textual entailment, our system is able to outperform several strong baselines on the ARC dataset. Our rewriter is able to incorporate knowledge from ConceptNet and -- in tandem with a generic entailment model trained on SciTail -- achieves near state-of-the-art performance on the end-to-end QA task despite only being trained to identify essential terms in the original source question. There are a number of key takeaways from our work: first, researchers should be aware of the impact that Elasticsearch (or a similar tool) can have on the performance of their models. Answer candidates should not be discarded based on the relevance score of their top results; while (correct) answers are likely critical to retrieving relevant results, the original AI2 Rule is too aggressive in pruning candidates. Using an entailment model that is capable of leveraging knowledge in a more principled way would likely help in filtering unproductive search results. Second, our results corroborate those of BID26 and show that tuning to the dev partition of the Challenge set (299 questions) is extremely sensitive. Though we are unable to speculate on whether this is an artifact of the dataset or a more fundamental concern in multiple-choice QA, it is an important consideration for generating significant and reproducible improvements on the ARC dataset.
We explore how using background knowledge with query reformulation can help retrieve better supporting evidence when answering multiple-choice science questions.
1,424
scitldr
Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline. We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet, that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.

Deep CNNs have achieved state-of-the-art results on a wide range of tasks, from image understanding to natural language processing. However, these network architectures are often highly over-parameterised, and thus require the supervision of a large number of input-output mappings and significant training time to adapt their parameters to any given task. Recent studies have discovered several different redundancies in these network architectures (Hubara et al., 2018, inter alia) and certain simplicities (Pérez et al., 2018) in the functions that they implement. For instance, it has been shown that a large classification network can be distilled down to a small sub-network that, owing to its lucky initialisation, is trainable in isolation without compromising the original classification accuracy. Others have observed that deep classification networks learn simplistic non-linearities for class identification, a fact that might well underlie their adversarial vulnerability, whilst challenging the need for complex architectures. Attempts at knowledge distillation have regularly demonstrated that it is possible to train small student architectures to mimic larger teacher networks by using ancillary information extracted from the latter, such as their attention patterns, predicted soft-target distributions, or other kinds of meta-data. These works and others continue to expose the high level of parameter redundancy in deep CNNs, and comprise a foundational body of work towards studying and simplifying networks for safe and practical use. Our paper experiments with yet another scheme for simplifying CNNs, in the hope that it will not only shrink the effective footprint of these networks, but also open up new pathways for network understanding and redesign. In particular, we propose the use of a common set of convolutional filters at different levels of a convolutional hierarchy to achieve class disentanglement. Mathematically, we formulate a classification CNN as an iterative function in which a small set of learned convolutional mappings are applied repeatedly as different layers of a CNN pipeline (see Figure 1).
In doing so, we are able to reduce the parameter count of the network by a factor proportional to its depth, whilst leaving its accuracy largely unaffected.

Figure 2: Accuracy vs. compression trade-off for different widths n of the shared convolutional layer, compared to the baseline VGGNet, for CIFAR-10 (a) and CIFAR-100 (b). The compression factor is plotted on a logarithmic scale.

We also investigate the introduction of non-shared linear layers before certain shared convolutional layers to enhance the flexibility of the model by allowing it to linearly combine shared filter maps for the disentanglement task. This work is partly inspired by the classic literature on image processing that has long sought to characterise natural images by collating their responses, at different image scales, to a small, canonical set of hand-crafted visual operators. Modern CNN architectures effectively still implement hierarchical feature extraction, but with the difference that there are thousands of such operators (i.e. convolutional filters) at each scale level, all of which are individually adaptable and learned via backpropagation. Our work can thus be seen as an effort to reconcile the above two non-contemporaneous approaches to image processing, in which we aim to identify a common set of visual operators for all the different scales by learning them in an end-to-end manner. Our approach bears some high-level resemblance to previous approaches that have attempted to implement, interpret and potentially improve convolutional neural networks through an iterative use of simpler modules. For example, one prior approach shares convolutional mappings in ResNets in an attempt to approximate biological visual systems using feedback loops and recurrence, although its experimental analysis is limited to the CIFAR dataset. By contrast, our work applies the convolution-sharing paradigm to both plain feed-forward and residual constructs, and investigates the effectiveness of using only a single shared convolutional mapping for the entire network pipeline. An additional contribution of our approach is the flexibility we add to the model by coupling learned linear layers with shared convolutions while still limiting the total parameter count. Experimentally, we evaluate the accuracy vs. model size trade-off induced by our approach on a realistic set of datasets that include Tiny ImageNet and ImageNet.

A steady increase in the size of datasets and the availability of computational resources has enabled neural networks to grow deeper, denser and wider. In doing so, concerns regarding their over-parameterisation have often been ignored in favour of better test set generalisation. More recently, as their performance on some benchmarks has reached near-human levels, real-world deployment of these models is being considered. This deployment has been hugely impeded by the memory requirements, latency and energy demands of their heavy computational machinery. Our approach contributes to the (extensive) literature on network compression that is focused on making these machine learning models more usable in practical scenarios. Existing compression methods can be divided into seven categories: pruning, quantisation, tensorization/tensor decomposition, knowledge distillation, custom architectures, sharing-based methods and hybrid methods. Many of these works are beyond the scope of this paper, but for completeness we present a brief review in §A.1 (a more exhaustive survey can be found in the literature).
Our own work falls within the realm of sharing-based methods that seek to equate some of a network's weights or filters to reduce the number of independent parameters in the network. There are various ways of deciding which weights/filters to share, from somewhat arbitrary (if effective) approaches such as the hashing trick, to more principled approaches such as k-means clustering. A few recent works have turned their attention to sharing convolutional weight matrices in a more structured manner. Of these, LegoNet (b) shares filter groups across sets of channels, whilst FSNet (a) shares filter weights across spatial locations. In both cases, sharing is restricted to a single layer at a time. ShaResNet reuses convolutional mappings, but within the same scale level (i.e. between two max-pooling steps). The novelty of our work lies in extending this filter-sharing paradigm to an entire convolutional pipeline. We instantiate a single convolutional layer that is applied iteratively to mimic a deep convolutional feature extractor, and analyse the accuracy vs. memory trade-off for different widths of this layer.

A standard feed-forward classification CNN can be formulated as

F = C ∘ F_conv = C ∘ (R_L ∘ f_L) ∘ ⋯ ∘ (R_1 ∘ f_1),   (1)

where the overall function F is a composition of the convolutional feature extractor F_conv followed by a fully-connected classifier C. The convolutional sub-model F_conv consists of a sequence of convolutional layers, interspersed with non-linearities (ReLUs, Max-Pooling) or regularisers (dropout, BatchNorm) or some combination thereof, denoted by R_i. The function performed by each convolutional layer f_i is completely specified by a set of weights and biases that we denote using W_i. Crucially, the weights and biases for each different layer are independent. The number of parameters in layer f_i is then simply the size of W_i, calculated as

|W_i| = v_i + n_i^out,  where v_i = n_i^in · n_i^out · k_i²,   (2)

in which n_i^in is the number of input channels to f_i, n_i^out is the number of output channels, v_i is the volume of f_i, and k_i is the size of its (square) convolutional filters. In practice, the n_i^out term for the biases is dominated by that for the weights, and so we disregard it in what follows. Letting W_conv = ∪_{i=1..L} W_i denote all the parameters in F_conv (i.e. disregarding the comparatively small contributions from the non-convolutional layers), the total parameter count is given by

|W_conv| ≈ Σ_{i=1..L} v_i = Σ_{i=1..L} n_i^in · n_i^out · k_i².   (3)

Note that for many common architectures, there exists some k such that ∀i, k_i = k (e.g. for VGGNet, k = 3). For such architectures, Equation 3 can then be further simplified to

|W_conv| ≈ L · v̄,   (4)

where v̄ = (1/L) Σ_{i=1..L} v_i is the mean volume per network layer. Our method proposes a crude simplification to such architectures, namely to instantiate a single convolutional layer f, and apply it L successive times in order to implement a convolutional pipeline of equivalent depth to the original model. In particular, we enforce the following constraint:

∀i ∈ {1, ..., L}: f_i = f (and hence W_i = W).   (5)

This simplifies the CNN architecture in Equation 1 to

F̃ = C ∘ F̃_conv, with F̃_conv = (R_L ∘ f) ∘ ⋯ ∘ (R_1 ∘ f).   (6)

Whilst our analysis focuses purely on the convolutional layers, it is interesting to note that when the R_i layers are all the same (R_i = R), the CNN architecture simplifies further to the following iterative form:

F̃_conv = (R ∘ f)^L.   (7)

The convolutional layer f in our architecture expects an input tensor with a predetermined number of channels, which we will call n. Meanwhile, the R_i layers between the convolutional layers leave the number of channels unchanged. Thus, given the iterative application of f, the layer f must also output a tensor with n channels.
(In practice, f is called for the first time on the input image itself, which for colour images would normally only have 3 channels. To avoid artificially limiting n to 3, we pad the input image with empty channels to produce a tensor with n channels.) We deduce that |W|, the number of parameters for f, must satisfy

|W| = v + n,   (8)

where v = n²k² is the volume of f. Furthermore, since W is shared between all L convolutional layers, the total number of independent parameters in F̃_conv must also just be |W|. The compression factor between the original architecture and its shared counterpart can thus be quantified as

C = |W_conv| / |W| ≈ (L · v̄) / v.   (9)

This is proportional to the depth L of the original network, and is down-weighted by any (multiplicative) increase in the average per-layer volume in going from the original to the shared architecture.

We now turn to examine the convolutional operation in our architecture. Each layer f, the operation of which is completely specified by the weights and biases in W, takes an input tensor X of size n × h × w, where n, h and w denote the number of channels, height and width respectively. Based on X and W, we can conceptually define 2D matrices Φ(X) and Γ(W) as follows: Φ(X) is an m × (nk² + 1) matrix whose i-th row is the concatenation [x_i1, x_i2, ..., x_in, 1], and Γ(W) is an (nk² + 1) × n matrix whose j-th column stacks [w_1j, w_2j, ..., w_nj, b_j]. In this, m = h × w, and each x_ij is a rasterisation of a k × k patch of the input tensor centred at spatial location i in channel j. Each w_ij is a similar rasterisation of the k × k convolutional kernel that maps the input channel i ∈ {1, 2, ..., n} to the output channel j ∈ {1, 2, ..., n}, and each b_j is the bias for output channel j. Then f can be defined concisely as f(X) = Ψ(Φ(X) × Γ(W)), in which Ψ reshapes the m × n tensor Φ(X) × Γ(W) back to one of size n × h × w.

In practice, this simple formulation could be seen as being too restrictive, in the sense that, irrespective of the convolutional iteration, each filter w_ij in Γ(W) only ever operates on patches from input channel i (for example, the w_1j filters only ever operate on patches from channel 1). For this reason, we decided to investigate whether adding a way of allowing the input channels to be reorganised at various points in the overall pipeline would improve performance. In principle, one way of achieving this would be to add n × n permutation matrices at appropriate points in the pipeline, e.g. just before each pooling operation. In practice, however, to make the operations differentiable, we implement them using linear layers (i.e. 1 × 1 convolutions), thus implementing blending of the input channels rather than simply permuting them. The weights of these layers are separate for each instantiation and are learned as part of the end-to-end pipeline. It would be reasonable to expect this added flexibility to yield a significant increase in performance, and indeed our results in §5 show this to be the case. Nevertheless, it is notable that even without this added flexibility, our shared architectures already achieve extremely good performance on the datasets on which we tested, demonstrating that our underlying approach of sharing filters between layers makes sense even in the absence of permutation/blending.

We evaluate our filter-sharing approach on four well-known image classification benchmarks: CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet. Details of these datasets can be found in §A.2. For this study, we work with two different architectures, one closely inspired by VGGNet, and the other by ResNet. VGGNet-like Architectures. We base our VGGNet-like architectures on VGG-16, which consists of 5 convolutional blocks followed by 3 linear layers.
Each block is followed by a max-pooling step and contains several convolutional layers with different channel counts (in order: 2 layers with 64 channels, 2 with 128, 3 with 256, 3 with 512 and 3 with 512 channels). By contrast, in our case, we define a single convolutional layer with a fixed number of input and output channels n, and then use it repeatedly in the same arrangement as above (see Table 3 in §A.3 for more details). We define four variants of this convolutional feature extractor for our study. E-VGGNet is our equivalent of VGGNet, with n channels per layer and no sharing between the layers: we use this as a baseline. Its shared counterpart, S-VGGNet, has the same structure, but iteratively applies a single convolutional layer. SL-VGGNet is an extended version of S-VGGNet that introduces linear layers (i.e. 1 × 1 convolutions) before each max-pooling operation to allow the input channels to be blended at those points in the pipeline. Finally, since all the convolutional layers in SL-VGGNet are the same (excluding what we call the linear layers), we define a further variant of our architecture that simplifies the network design by setting the number of layers per block to a scalar ℓ. We experiment with ℓ ∈ {2, 3}, and name the corresponding networks SLℓ-VGGNet. Note that the predetermined number of channels n is a parameter of our architecture: we test several variants to find the best ones. We perform experiments on CIFAR-10/100 and Tiny ImageNet. For CIFAR-10, the 3 fully-connected layers that follow the feature extractor have 512, 512 and 10 output channels, respectively. For CIFAR-100, we use the same VGGNet-like architectures as for CIFAR-10, but the fully-connected layers have 1024, 1024 and 100 output channels, respectively. For Tiny ImageNet, we use a sequence of two fully-connected layers, with 2048 and 200 output channels respectively.

ResNet-like Architectures. We base our ResNet-like architectures on the models proposed in the original ResNet paper. The simpler variants of these are built using 'basic' blocks that essentially consist of two equally-sized 3 × 3 convolutional layers and a skip connection (see Fig. 6). The deeper variants, meanwhile, are built using 'bottleneck' blocks, which similarly have a skip connection, but sandwich a single 3 × 3 convolutional layer between two 1 × 1 convolutional layers that decrease and then restore the number of channels to limit the number of free parameters. The network pipeline begins with a standalone convolutional layer that outputs a predetermined number of channels p. This is followed by a sequence of b blocks at a number of different scale levels (generally 4, but 3 for CIFAR variants). In the original architectures, each scale level (except the first) began with a strided convolutional layer that downsampled the image and doubled the number of channels. Since we want the convolutional layers in our architectures to have the same numbers of input and output channels (to facilitate sharing), we define an equivalent architecture, E-ResNet, that instead doubles the number of channels and performs downsampling using (respectively) a linear layer (i.e. 1 × 1 convolutions) and a max-pooling step at the end of each scale level. Note that, as in the original ResNet, the final scale level in our architecture ends with average pooling rather than max-pooling. Despite these modifications, the predictive performances of our E-ResNets closely match those of the original architectures.
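Below is a minimal PyTorch sketch of this weight-sharing scheme, assuming an SL-VGGNet-style layout (the block sizes, BatchNorm placement and channel padding follow the description above, but this is our illustrative reading, not the authors' released code).

```python
import torch
import torch.nn as nn

class SharedConvExtractor(nn.Module):
    def __init__(self, n=256, blocks=(2, 2, 3, 3, 3)):
        super().__init__()
        self.shared = nn.Conv2d(n, n, kernel_size=3, padding=1)  # the single f
        depth = sum(blocks)
        # BatchNorm parameters are NOT shared, so one instance per layer.
        self.bns = nn.ModuleList(nn.BatchNorm2d(n) for _ in range(depth))
        # One non-shared 1x1 'blending' layer before each max-pooling step.
        self.blends = nn.ModuleList(nn.Conv2d(n, n, 1) for _ in blocks)
        self.blocks = blocks
        self.n = n

    def forward(self, x):
        # Pad a 3-channel image with empty channels so it has n channels.
        x = nn.functional.pad(x, (0, 0, 0, 0, 0, self.n - x.shape[1]))
        i = 0
        for b, blend in zip(self.blocks, self.blends):
            for _ in range(b):
                x = torch.relu(self.bns[i](self.shared(x)))  # reuse f each time
                i += 1
            x = nn.functional.max_pool2d(blend(x), 2)
        return x
```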
The shared variant of this architecture uses n channels for all scale levels and shares the weights across all the convolutional layers (excluding the linear layers). Since the architecture already contains the linear layers we were previously adding to allow blending of the input channels, we refer to it as SL-ResNet. For CIFAR-10/100, the standalone convolutional layer uses a kernel size of 3 × 3, and a p of 16 and 32 for each dataset, respectively. We experiment with b ∈ {3, 5, 7} 'basic' blocks per scale level, and terminate the network with a 10-way linear classifier for CIFAR-10 and a 100-way classifier for CIFAR-100. See Table 4 in §A.3 for details. For Tiny ImageNet and ImageNet, we base our ResNet-like architectures on ResNet-34 and ResNet-50. ResNet-34 is built using 'basic' blocks, whilst ResNet-50 uses 'bottleneck' blocks. For the latter, it is clearly not possible to share filters between the layers within a block, since they are of different dimensions, so we instead use multiple shared copies of a single block. Note that the shared variants of both these models, SL-ResNet-34/50, keep the standalone convolutional layer unshared, since its kernel size is adjusted according to the dataset (3 × 3 for Tiny ImageNet and 7 × 7 for ImageNet). See Table 5 in §A.3 for details.

Earlier, Fig. 2 showed the accuracy vs. compression trade-off for S-VGGNet, relative to the original VGGNet, for different widths n of the shared convolutional layer. Here, Fig. 3 illustrates the improvements in accuracy due to the learned linear layers (i.e. the blending layers) on CIFAR-10, CIFAR-100 and Tiny ImageNet.

Figure 3: Improvements in accuracy due to the learned linear layers; the compression factor C is plotted on a log scale.

Table 1: Comparison with the baseline architecture and (for CIFAR-10) variants of LegoNet (b), another state-of-the-art compression method. Baseline models marked with a * were retrained for this study.

Observably, the use of the linear layers provides greater benefit for datasets that involve discriminating between a larger number of classes, such as CIFAR-100 and Tiny ImageNet. For CIFAR-10, CIFAR-100 and Tiny ImageNet, we compare the accuracies of the best-performing 'SL' variants of VGGNet with those of the baseline architecture (and competing compression methods for these datasets, where available) in Table 1. For CIFAR-10 (see Table 1b), we are able to achieve comparable classification accuracy to the VGGNet baseline using only n = 256 channels for our shared convolutional layer, which yields a compression factor of ≈ 17×. For CIFAR-100 (Table 1c), which has 10× more classes, we had to use n = 512 channels to achieve comparable accuracy, but this still yields a significant compression factor of 4.3. Higher compression factors can be achieved by reducing the number of channels, in exchange for some loss in accuracy. Evaluating our shared architecture on Tiny ImageNet (in Table 1d) evidences a similar trend in the results, with SL2-VGGNet (n = 512 channels) achieving an accuracy comparable to the non-shared baseline, whilst using only 23% of its parameters. Detailed accuracy and memory usage numbers for E-VGGNet, S-VGGNet and SL-VGGNet, for CIFAR-10, are in Table 1a, while the results for CIFAR-100 and Tiny ImageNet can be found in the appendix (see Table 6 in §A.5). We also evaluate our shared ResNet architecture (SL-ResNet) on Tiny ImageNet and ImageNet, with the results shown in Table 2 (the corresponding results for CIFAR-10 and CIFAR-100 can be found in the appendix, see Table 7 in §A.5).
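To make Equation 9 concrete, the following sketch re-derives the approximate compression factor for the n = 256 SL-VGGNet discussed above; the VGG-16 channel layout is taken from the architecture description, biases and classifier layers are ignored, and the exact figure depends on what is counted.

```python
# A crude re-derivation of the compression factor C from Equations 8-9.
K = 3
VGG16_CHANNELS = [3] + [64]*2 + [128]*2 + [256]*3 + [512]*3 + [512]*3

def conv_volume(channels, k=K):
    # Sum of per-layer volumes v_i = n_in * n_out * k^2.
    return sum(a * b * k * k for a, b in zip(channels[:-1], channels[1:]))

def shared_params(n, num_blends=5, k=K):
    # One shared conv (v = n^2 k^2) plus one 1x1 blending layer per block.
    return n * n * k * k + num_blends * n * n

C = conv_volume(VGG16_CHANNELS) / shared_params(256)
print(f"C ~= {C:.1f}x")  # lands near the ~17x figure quoted in the text
```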
For Tiny ImageNet, our SL-ResNet-34 (n = 512) variant is able to achieve a compression rate of 8.4 with only a negligible loss in accuracy. For ImageNet, the same variant similarly achieves a compression rate of 8. Although this comes with an accuracy trade-off, we achieve a greater compression rate than competing methods that achieve similar accuracies. Note that SWRN is able to achieve state-of-the-art levels of accuracy, but does not provide savings in the number of parameters.

Table 2: Comparison with LegoNet (b), FSNet (a), Shared Wide ResNet (SWRN) and other baselines. Baseline models marked with a * were retrained for this study.

Figure 4: A visual depiction of the linear layers used to blend the input channels in our approach. We show the layers for the two variants in the order (left to right) in which they appear in the networks. For each layer, the input channels are ordered along the x-axis, and the output channels along the y-axis. For each output channel (row), we highlight the lowest 32 weights (in terms of absolute value) in blue, and the highest 32 in red.

Visualising the weights of the blending layers that we learn for the SL variants of our approach reveals interesting patterns in the way in which these layers blend (or use) the input channels (see Fig. 4). For each layer, the continuous blue vertical lines signify that a subset of the input feature maps are barely used by any of the output channels, thus effectively suppressing the information they carry. (Interestingly, the location of the vertical blue lines changes from one scale to the next, thus showing that different subsets of input channels go unused at different scales.) This is significant, because it implies that the weights associated with the unused channels can be selectively pruned without affecting performance. Our next experiment, with a magnitude-based pruning method, shows how we can exploit this observation to significantly reduce the size of our shared networks.

Our best-performing SL variants have a relatively small number of parameters in the convolutional layers, but a relatively high number of parameters in the linear layers. Tables 2a and 2b show how the parameter count for these variants increases with the number of channels n and the depth (34 to 50). Notably, using bottleneck blocks, as we do for our SL-ResNet-50 variants, also significantly increases the parameter count. As implied by our visualisations in the previous section, we would expect serious reductions in the number of parameters in the linear layers to be possible without significantly reducing accuracy. We thus experiment with applying magnitude-based weight pruning to the linear layers to see whether this expectation is borne out in practice. We first select a proportion of the parameters to prune, then identify the weights that have the lowest absolute magnitude and set them to 0 (a sketch of this procedure follows below). We then evaluate on the validation split of the dataset. Note that we do not retrain the network after pruning. Our results (see Figure 5) show that we can remove a significant fraction of these blending weights before starting to see a noticeable drop in the accuracy of the network.
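A minimal sketch of this pruning step, assuming a PyTorch 1 × 1 blending layer; this mirrors the described procedure (zero the smallest-magnitude weights, no retraining), not any specific released implementation.

```python
import torch

def prune_by_magnitude(layer: torch.nn.Conv2d, fraction: float) -> None:
    w = layer.weight.data
    k = int(fraction * w.numel())   # number of weights to zero out
    if k == 0:
        return
    # Threshold = k-th smallest absolute value; zero everything at or below it.
    threshold = w.abs().flatten().kthvalue(k).values
    w[w.abs() <= threshold] = 0.0
```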
Figure 5: Analysing the effects of pruning on one of our largest models, SL-ResNet-50 (n = 512), trained on Tiny ImageNet. We iteratively zero out an increasing fraction of the linear layer parameters, starting from those having the smallest absolute value. The accuracy of the network stays constant even when 60% of the parameters are pruned, at which point the compression rate (in comparison to the non-shared baseline with equivalent performance) has increased from 1.4 to 2.6.

In this paper, we leverage the regularities in visual signals across different scale levels to successfully extend the filter-sharing paradigm to an entire convolutional pipeline for feature extraction. In particular, we instantiate a single convolutional layer and apply it iteratively to simulate conventional VGGNet-like and ResNet-like architectures. We evaluate our shared architectures on four standard benchmarks -- CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet -- and achieve compression rates that are higher than those of existing sharing-based methods with equivalent performance. We further show that even higher compression rates, with little additional loss in performance, can be achieved by combining our method with magnitude-based weight pruning. Study of our complementarity to more structured pruning techniques targeting complete filters and channels is reserved for future work. We conclude with two final observations. Firstly, our use of blending layers and a parameter to tune the width n of the shared convolutional layer makes it easy to adjust the architecture so as to achieve a desired trade-off between compression rate C and accuracy. Secondly, there are interesting connections between our work and the idea of energy-based pruning explored in prior work, where it has been noted that a significant fraction of the energy demands of deep network processing come from transferring weights to and from the file system. Our approach bypasses this bottleneck by using the same compact set of weights in an iterative manner. We aim to further investigate this aspect of our method in subsequent work.

A APPENDIX

A.1 NETWORK COMPRESSION METHODS

Pruning methods seek to reduce the size of a network by removing (either physically or implicitly) some of a network's weights, filters or neurons. Notably, reducing the computational cost (rather than just the memory usage) of network architectures that are pruned in an unstructured manner requires the use of suitable sparse inference schemes. Quantisation methods keep the number of independent parameters in a network the same, but reduce the bit-depth of the parameters and activations (Hubara et al., 2016; 2018) to limit the memory requirements of the network. Tensorization/tensor decomposition methods propose low-rank approximations to high-dimensional neural matrices in order to downsize trained models. Early CNN architectures such as AlexNet and VGGNet contained the bulk of their weights in the fully-connected layers. As a result, various rank reduction approaches exclusively targeted the matrices in these layers. The deeper/wider these networks have become, the more the balance of weights has shifted towards the convolutional layers, giving rise to more generalised tensor decomposition schemes. Knowledge distillation ('teacher/student') methods aim to transfer the knowledge present in a cumbersome teacher model to a lightweight student model, without losing the teacher's ability to generalise well. An early approach by Bucilȃ et al. used a heavyweight ensemble to label a large set of unlabelled data, and then used this to train a compact model. Much later, an alternative method was proposed that trains a shallow network to directly mimic the logits of a deep model.
Subsequent methods have independently shown that training the student using temperature-scaled softmax scores or Gaussian-blurred logits of the teacher can help with regularisation. Other methods in this line of work have proposed to train deep, thin neural networks using auxiliary or intermediate cues such as hidden layer outputs or post-hoc attention maps.

Figure 6 (c) and (d): the bottleneck blocks. In this case, since the three convolutions have different sizes, we cannot share a single set of parameters across the whole network; instead, we consider the block as a single entity and reuse it across the network.

Custom architecture methods, rather than trying to compress or distil knowledge from existing networks, propose entirely new network architectures that are smaller than existing models but still capable of providing excellent performance. Good examples include SqueezeNet and MobileNets. SqueezeNet tries to use 1 × 1 rather than 3 × 3 filters to reduce the parameter count, and tries to limit the number of input channels to those 3 × 3 filters it does use. MobileNets follow a similar tack and factorise traditional convolutional mappings into a depth-wise separable convolution (to process the spatial context) followed by a 1 × 1 convolution (to process the channels jointly). Two adjustable hyperparameters, α and ρ, pertaining to the intermediate feature resolution and the input spatial resolution, allow further resizing of the network. Hybrid methods implement some combination of the compression schemes discussed above. Whilst our approach belongs to the category of filter-sharing schemes elaborated above, we also demonstrate its complementarity and compatibility with magnitude-based weight pruning.

A.2 DATASETS

CIFAR-10 consists of 60,000 32 × 32 colour images, each labelled as belonging to one of 10 mutually exclusive classes. Each class contains 6,000 images, of which 5,000 are earmarked for training, and 1,000 for testing (i.e. there are 50,000 train images and 10,000 test images overall). CIFAR-100 consists of the same 60,000 32 × 32 images that are in CIFAR-10, but this time they are evenly split into 100 classes, each containing 500 training images and 100 testing images. Tiny ImageNet is essentially a smaller, lower-resolution variant of the ImageNet dataset. It consists of 120,000 64 × 64 images, evenly split into 200 classes. Each class contains 500 training images, 50 validation images and 50 test images. ImageNet was introduced as a large-scale image classification benchmark consisting of high-resolution photographs in 1,000 visual categories from an even larger ontology of natural concepts (WordNet). It consists of approximately 1M training images, divided into 1,000 disjoint object categories. Another set of 50,000 images, evenly split into 1,000 classes, forms the validation set. The accuracies we report for ImageNet were obtained on this validation set.

A.3 NETWORK ARCHITECTURES

Table 3 details the structure of our VGGNet-like architectures, whilst Tables 4 and 5 show our ResNet-like architectures (respectively used for CIFAR-10/100 and Tiny ImageNet/ImageNet). The notation is common to the tables and is as follows:

conv1-x: A 1 × 1 convolutional layer with x output feature channels. The core of our SL variants. We use this layer to allow shared convolutions at different scale levels to observe different blends of the feature channels output by the previous scale.
Its number of input feature channels is equal to x, except in E-ResNet-50, where we use it to increase the number of channels between scale levels, and in the first scale of SL-ResNet-50, where we use it to increase the number of channels from x to 4x, to account for the expansion factor of the bottleneck blocks. conv3-x A 3 × 3 convolutional layer with x output feature channels. The number of input feature channels depends on the specific network variant: for the baselines it is equivalent to the number of output feature channels of the previous layer (or 3 for the very first layer), whilst for the E/S/SL-variants, it is equivalent to the number of output feature channels x. The stride is 1 unless otherwise specified. conv7-x A 7 × 7 convolutional layer with x output feature channels. As this layer is only used as the first layer in the ResNet variants of our architectures, it always has 3 input channels. Its stride is 1 when training a ResNet-like architecture for Tiny ImageNet, and 2 when training for ImageNet. basicblock-x A simple skip connection-based block, used in the ResNet-like architectures. As in the original ResNet design, it consists of two 3 × 3 convolutional layers and a skip connection. In our shared architectures, the two convolutional layers share the same parameters. See Figures 6a and 6b for details of the internal architectures of the non-shared and shared block variants. bottleneck-x A skip connection-based block with a bottleneck architecture, consisting of a 1 × 1 convolution (used to reduce the number of feature channels), followed by a 3 × 3 convolutional layer, and finally by another 1 × 1 convolution (restoring the original number of feature channels). For this reason it has 4x input and output channels. Figures 6c and 6d detail the internal architectures of the standard and shared variants of the bottleneck blocks (respectively). Crucially, as mentioned in the main paper (and unlike the basicblock architectures described above), the bottleneck block is shared as a single entity, owing to the presence of differently-shaped convolutions. avgpool-x An average pooling layer operating on patches of size x × x. maxpool-x A max-pooling layer operating on patches of size x × x. FC-x A fully-connected layer with x output channels. The number of its input channels is equal to the number of outputs of the previous layer (flattened in the case the previous layer was a convolutional layer). Each spatial convolution (conv3 and conv7) is always followed by a BatchNorm layer and a ReLU. We denote in bold the convolutional layers or blocks that are shared in our S and SL architectures. The parameters of the normalisation layers are never shared, even when the corresponding convolutional weights are shared as part of an S or SL architecture. Fully-connected layers (except the very last one in each architecture) are followed by a Dropout layer. Table 3: The architectures for VGGNet and the VGGNet-like networks we trained as part of our experiments on the CIFAR-10/100 and Tiny ImageNet datasets. The notation is described in the main text. Note that the last max-pooling layer (marked with a *) is not used when training a network for Tiny ImageNet: this is in order to provide a longer feature vector to the first fully-connected layer (specifically of size n * 3 * 3).
The fully-connected layer sizes differ across datasets to account for the different numbers of classes. Table 4: The architectures for the ResNet-like networks we trained as part of our experiments on the CIFAR-10/100 datasets. The notation is described in the main text. The baselines Eb-ResNet(p) use p = 16 for training on CIFAR-10 and p = 32 for training on CIFAR-100. The final fully-connected layer has its output size set to the number of classes in the dataset (i.e. num_c = 10 for CIFAR-10 and num_c = 100 for CIFAR-100). We experiment with different values of b ∈ {3, 5, 7}.

Input Resolution | E-ResNet-34        | E-ResNet-50        | SL-ResNet-34(n)   | SL-ResNet-50(n)
224 × 224        | conv7-64, stride 2 | conv7-64, stride 2 | conv7-n, stride 2 | conv7-n, stride 2
                 |                    | conv1-256          |                   | conv1-(n * 4)
                 | maxpool-2          | maxpool-2          | maxpool-2         | maxpool-2
...
7 × 7            | 4× basicblock-512  | 4× bottleneck-512  | 4× basicblock-n   | 4× bottleneck-n
                 | avgpool-3          | avgpool-3          | avgpool-3         | avgpool-3
                 | FC-num_c           | FC-num_c           | FC-num_c          | FC-num_c

Table 5: The architectures for the ResNet-like networks we trained as part of our experiments on the Tiny ImageNet and ImageNet datasets. The notation is described in the main text. The final fully-connected layer has its output size set to the number of classes in the dataset (i.e. num_c = 200 for Tiny ImageNet and num_c = 1000 for ImageNet). One important difference in the architectures for the two datasets is that, in the case of Tiny ImageNet, to account for the smaller resolution of the images, in the first scale level we use a 3 × 3 convolution without striding and suppress the first maxpool-2 layer. This has the effect of allowing us to feed the convolutional architecture with an input image of size 56 × 56. Table 6: Test accuracies and parameter counts |W_conv| for our 'E', 'S' and 'SL' variants of VGGNet, for different widths n of the convolutional layer. The compression factors C for the 'S' and 'SL' variants are computed relative to the corresponding E-VGGNet, which contains an equal number of channels n in its convolutional layers. Note that all the models are trained from a state of random initialisation. To train our networks on the CIFAR datasets, we perform some basic data augmentation steps: we randomly decide whether or not to flip the input images horizontally, we pad the 32 × 32 images with 4 pixels and then select a random crop of size 32 × 32, and finally we normalise the RGB values to have zero mean and unit norm. During the evaluation phase, we just perform the normalisation step. We train our networks for 200 epochs, using the SGD optimiser with momentum 0.9 and weight decay 5e−4. We use an initial learning rate of 0.05 and decrease it by a factor of 2 when the error plateaus. To train our networks on the Tiny ImageNet and ImageNet datasets, we perform a similar data augmentation: we first extract a crop of a random size that is then resized to the input resolution of our network (56 × 56 for Tiny ImageNet and 224 × 224 for ImageNet), we randomly decide whether or not to perform a horizontal flip of the crop, and finally we normalise the crop. During the evaluation phase, we resize the image to a standard resolution (64 × 64 for Tiny ImageNet and 256 × 256 for ImageNet), extract ten crops (of size 56 × 56 for Tiny ImageNet and 224 × 224 for ImageNet) from the corners, the centre and their horizontally-mirrored variants, and finally normalise the crops. We train our networks for 100 epochs, using the SGD optimiser with momentum 0.9 and weight decay 5e−4.
We use an initial learning rate of 0.01 for the VGGNet-like architectures on Tiny ImageNet, 0.05 for the ResNet-like architectures on Tiny ImageNet, and 0.1 for the experiments on ImageNet. Regardless of the initial value, we decrease it by a factor of 10 when the error plateaus. A.5.1 EVALUATION ON CLASSIFICATION BENCHMARKS Table 6 presents detailed accuracy and memory usage numbers for E-VGGNet, S-VGGNet and SL-VGGNet architectures trained on CIFAR-100 and Tiny ImageNet (results for CIFAR-10 can be found in the main paper, in Table 1a in §5). Similar results for the 'E' and 'SL' variants of ResNet trained on CIFAR-10 and CIFAR-100 can be found in Table 7. Finally, an accuracy and compression rate comparison of our top-performing SL3-ResNet variant with existing baselines and competing compression methods for CIFAR-10 is shown in Table 8. Table 8: CIFAR-10: Comparing the accuracies and compression factors C of the top-performing 'SL' variant of the ResNet architecture, for b = 3 blocks per scale level, with the original ResNet, other baselines ResNet-18 and ResNet-34, and state-of-the-art compression methods. The compression factor of the proposed model with respect to the best performing ResNet-34 architecture is in triple digits. However, a more appropriate comparison is arguably with ResNet*, from which the model has been directly compressed by virtue of sharing the convolutional layers. The compression factor is still a significant 4.0, with a final weight count of only 181K. Note that the model marked with a * has been retrained for this study. In Fig. 7, we show the linear layers for our different variants of VGGNet, trained on three different datasets: CIFAR-10, CIFAR-100 and Tiny ImageNet. As highlighted by the continuous blue vertical lines, it is notable that in each layer, some of the input channels barely contribute towards any of the output channels. Given this, we posit that a significant proportion of the weights in the linear layers (those that apply to the least important input channels) can be pruned without affecting the accuracy in any significant manner. Preliminary results verifying this conjecture are discussed in §5.1. Interestingly, the changing locations of these blue lines reflect the changing importance of different input channels at different scale levels. Similar results for four different 'SL' variants of ResNet, trained on three different datasets (CIFAR-10, CIFAR-100 and Tiny ImageNet), are presented in Fig. 8. As with our visualisations for 'SL-VGGNet', the continuous blue vertical lines in Figs. 8b, 8c and 8d highlight that some input channels make only a minimal contribution to any of the output channels in each layer. Once again, we believe that the weights that are applied to these less-important input channels can be pruned without affecting the accuracy in any significant manner. Some indicative results that support this hypothesis can be found in §5.1. By contrast, the linear layers in Fig. 8a exhibit somewhat less regularity. From Table 7a, SL7-ResNet yields both the highest accuracy (93.2%) and the highest compression rate (3.8) for that accuracy amongst all the variants. Thus, one possible explanation for this regular distribution of linear layer weights is that the model is operating at full capacity and is using all the channels in a balanced way to achieve an optimal performance. Figure 7: A visual depiction of the linear layers used to blend the input channels in the 'SL' variants of VGGNet trained on CIFAR-10, CIFAR-100 and Tiny ImageNet.
The linear layers are presented in the order (left to right) in which they appear in the networks. For each layer, the input channels are ordered along the x-axis, and the output channels along the y-axis. For each output channel (row), we highlight the lowest 32 weights (in terms of absolute value) in blue, and the highest 32 in red. Figure 8: A visual depiction of the linear layers used to blend the input channels in four 'SL' variants of ResNet, in the order (left to right) in which they appear in the networks. For each layer, the input channels are ordered along the x-axis, and the output channels along the y-axis. For each output channel (row), we highlight the lowest 32 weights (in terms of absolute value) in blue, and the highest 32 in red.
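To make the shared-layer scheme concrete, the following is a minimal, hypothetical PyTorch sketch of an SL-style pipeline: a single n-channel 3 × 3 convolution is reused at every scale level, preceded by an unshared 1 × 1 blending layer and followed by an unshared BatchNorm, loosely following the architecture notes above (the layer counts, width n, and stem are illustrative assumptions, not the exact trained models):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SLConvNet(nn.Module):
    # Sketch of a shared-layer pipeline: one 3x3 conv reused across scales.
    def __init__(self, n=128, num_scales=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, n, 3, padding=1)    # maps RGB into n channels
        self.shared = nn.Conv2d(n, n, 3, padding=1)  # the single shared conv
        # Unshared per-scale pieces: 1x1 blending layers and BatchNorms.
        self.blend = nn.ModuleList([nn.Conv2d(n, n, 1) for _ in range(num_scales)])
        self.norms = nn.ModuleList([nn.BatchNorm2d(n) for _ in range(num_scales)])
        self.fc = nn.Linear(n, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        for blend, norm in zip(self.blend, self.norms):
            x = blend(x)                             # re-mix channels per scale
            x = F.relu(norm(self.shared(x)))         # reuse the same 3x3 weights
            x = F.max_pool2d(x, 2)                   # move to the next scale level
        x = x.mean(dim=(2, 3))                       # global average pooling
        return self.fc(x)

In such a pipeline the convolutional parameter count is dominated by the single shared 3 × 3 layer plus the small 1 × 1 blending layers, rather than growing with depth.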
We compress deep CNNs by reusing a single convolutional layer in an iterative manner, thereby reducing their parameter counts by a factor proportional to their depth, whilst leaving their accuracies largely unaffected.
1,425
scitldr
We extend the recent results of BID0 by a spectral analysis of representations corresponding to kernel and neural embeddings. They showed that in a simple single layer network, the alignment of the labels to the eigenvectors of the corresponding Gram matrix determines both the convergence of the optimization during training as well as the generalization properties. We generalize their results to kernel and neural representations and show that these extensions improve both optimization and generalization of the basic setup studied in BID0. The well-known work of BID8 highlighted intriguing experimental phenomena about deep net training (specifically, optimization and generalization) and called for a rethinking of generalization in statistical learning theory. In particular, two fundamental questions that need understanding are: Optimization. Why do true labels give a faster convergence rate than random labels for gradient descent? Generalization. What property of properly labeled data controls generalization? BID0 have recently tried to answer these questions in a simple model by conducting a spectral analysis of the associated Gram matrix. They show that both training and generalization are better if the label vector aligns with the top eigenvectors. However, their analysis applies only to a simple two layer network. How could their insights be extended to deeper networks? A widely held intuitive view is that deep layers generate expressive representations of the raw input data. Adopting this view, one may consider a model where a representation generated by successive neural network layers is viewed as a kernel embedding which is then fed into the two-layer model of BID0. The connection between neural networks and kernel machines has long been studied; BID2 introduced kernels that mimic deep networks and BID6 showed kernels equivalent to certain feed-forward neural networks. Recently, BID1 also made the case that progress on understanding deep learning is unlikely to move forward until similar phenomena in classical kernel machines are recognized and understood. Very recently, BID4 showed that the evolution of a neural network during training can be related to a new kernel, the Neural Tangent Kernel (NTK), which is central to describing the generalization properties of the network. Here we pursue this approach by studying the effect of incorporating embeddings in the simple two layer model and we perform a spectral analysis of these embeddings along the lines of BID0. We can obtain embeddings in several ways: i. We can use an unbiased kernel such as the Gaussian kernel. This choice is consistent with the maximum entropy principle and makes no prior assumption about the data. Or we can use a kernel which mimics or approximates deep networks. ii. We could use data-driven embeddings explicitly produced by the hidden layers in neural networks: either use a subset of the same training data to compute such an embedding, or transfer the inferred embedding from a different (but similar) domain. While a general transformation g(x) of the input data may have arbitrary effects, one would expect kernel and neural representations to improve performance. The interplay of kernels and data labellings has been addressed before, for example in the work on kernel-target alignment BID3. We do indeed observe a significant beneficial effect: Optimization. Using kernel methods such as random Fourier features (RFF) to approximate the Gaussian kernel embedding BID5 and neural embeddings, we obtain substantially better convergence in training.
Generalization. We also achieve significantly lower test error and we confirm that the data-dependent spectral measure introduced in BID0 significantly improves with kernel and neural embeddings. Thus this work shows empirically that kernel and neural embeddings improve the alignment of target labels to the eigenvectors of the Gram matrix and thus help training and generalization. This suggests a way to extend the insights of BID0 to deeper networks, and possible theoretical results in this direction. Network model. In BID0, the authors consider a simple two layer network model: f_{W,a}(x) = (1/√m) Σ_{r=1}^{m} a_r σ(w_r^⊤ x), where σ(z) = max(z, 0) is the ReLU activation. The parameters can be written jointly as a = (a_1, ..., a_m)^⊤ and W = (w_1, ..., w_m). This network is trained on a dataset of datapoints {x_i} and their targets {y_i}. They provide a fine-grained analysis of training and generalization error by a spectral analysis of the Gram matrix H^∞, defined entrywise as H^∞_{ij} = E_{w∼N(0,I)}[x_i^⊤ x_j · 1{w^⊤ x_i ≥ 0, w^⊤ x_j ≥ 0}]. BID0 show that both training and generalization are better if the label vector y aligns with the eigenvectors corresponding to the top eigenvalues of H^∞. The two-layer ReLU network in this work follows the general structure as in BID0, with the difference being the addition of an embedding φ at the input layer corresponding to a kernel K. The corresponding model is: f_{W,a}(x) = (1/√m) Σ_{r=1}^{m} a_r σ(w_r^⊤ φ(x)). We define the corresponding Gram matrix H(K)^∞ analogously, with φ(x_i) in place of x_i, and let its eigenvalues be ordered as λ_1 ≥ λ_2 ≥ · · · ≥ λ_n. A kernel K such that the corresponding eigenvectors align well with the labels would be expected to perform well both for training optimization as well as generalization. This is related to kernel target alignment BID3. Optimization. For the simple two layer network, BID0 show that the convergence of gradient descent is controlled by √(Σ_{i=1}^{n} (1 − ηλ_i)^{2t} (v_i^⊤ y)²), where (λ_i, v_i) are the eigenpairs of H^∞, η is the learning rate and t the iteration count. For our kernelized network, the corresponding convergence is controlled by the same expression with (λ_i, v_i) now the eigenpairs of H(K)^∞. Generalization. For the simple two layer network, BID0 show that the generalization performance is controlled by √(2 y^⊤ (H^∞)^{−1} y / n). For our kernelized two layer network, the corresponding data and representation dependent measure is: √(2 y^⊤ (H(K)^∞)^{−1} y / n). We perform our experiments on two commonly-used datasets for validating deep neural models, i.e., MNIST and CIFAR-10. These datasets are used for the experiments in BID0. As in their work we only look at the first two classes and set the label y_i = +1 if image i belongs to the first class and y_i = −1 if it belongs to the second class. The images are normalized such that ||x_i||_2 = 1. This is also done for kernel embeddings such that ||φ(x_i)||_2 = 1. The weights are initialized as follows: w_r ∼ N(0, κ²I) and a_r ∼ unif({−1, +1}). We then use the squared loss L(W, a) = (1/2) Σ_{i=1}^{n} (y_i − f_{W,a}(x_i))² to train the model to predict the image labels. For optimization, we use (full batch) gradient descent with the learning rate η. In our experiments we set κ = 10^{−2}, η = 2 · 10^{−4} similar to BID0. We first use the Gaussian kernel K(x_i, x_j) := exp(−γ||x_i − x_j||²). The corresponding embedding is infinite-dimensional, hence we consider the fast approximations to the kernel given by random Fourier features (RFF) BID5. The idea of random Fourier features is to construct an explicit feature map which is of a dimension much lower than the number of observations, but the resulting inner product approximates the desired kernel function. We use γ = 1 in all our experiments. Optimization. We first investigate the use of the Gaussian kernel for a more efficient optimization of the loss function on the training data. FIG0 shows the training loss at different steps on the MNIST and CIFAR-10 datasets, respectively.
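The RFF construction referred to above admits a compact sketch (this is the standard construction; the dimension D and seed handling are illustrative assumptions):

import numpy as np

def rff_embedding(X, D=10000, gamma=1.0, seed=0):
    # Map rows of X to D random Fourier features whose inner products
    # approximate the Gaussian kernel exp(-gamma * ||x - x'||^2).
    rng = np.random.default_rng(seed)
    K = X.shape[1]
    # Frequencies are drawn from the Fourier transform of the kernel;
    # for exp(-gamma ||x - x'||^2) this is N(0, 2 * gamma * I).
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(K, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)
    # Normalize rows so that ||phi(x_i)||_2 = 1, matching the text.
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)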
We consistently observe that the different Gaussian kernels (specified by various dimensions of the kernel) yield faster convergence of the optimization procedure on both datasets. MNIST is a simple dataset which gives an incredibly high score almost immediately, as shown by the training loss (FIG0) and by the accuracy on the test data (the table in FIG3(c)); thus we will focus our analysis on the CIFAR-10 dataset. Similar to the setup in BID0, in FIG0, for different methods, we plot the eigenvalues of H(K)^∞ and the projections of the true class labels on the eigenvectors (i.e., the projections {(v_i^⊤ y)²}) for the top eigenvalues. Generalization. We next investigate the generalization performance of the Gaussian kernel method by analyzing the values of the two generalization measures defined above. TAB0 shows these quantities for different settings and kernels on the MNIST and CIFAR-10 datasets, respectively. We observe that in both datasets with several kernels we obtain a lower theoretical upper bound on the generalization error. It is clear that the bound improves as the dimension of the representations increases, but also that the generalization bound seems quite sensitive to values of γ. In addition to the theoretical upper bound, we measure the test error for the studied datasets. FIG3 shows the test error and the test accuracy, respectively, at different steps of the optimization by gradient descent for CIFAR-10. We observe that the kernel methods yield significant improvements of both the test error and the accuracy on the test dataset. We observe that the larger the kernel, the larger the improvement. Additionally, we can see a sharper reduction in test error compared to the no-kernel case. This sharp transition (after a small number of steps) is particularly interesting because, along such a transition, we observe a significant improvement in the accuracy on the test dataset. Thus early stopping, which is commonly used in deep learning, can be even more efficient when using kernel methods. Finally, similar to the no-kernel case in BID0, by comparing the plots in FIG0, 1(c) and 2(a) we find tight connections between i) (training) optimization, ii) projection on the top eigenvalues, and iii) generalization. We can therefore improve both training and generalization with kernels, since we can get better alignment of the eigenvectors belonging to the largest eigenvalues and the target labels. Choosing a proper kernel and its parameters can be challenging BID7, as also seen in TAB0. Thus, we investigate a data-dependent neural kernel and embedding. For this purpose, we add a second hidden layer to the neural network with m = 10000 hidden units and ReLU activation. We pre-train this embedding using two different approaches. The first layer is then kept fixed as an embedding, while the rest of the network is reinitialized and trained. The first approach is to split the training data in half. We use the first subset to pre-train this three-layer network and the second subset for our optimization experiments. In this approach we double η to keep the step length the same. The other approach is to use data from a different domain for pre-training. For instance, we use the last two classes of the CIFAR-10 dataset for pre-training the embedding. We compare our results with not using any kernel and with using an RFF kernel with an embedding of size 10000. Optimization. FIG4 shows the training loss for the CIFAR-10 dataset. We observe that the neural embeddings achieve faster convergence than the previous methods.
We report the training loss for the neural embedding (same label) on the second (unused) subset of the data, whereas in the other cases we report the results on the full training data. If we use only the second subset for the other methods, we observe results very consistent with FIG4. FIG4(c) demonstrates the top eigenvalues as well as their eigenvector projections on the target labels. This shows that both variants of neural embeddings improve alignment of the labels to eigenvectors corresponding to larger eigenvalues (compared to the best RFF kernel). While the effect is unsurprisingly larger when pre-training on the same labels, it is still significantly better when pre-trained on other labels. Generalization. In FIG4(b) we report the test error on CIFAR-10. This shows that the neural embeddings perform at least comparably with the best studied RFF kernel. If the pre-training is done on the same labels we obtain a clear improvement, even if the actual training is only done on a dataset of half the size. We extended the recent results of BID0 by a spectral analysis of the representations corresponding to kernel and neural embeddings and showed that such representations benefit both optimization and generalization. By combining recent results connecting kernel embeddings to neural networks, such as BID6 and BID4, one may be able to extend the fine-grained theoretical results of BID0 for two-layer networks to deeper networks.
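The spectral quantities used throughout, namely the eigenvalues of the Gram matrix and the projections of the labels onto its eigenvectors, can be computed directly. A minimal sketch, assuming a precomputed Gram matrix H (e.g. H(K)^∞ estimated by sampling w) and a label vector y:

import numpy as np

def spectral_alignment(H, y):
    # Eigendecompose a symmetric PSD Gram matrix and return its eigenvalues
    # together with the label projections (v_i^T y)^2, sorted by eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(H)         # ascending order
    order = np.argsort(eigvals)[::-1]            # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    projections = (eigvecs.T @ y) ** 2
    return eigvals, projections

def generalization_measure(H, y, eps=1e-8):
    # The data-dependent measure sqrt(2 y^T H^{-1} y / n) from the text.
    n = len(y)
    v = np.linalg.solve(H + eps * np.eye(n), y)  # small jitter for stability
    return np.sqrt(2.0 * (y @ v) / n)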
Spectral analysis for understanding how different representations can improve optimization and generalization.
1,426
scitldr
Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice. Deep learning has achieved huge success in many applications. In particular, increasingly often, it is used as a component in decision-making systems. In order to have confidence in decisions made by such systems, it is necessary to obtain good uncertainty estimates, which quantify how certain the network is about a given output. In particular, if the cost of failure is large, for example where the automated system has the capability to accidentally hurt humans, the availability and quality of uncertainty estimates can determine whether the system is safe to deploy at all. Moreover, when decisions are made sequentially, good uncertainty estimates are crucial for achieving good performance quickly. Because any non-Bayesian inference process is potentially sub-optimal, these uncertainty estimates should ideally be relatable to Bayesian inference with a useful prior. Deep ensembles, one of the most popular methods available for uncertainty estimation in deep networks today, struggle with this requirement. While deep ensembles can be related to Bayesian inference in settings where the individual models are trained on subsets of the data, this is not how they are used in practice. In order to improve data efficiency, all ensemble members are typically trained using the same data, resulting in a method which does not have a theoretical justification. Moreover, deep ensembles can give overconfident uncertainty estimates in practice. On the other hand, Monte-Carlo dropout can be viewed as a certain form of Bayesian inference. However, doing so requires either a limit to be taken or a generalization of variational inference to a quasi-KL divergence. In practice, MC dropout can give arbitrarily overconfident estimates. More broadly, a category of approaches, known as Bayesian Neural Networks, maintains a distribution over the weights of the neural network. These methods have a sound Bayesian justification, but training them is both difficult and carries an accuracy penalty, particularly for networks with convolutional architectures. Moreover, tuning BNNs is hard and achieving a good approximation to the posterior is difficult. We use another way of obtaining uncertainties for deep networks, based on fitting random priors. Random priors are easy to train and were found to work very well in practice. To obtain the uncertainty estimates, we first train a predictor network to fit a prior. Two examples of prior-predictor pairs are shown in the top two plots of Figure 1. On top, two predictors (green) were trained to fit two randomly-generated priors (red).
On the bottom, we obtain uncertainties from the difference between predictors and priors. Dots correspond to training points x_i. Faced with a novel input point, we obtain an uncertainty (Figure 1, bottom plot) by measuring the error of the predictor network against this pattern. Intuitively, these errors will be small close to the training points, but large far from them. The patterns themselves are drawn from randomly initialized (and therefore untrained) neural networks. While this way of estimating uncertainties was known before, it did not have a theoretical justification beyond Bayesian linear regression, which is too limiting for modern applications. Contributions We provide a sound theoretical framework for obtaining uncertainty estimates by fitting random priors, a method previously lacking a principled justification. Specifically, we justify estimates of the uncertainty of the output of neural networks with any architecture. In particular, we show in Lemma 1 and Proposition 1 that these uncertainty estimates are conservative, meaning they are never more certain than a Bayesian algorithm would be. Moreover, in Proposition 2 we show concentration, i.e. that the uncertainties become zero with infinite data. Empirically, we evaluate the calibration and out-of-distribution performance of our uncertainty estimates on typical computer vision tasks, showing a practical benefit over deep ensembles and MC dropout. We are going to reason about uncertainty within the formal framework of stochastic processes. We now introduce the required notation. A stochastic process is a collection of random variables {f(x)}. We consider processes where x ∈ R^K and the random variable f(x) takes values in R^M. A stochastic process has exchangeable outputs if the distribution does not change when permuting the M entries in the output vector. Allowing a slight abuse of notation, we denote the finite-dimensional distribution of the process {f(x)} for the set X = {x_i}_{i=1,...,N} as f(x_1, ..., x_N) = f(X). In practice, the finite-dimensional distribution reflects the idea of restricting the process to points x_1, ..., x_N and marginalizing over all the other points. Inference can be performed on stochastic processes similarly to probability distributions. In particular, we can start with some prior process {f(x)}, observe a set of N training points X = {x_i}_{i=1,...,N} and labels y = {y_i}_{i=1,...,N}, and then consider the posterior process {f_{Xy}(x)}, whose finite-dimensional distributions are given by f_{Xy}(x'_1, ..., x'_{N'}) = f(x'_1, ..., x'_{N'} | x_1, ..., x_N, y_1, ..., y_N) for any set of testing points x'_1, ..., x'_{N'}. We use subscripts to denote conditioning on the dataset throughout the paper. We denote the variance of f_{Xy}(x) with σ²_{Xy}(x); when the labels are generated by a prior draw f, i.e. y = f(X), we write σ²_{Xf}(x). A stochastic process is called Gaussian if all its finite-dimensional distributions are Gaussian. Given a test point x, we denote the posterior GP mean with µ_{Xy}(x) and the posterior GP variance with σ²_X(x). We provide more on GPs in Appendix D. Intuition Uncertainties obtained from random priors have an appealing intuitive justification. Consider the networks in the top part of Figure 1. We start with a randomly initialized prior network, shown in red. Whenever we see a datapoint, we train the predictor network (green) to match this prior. Uncertainties can then be obtained by considering the squared error between the prior and the predictor at a given point. An example uncertainty estimate is shown as the shaded blue area in the bottom of Figure 1.
While it may at first seem that the squared error is a poor measure of uncertainty because it can become very small by random chance, we formally show in Section 4.1 that this is very improbable. In Section 4.2, we show that this error goes down to zero as we observe more data. Similarly to GP inference, uncertainty estimation in our framework does not depend on the regression label. The prediction mean (blue curve in the bottom part of Figure 1) is obtained by fitting a completely separate neural network. In section 6, we discuss how this framework avoids the overconfidence characteristic of deep ensembles. Prior The process of obtaining network uncertainties involves randomly initialized prior networks, which are never trained. While this may at first appear very different from the way deep learning is normally done, these random networks are a crucial component of our method. We show in Section 4.1 that the random process that corresponds to initializing these networks can be interpreted as a prior of a Bayesian inference procedure. A prior conveys the information about how the individual data points are related. The fact that we are using random networks has both practical and theoretical benefits. Practically, since the prior does not depend on the data, there is no way that it can overfit. The use of random priors also has strong empirical support: randomly initialized networks have been recently used as priors to obtain state-of-the-art performance on computer vision tasks. Theoretically, using random priors satisfies the likelihood principle. Moreover, random priors can be viewed as a safe choice since they make the minimum reasonable assumption that the network architecture is appropriate for the task. In fact, whenever deep learning is used, with or without uncertainty estimates, practitioners are already implicitly making that assumption. Algorithm 1 Training the predictors. Algorithm The process of training the predictor networks is shown in Algorithm 1. The function TRAIN-UNCERTAINTIES first generates random priors, i.e. neural networks with random weights. In our notation, it corresponds to sampling functions from the prior process {f(x)}. These priors, evaluated at points from the dataset X = {x_i}_{i=1,...,N}, are then used as labels for supervised learning, performed by the function FIT. After training, when we want to obtain an uncertainty estimate at a given test point x, we use the formula σ̂²(x) = σ̂²_μ(x) + β v̂_σ(x). (1) Here, the quantity σ̂²_μ is the sample mean of the squared error. We will show in Section 4 that it is an unbiased estimator of a variable that models the uncertainty. On the other hand, v̂_σ is the sample-based estimate of the standard deviation of the squared error across bootstraps, needed to quantify our uncertainty about what the uncertainty is. The hyper-parameter β controls the degree to which this uncertainty is taken into account. Formally, the quantities are defined as σ̂²_μ(x) = (1/B) Σ_{b=1}^{B} (1/M) ‖f_b(x) − h_b(x)‖² (2) and v̂_σ(x) = ((1/B) Σ_{b=1}^{B} ((1/M) ‖f_b(x) − h_b(x)‖² − σ̂²_μ(x))²)^{1/2}, (3) where f_b denotes the b-th sampled prior and h_b the corresponding trained predictor. In the above equations, B is the number of prior functions and each prior and predictor network has M outputs. Because the predictors are trained independently, uncertainty estimates obtained from each of the B predictor-prior pairs are independent. We defer the discussion of details of network architecture to Section 5. In Section 3, we introduced a process for obtaining uncertainties in deep learning. We now seek to provide a formal justification.
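Since the body of the Algorithm 1 listing did not survive extraction, the following is a minimal sketch of the procedure it describes, consistent with equations 1-3 (the helper names make_prior, make_predictor and fit are illustrative assumptions):

import numpy as np

def train_uncertainties(X, make_prior, make_predictor, fit, B=1):
    # TRAIN-UNCERTAINTIES: sample B random priors, fit one predictor to each.
    priors = [make_prior() for _ in range(B)]   # random nets, never trained
    predictors = []
    for f in priors:
        targets = f(X)                          # prior outputs act as labels
        predictors.append(fit(make_predictor(), X, targets))
    return priors, predictors

def uncertainty(x, priors, predictors, beta=1.0):
    # Per-bootstrap mean squared error over the M outputs, then the
    # estimate of equation 1: sigma2_mu + beta * v_sigma.
    errs = np.array([np.mean((f(x) - h(x)) ** 2)
                     for f, h in zip(priors, predictors)])
    sigma2_mu = errs.mean()                     # sample mean (equation 2)
    v_sigma = errs.std()                        # spread across bootstraps (equation 3)
    return sigma2_mu + beta * v_sigma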
We define the expected uncertainty as σ̃²_μ(x) = E_f[σ̂²_μ(x)], where the expectation is taken over draws of the prior. In other words, σ̃²_μ is the expected version of the sample-based uncertainty σ̂²_μ(x) introduced in equation 2. Since Bayesian inference is known to be optimal, the most appealing way of justifying the uncertainty estimates σ̂²_μ and σ̃²_μ is to relate them to a Bayesian posterior variance σ²_{Xf}(x). We do this in two stages. First, in Section 4.1, we prove that the obtained uncertainties are larger than ones arrived at by Bayesian inference. This means that our uncertainties are conservative, ensuring that our algorithm is never more certain than it should be. Next, in Section 4.2, we show that uncertainties concentrate, i.e., they become small as we get more and more data. These two properties are sufficient to justify the use of our uncertainties in many applications. From the point of view of safety, it is preferable to overestimate the ground truth uncertainty than to underestimate it. We now show that this property holds for uncertainties obtained from random priors. We first justify conservatism for the expected uncertainty σ̃²_μ. Amortized Conservatism We first consider a weak form of this conservatism, which we call amortized. It guarantees that σ̃²_μ is never smaller than the average posterior uncertainty across labels sampled from the prior. Formally, amortized conservatism holds if for any test point x we have σ̃²_μ(x) ≥ E_f[σ²_{Xf}(x)]. (5) Here σ²_{Xf} corresponds to the second moment of the posterior process {f_{Xf}(x)}. We will introduce a stronger version of conservatism, which does not have an expectation on the right-hand side, later in this section (eq. 8). For now, we concentrate on amortized conservatism. In Lemma 1 (proof in appendix), we show that it holds under very general conditions. Lemma 1. For any function h: R^{N×(K+1)} → R^M, for any test point x ∈ R^K and for any stochastic process {f(x)}_{x∈R^K} with all second moments finite and exchangeable outputs, E_f[(1/M)‖f(x) − h_{Xf}(x)‖²] ≥ E_f[σ²_{Xf}(x)]. Relation to a GP Lemma 1 holds for any prior process {f(x)}. However, the prior process used by Algorithm 1 is not completely arbitrary. The fact that prior samples are obtained by initializing neural networks with independently sampled weights gives us additional structure. In fact, it can be shown that randomly initialized neural networks become close to GPs as the width of the layers increases. While the original result held for a simple network with one hidden layer, it has been extended to a wide class of popular architectures, including CNNs and RNNs of arbitrary depth. Recently, it has been shown to hold for a broad class of functions trainable by gradient descent. While the precise statement of these results involves technicalities which fall beyond the scope of this paper, we recall the key insight. For a family of neural networks {f_W(x)}, where the weights are sampled independently and W is the width of the hidden layers, there exists a limiting kernel function k_∞ such that f_W converges in distribution to GP(0, k_∞) as W → ∞. (7) In other words, as the size of the hidden layers increases, the stochastic process obtained by initializing networks randomly converges in distribution to a GP. In the context of our uncertainty estimates, this makes it reasonable for W large enough to consider the prior to be a GP. We stress that the GP assumption has to hold only for the prior network, which is never trained. We do not make any assumptions about connections between the predictor training process and GPs.
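For reference, the GP posterior variance σ²_X(x) that the conservatism guarantees compare against can be computed from the standard formulas recalled in Appendix D. A minimal sketch (the RBF kernel and the noise level are illustrative choices, not the NNGP limit kernel itself):

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_posterior_variance(X_train, x_test, sigma_a2=0.1, gamma=1.0):
    # sigma_X^2(x) = k(x, x) - k^T (K + sigma_A^2 I)^{-1} k  (equation 14).
    K = rbf_kernel(X_train, X_train, gamma)
    k = rbf_kernel(X_train, x_test[None, :], gamma)[:, 0]
    A = K + sigma_a2 * np.eye(len(X_train))
    k_xx = rbf_kernel(x_test[None, :], x_test[None, :], gamma)[0, 0]
    return k_xx - k @ np.linalg.solve(A, k)

Comparing σ̂²_μ(x) against such a reference on toy data gives an empirical sanity check of conservatism.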
Strict Conservatism Denoting the posterior GP variance with σ²_X(x), we define uncertainty estimates to be strictly conservative when σ̃²_μ(x) ≥ σ²_X(x). (8) This statement is stronger than the amortized conservatism in equation 5. Intuitively, equation 8 can be interpreted as saying that our uncertainty estimates are never too small. This confirms the intuition expressed earlier that random priors do not overfit. Below, in Proposition 1, we outline how to guarantee strict conservatism formally. It is proved in Appendix F.1. Proposition 1 (Strict Conservatism in Expectation). Assume that f is a GP. Then for any function h, σ̃²_μ(x) ≥ σ²_X(x). Moreover, equality holds if and only if h_{Xf}(x) = µ_{Xf}(x). Conservatism with Finite Bootstraps Lemma 1 above shows conservatism for expected uncertainties, i.e. σ̃²_μ introduced in equation 5. However, in practice we have to estimate this expectation using a finite number of bootstraps, and use the sampled uncertainties σ̂²_μ defined in equation 2. We now state a conservatism guarantee that holds even in the case of just one bootstrap (B = 1). The proof is deferred to Appendix F.1. Corollary 1 (Strict Conservatism for Finite Bootstraps). Assume that f is a GP. Assume that the random variable σ̂²_μ(x) has finite variance upper bounded by v_UB. Then with probability 1 − δ, for any function h, σ̂²_μ(x) ≥ σ²_X(x) − √(v_UB/δ). However, applying Corollary 1 requires the knowledge of v_UB. We now provide an upper bound. Lemma 2. Assume that the GP {f(x)} is zero mean with exchangeable outputs, and that permuting the outputs of f produces the same permutation in the outputs of h_{Xf}. With probability 1 − δ, we have Var(σ̂²_μ(x)) ≤ v_UB, where v_UB is expressible in terms of observable quantities. The proof and the explicit formula for v_UB are deferred to Appendix F.1. In cases where conservatism is desired, but not absolutely essential, we can avoid the laborious calculation of Lemma 2 and replace v_UB with the sample-based estimate v̂_σ(x), defined in equation 3. In this case, the conservatism guarantee is only approximate. This is how we obtained equation 1, used by the algorithm in practice. While the conservatism property in Proposition 1 is appealing, it is not sufficient on its own for the uncertainty estimates to be useful. We also need concentration, i.e. a guarantee that the uncertainties σ̂² become small with more data. We can guarantee this formally by assuming that the class of neural networks being fitted is Lipschitz-continuous and bounded. Intuitively, by the assumption of Lipschitz continuity, the predictors h_{Xf} cannot behave very differently on points from the training and test sets, since both come from the same data distribution. We can then show concentration by using standard Rademacher tools to obtain a bound on the expected uncertainty in terms of the squared error on the training set. This process is formalized in Proposition 2. Proposition 2. If the training converges, i.e. the training loss goes to zero for arbitrarily large training sets, then, assuming the predictors h_{Xf} are bounded and Lipschitz continuous with constant L, under technical conditions the uncertainties concentrate, i.e. σ̂²(x) → 0 as N → ∞ and B → ∞ with probability 1. The proof and the technical conditions are given in Appendix F. Proposition 2 assumes that the training error is zero for arbitrarily large training sets, which might at first seem unrealistic. We argue that this assumption is in fact reasonable.
The architecture of our predictor networks (Figure 2, right diagram) is a superset of the prior architecture (Figure 2, left diagram), guaranteeing the existence of weight settings for the predictor that make the training loss zero. Recent results on deep learning optimization have shown that stochastic gradient descent can in general be expected to find representable functions. We now re-visit the algorithm we defined in Section 3, with the aim of using the theory above to obtain practical improvements in the quality of the uncertainty estimates. Architecture and Choosing the Number of Bootstraps Our conservatism guarantee in Proposition 1 holds for any architecture for the predictor h_{Xf}. In theory, the predictor could be completely arbitrary and does not even have to be a deep network. In particular, there is no formal requirement for the predictor architecture to be the same as the prior. On the other hand, to show concentration in Proposition 2, we had to ensure that the prior networks are representable by the predictor. In practice, we use the architecture shown in Figure 2, where the predictor mirrors the prior, but has additional layers, giving it more representational power. Moreover, the architecture requires choosing the number of bootstraps B. Our experiments in Section 7 show that even using B = 1, i.e. one bootstrap, produces uncertainty estimates of high quality in practice. Modeling Epistemic and Aleatoric Uncertainty Proposition 1 and Proposition 2 hold for any Gaussian Process prior. By choosing the process appropriately, we can model both epistemic and aleatoric uncertainty. Denote by {n(x)} a stochastic process obtained by randomly initializing neural networks, and denote by {ε(x)σ_A} the noise term, modeling the aleatoric (observation) noise, where samples are obtained from ε(x) ∼ N(0, 1) at each x independently (see Appendix D for more on aleatoric noise). We can now choose the prior process as a sum {f(x)} = {n(x) + ε(x)σ_A} of the epistemic component {n(x)} and the noise term. The amount of aleatoric uncertainty can be adjusted by choosing σ²_A. Prior Choice, Weight Copying and Conservatism One question that can be asked about our architecture (Figure 2) is whether it is possible for the predictor to exactly copy the prior weights, giving zero uncertainty everywhere. A useful edge case to consider here is when we are solving a one-dimensional regression problem, σ²_A = 0 and both the priors and predictors are linear functions. In this case, after training on two points, the predictors will agree with the priors everywhere and uncertainty estimates will be zero. However, this is still consistent with our conservatism guarantee. The reason for this is that once we assume such a linear prior, we are comparing to a GP with a linear kernel. But a GP with that kernel will also have zero uncertainty after seeing two samples. In practice, this means that we have to choose the architecture of the prior networks to be expressive enough, which is no different from choosing a reasonable prior for Bayesian inference. Empirically, the tested network architecture did not show weight copying. Randomized Prior Functions (RPFs) Our work was inspired by, and builds on, Randomised Prior Functions, but it is different in two important respects. First, the existing theoretical justification for RPFs only holds for Bayesian linear regression with non-zero noise added to the priors.
In contrast, our results are much more general and hold for any deep network with or without added aleatoric noise. Second, we are targeting a different setting. While RPFs were designed as a way of sampling functions from the posterior, we provide estimates of posterior uncertainty at a given test point. Our algorithm is based on prior work that applied RPFs to exploration in MDPs, obtaining state-of-the-art results, but without justifying the uncertainty estimates formally. Our paper provides this missing justification, while also introducing a way of quantifying the error in estimating the uncertainty itself. Moreover, since that work focused on the application of RPFs to Reinforcement Learning, it only performed out-of-distribution evaluation on the relatively easy MNIST dataset. In contrast, in Section 7 we evaluate the uncertainties on more complex vision tasks. The term prior networks has also been used to denote deep networks that output the parameters of a prior distribution, an approach fundamentally different from our work. The main alternative approach for obtaining uncertainties in deep learning is deep ensembles. Building on the bootstrap, deep ensembles maintain several models and quantify epistemic uncertainty by measuring how their outputs vary. Crucially, deep ensembles use representations trained on regression labels, and tend to learn similar representations for different inputs with similar labels, which can lead to over-fitting the uncertainty estimates. A useful edge case to consider is if each of the models in the ensemble is convex in the weights. In this case, models in a deep ensemble will all converge to the same weights and produce zero uncertainty. While deep learning models used in practice aren't normally convex, we show empirically in section 7 that deep ensembles can give overconfident uncertainty estimates in practical vision tasks, particularly on points that have the same label as points in the training set. Since our method avoids overconfidence, it can be understood as complementary to deep ensembles, to be used in situations where obtaining conservative estimates is more important than the representational benefit of using labels. In practice, deep ensembles also require using more bootstraps to achieve the same OOD performance. Moreover, they do not have theoretical support in the case when all the members of the ensemble are trained on the same data, which is how they are used in practice. Dropout In cases where it is not economical to train more than one network, uncertainties can be obtained with dropout. Monte-Carlo dropout can be viewed as a form of approximate Bayesian inference. However, to do so requires a rather unnatural approximating family from the perspective of approximate inference. Also, one has then either to take a limit or generalize variational inference to a quasi-KL divergence. In addition, dropout can be interpreted in terms of MAP inference. Another alternative view of MC dropout is as an ensemble method in which the ensemble members have shared parameters (which means they are trained together) and where the ensembling is applied at test time too. This latter view is arguably as natural as the Bayesian interpretation. For this reason we discuss MC dropout separately from BNNs. Since dropout implicitly approximates non-Gaussian weight distributions with Gaussians, it exhibits spurious patterns in the obtained uncertainties, which can lead to arbitrarily overconfident estimates.
In contrast, due to the conservatism property, random priors avoid such overconfidence. Bayesian Neural Networks (BNNs) Bayesian Neural Networks explicitly model the distribution over the weights of a neural network. While BNNs provide a link between deep learning and Bayesian inference, they are very slow to train. Even recent tuned implementations of BNNs are several times slower than supervised learning. This happens despite using a battery of technical optimizations, including distributed training and batch normalization. Moreover, modern convolutional BNNs still carry a significant accuracy penalty when deployed with realistic settings of prior variance. Encouraged by the huge empirical success of random priors in Reinforcement Learning, we wanted to provide an evaluation in a more typical supervised learning setting. We tested the uncertainties in two ways. First, we investigated calibration, i.e. whether we can expect a higher accuracy for more confident estimates. Next, we checked whether the uncertainties can be used for out-of-distribution detection. We compared to two competing approaches for uncertainty detection: deep ensembles and spatial concrete dropout. The same ResNet architecture served as a basis for all methods. Details of the implementation are provided in Appendix A. We evaluated the uncertainty estimates on out-of-distribution detection. To quantify the results, we evaluated the area under the ROC curve (AUROC) for the task of deciding whether a given image comes from the same distribution or not. All methods were trained on four classes from the CIFAR-10 dataset (training details are provided in Appendix A). We then tested the resulting networks on images from withheld classes and on the SVHN dataset. Considering the statistical errors (see Appendix B), random priors performed slightly better than deep ensembles with adversarial training for B = 1 and about the same for B = 10. For dropout, B refers to the number of dropout samples. Dropout performed worse, but was cheaper to train. In order to gain a more finely-grained insight into the quality of the uncertainties, we also show uncertainty histograms in Figure 3. The figure shows the distribution of uncertainty estimates for seen data (top row) vs. unseen data (bottom row) for bootstrap sizes B = {1, 5, 10}; for random priors (RP) the plotted uncertainty is σ̂², whereas for the other algorithms it is 1 − max(p_μ), where p_μ is the averaged output of the models in the ensemble. The main result is that uncertainties obtained from random priors are already well-separated with B = 1, while deep ensembles need more bootstraps to achieve the full separation between test and train examples. We provide additional experimental results, showing OOD accuracy and an evaluation on CIFAR-100, in Appendix B. Calibration Good uncertainty estimates have the property that accuracy increases as we become more certain, a property known as calibration. We measured it by evaluating average accuracy on the subset of images with uncertainty smaller than a given value. We trained on four classes from the CIFAR-10 dataset. We then tested the resulting networks on the whole dataset, which included both the seen and unseen classes. Results are shown in Figure 4. Ideally, in a calibrated method, these curves should be increasing, indicating that a method always becomes more accurate as it becomes more confident. In coarse terms, Figure 4 confirms that all methods except a degenerate deep ensemble with only one bootstrap are roughly monotonic.
However, uncertainty estimates from random priors are more stable, showing monotonicity on a finer scale as well as on a large scale. Interestingly, calibration improved only slightly when increasing the number of bootstraps B. Table 2: Out-of-distribution AUROC for the same models as above (see Tab. 1) on subsampled data. Numbers are accurate up to ±0.01. In the previous experiment, we kept the architectural and optimization choices fixed across algorithms. This ensured a level playing field, but meant that we were not able to obtain zero training error on the predictor networks used by random priors. However, we also wanted to evaluate random priors in the setting of near-zero training error. To do this, we used a smaller set of training images, while still keeping the network architecture the same. This allowed us to obtain near-complete convergence (details in Appendix A). Figure 6: Distribution of uncertainty estimates for various algorithms. Top row shows seen data, bottom row shows unseen data from CIFAR-10, where we trained on a sample of 75 images from the training set. For random priors (RP), uncertainties are σ̂². For other algorithms, they are 1 − max(p_μ), where p_μ is the averaged output of the models in the ensemble. Quantitative results are shown in Table 2, analogous to our results on the full dataset presented above. In this sub-sampled regime, the random prior method easily outperformed competing approaches, showing better calibration (Fig. 5). The histograms in Figure 6 also demonstrate good separation between seen and unseen data. In the out-of-distribution benchmarks reported in Table 2, the random prior method has comfortably outperformed the baselines. While this training regime is not practical for real-life tasks, it demonstrates the potential performance of random priors when trained to full convergence. We performed an ablation to test the robustness of our algorithm to the scaling of the weight initialization in the prior. Results are shown in Figure 7, where we plot the relationship between initialization scale (taken from the set {0.01, 0.1, 1.0, 2.0, 5.0, 10.0}) and AUROC performance on the CIFAR-10 task. We can see that OOD performance is relatively robust with respect to the weight initialization within one order of magnitude. We have shown that uncertainties obtained from random priors achieve competitive performance with fewer bootstraps in a regime where the network architecture is typical for standard supervised learning workloads. Random priors showed superior performance in a regime where the predictors can be trained to near-zero loss. We provided a theoretical justification for the use of random priors for obtaining uncertainty estimates in the context of deep learning. We have shown that the obtained uncertainties are conservative and that they concentrate for any neural network architecture. We performed an extensive empirical comparison, showing that random priors perform similarly to deep ensembles in a typical supervised training setting, while outperforming them in a regime where we are able to accomplish near-zero training loss for the predictors. For the 1D regression experiment on synthetic data (Fig. 1), we used feed-forward neural networks with 2 layers of 128 units each and a 1-dimensional output layer. We used an ensemble size of 5. The network was trained on 20 points sampled from the negative domain of a sigmoid function and tested on 20 points sampled from the positive domain. Model architecture For the CIFAR-10 experiments, we adapted the setup from the cifar10-fast model.
For the network predicting the mean, we used the exact same architecture as in this model. For the prior networks in our uncertainty estimators, the architecture was the same as the mean network, but using a final linear layer instead of the softmax layer. We used squared error on that last layer to get the uncertainties. For the predictor networks in the uncertainty estimators, we added two additional layers at the end to make sure the prior functions are learnable (see Fig. 2). We chose the output size to be M = 512 and used the Adam optimizer with a learning rate of 0.0001. We optimized the initialization scale of our networks as a hyperparameter on the grid {0.01, 0.1, 1.0, 2.0, 10.0} and chose 2.0. We chose a scaling factor of β = 1.0 for the uncertainty bonus of the random priors and fixed it for all experiments. Data For the CIFAR-10 experiment, we trained on the classes {bird, dog, frog, horse} and excluded {cat, deer, airplane, automobile, ship, truck}. For the small CIFAR-10 ablation experiment, we trained on 75 images sampled from the classes {ship, truck} and excluded the remaining classes. Training Error The training error was 0.57 ± 0.20 on the CIFAR experiment and 0.03 ± 0.02 on the sub-sampled ablation (the symbol ± denotes 90% confidence intervals). Out-of-distribution classification For computing the areas under the receiver operating characteristic curves (AUROC) in the OOD classification tables, we used the roc_auc_score function from the Python package sklearn, using the predicted uncertainties as predicted label scores and binary labels for whether or not the samples were from the training set. We provide confidence intervals for AUROC measurements in Table 3. Table 3: Out-of-distribution AUROC for random priors (RP), deep ensembles (DE), deep ensembles with adversarial training (DE+AT) and spatial concrete dropout (DR). The errors are computed from ten samples each in the B = 1 case. The ± symbol denotes one standard error. In addition to AUROC results, we also provide accuracy figures on the same OOD tasks. The thresholding for classification was obtained by cross-validation. They are in Tables 4 and 5. Table 5: Out-of-distribution accuracy for the same models as above (see Tab. 4) on subsampled data. These values augment the AUROC values reported above. Table 6: In-distribution supervised classification accuracies on the respective test sets of the different data sets for random priors (RP), deep ensembles (DE), deep ensembles with adversarial training (DE+AT) and spatial concrete dropout (DR). *Since random priors do not have an intrinsic supervised prediction model, we used the predictions from the DE+AT model in all our experiments instead, setting B = 1. As additional empirical support for our method, we ran experiments on another data set, namely CIFAR-100. Again, we include 5 classes in the training set and exclude the remaining classes. The results are reported in the following (Figs. 8, 9; Tabs. 7, 8). They qualitatively and quantitatively support the same conclusions as our previous experiments. For completeness, we recall the definition of Bayes Risk. We are often interested in minimizing the Mean Squared Error E_f[(f(x) − w)²], where x is a given test point and w is a variable we are
For completeness, we recall the definition of Bayes risk. We are often interested in minimizing the mean squared error E_f[(f(x) − w)²], where x is a given test point and w is a variable we are allowed to adjust. A known result of Bayesian decision theory is that the minimizer of the MSE is given by the expected value of f, i.e. arg min_w E_f[(f(x) − w)²] = E_f[f(x)] (equation 12). Equation 12 holds for any stochastic process f, including when f is a posterior process obtained by conditioning on some dataset. A consequence of equation 12 is that it is impossible to obtain an MSE lower than the one obtained by computing the posterior mean of f. A stochastic process is Gaussian if all its finite-dimensional distributions are Gaussian. The main advantage of GPs is that the posterior process can be expressed in a tractable way. GPs are often used for regression, where we are learning an unknown function φ: R^K → R from noisy observations. Since a Gaussian distribution is completely identified by its first two moments, a GP can be defined by a mean function and a covariance function. Formally, the notation GP(µ, k) refers to a GP with mean function µ and covariance function k. GPs can be used to model two kinds of uncertainty: epistemic uncertainty, which reflects lack of knowledge about unobserved values of φ, and aleatoric uncertainty, which reflects measurement noise. When performing regression, we start with a zero-mean prior GP(0, k) and then observe N training points X = {x_i}_{i=1,...,N} and labels y = {y_i}_{i=1,...,N}, where y_i = φ(x_i) + ε_i and the noise terms ε_i ∼ N(0, σ²_A) model the aleatoric noise. Table 8: Out-of-distribution classification accuracy on CIFAR-100 for random priors (RP), deep ensembles (DE), deep ensembles with adversarial training (DE+AT) and spatial concrete dropout (DR). The ± symbol denotes one standard error. These values augment the AUROC values reported in Table 7. Conditioning on the observations, we obtain the posterior process GP(µ_Xy, k_X). For GPs, the mean and covariance of the posterior GP on y evaluated at x can be expressed as µ_Xy(x) = k^⊤(K + σ²_A I)^{−1} y and σ²_X(x) = k(x, x) − k^⊤(K + σ²_A I)^{−1} k (equation 14). In particular, the posterior covariance does not depend on y. In the formula above, we use the kernel matrix K ∈ R^{N×N} with entries {K}_{ij} = k(x_i, x_j), where x_i and x_j are in the training set. We also use the notation k ∈ R^N for the vector of train-test correlations {k}_i = k(x_i, x), where x_i is in the training set, and k(x, x) is similarly defined. The shorthand σ²_X(x) introduced in equation 14 denotes the posterior variance at a single point.
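The posterior formulas in equation 14 can be computed directly; below is a minimal NumPy sketch with an RBF kernel. The kernel choice, lengthscale and example data are illustrative assumptions, not tied to the paper's experiments.

import numpy as np

def rbf(a, b, lengthscale=1.0):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * lengthscale^2))
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, x_star, sigma_a2=0.1):
    K = rbf(X, X)                                   # train-train kernel matrix
    k = rbf(X, x_star)                              # train-test correlations
    Kinv = np.linalg.inv(K + sigma_a2 * np.eye(len(X)))
    mu = k.T @ Kinv @ y                             # posterior mean mu_Xy(x*)
    var = rbf(x_star, x_star).diagonal() - np.sum(k * (Kinv @ k), axis=0)
    return mu, var                                  # var does not depend on y

# Example: noisy 1-D regression (the kernel already handles x in R^K)
X = np.linspace(-1.0, 1.0, 10)[:, None]
y = np.sin(3.0 * X[:, 0]) + 0.1 * np.random.randn(10)
mu, var = gp_posterior(X, y, np.linspace(-2.0, 2.0, 50)[:, None])

In production code one would use a Cholesky solve rather than an explicit matrix inverse; the explicit form above simply mirrors equation 14.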
Below, we give a list of symbols used for the variance of various random variables. Lemma 1. For any test point x ∈ R^K and for any stochastic process {f(x)}_{x∈R^K} with all second moments finite and exchangeable outputs: Proof. We prove the statement by re-writing the expression on the left. Here, the first equality holds by definition of conditional probability, the next equality holds by definition of the posterior mean, and the equality in (21) follows by the assumption that the process has exchangeable outputs. While this argument follows a similar pattern to a standard result about Bayes risk (see Appendix C), it is not identical, because the function h_Xf depends on f. Proposition 1 (Strict Conservatism in Expectation). Assume that f is a GP. Then for any function h, the conservatism inequality stated in the main text holds. Moreover, equality holds if and only if h_Xf(x) = µ_Xf(x). Proof. We instantiate Lemma 1 by setting f to be a GP. By equation 14, the posterior covariance of a GP does not depend on the target values, i.e. σ²_Xf(x) = σ²_X(x). The first part of the result can be shown by pulling σ²_X(x) out of the expectation. Moreover, since ∥·∥ is a norm and hence positive semi-definite, equality holds if and only if h_Xf(x) = µ_Xf(x). Lemma 3. Assume that the random variable σ̂²_µ(x) has finite variance upper bounded by v_UB. With probability 1 − δ, we have |σ̂²_µ(x) − E[σ̂²_µ(x)]| ≤ √(v_UB/δ). Proof. The proof is standard, but we state it in our notation for completeness. Applying Chebyshev's inequality to the random variable σ̂²_µ(x), we have that Prob(|σ̂²_µ(x) − E[σ̂²_µ(x)]| ≥ √(v_UB/δ)) ≤ δ. Corollary 1 (Strict Conservatism for Finite Bootstraps). Assume that f is a GP. Assume that the random variable σ̂²_µ(x) has finite variance upper bounded by v_UB. Then with probability 1 − δ, the conservatism bound of Proposition 1 holds for any function h, up to a √(v_UB/δ) correction. Proof. Combine Lemma 3 and Proposition 1. We now proceed to the proofs showing concentration. We begin by formally defining a class of predictor networks. Definition 1 (Class H_U of Lipschitz networks). Consider functions h: R^K → R^M. Let j, j' = 1,..., M, index the outputs of the function. We define H_U so that each h ∈ H_U has the following properties for each j, j'. (P1) h_j is Lipschitz continuous with constant L, i.e. ∥h_j(x) − h_j(x')∥₂ ≤ L∥x − x'∥₂ for all x, x' with ∥x∥_∞ ≤ 1 and ∥x'∥_∞ ≤ 1. (P2) Outputs are exchangeable, i.e. {h_j : h ∈ H_U} = {h_{j'} : h ∈ H_U}. (P3) The class is symmetric around zero, i.e. h_j ∈ {h_j : h ∈ H_U} implies −h_j ∈ {h_j : h ∈ H_U}. (P4) h_j is bounded, i.e. max_{∥x∥_∞≤1} |h_j(x)| ≤ U. While the conditions in Definition 1 look complicated, they are in fact easy to check for predictor networks that follow the architecture in Figure 2. In particular, Lipschitz continuity (P1) has to hold in practice because its absence would indicate extreme sensitivity to input perturbations. Output exchangeability (P2) holds since reordering the outputs does not change our architecture. Symmetry around zero (P3) holds by flipping the sign in the last network layer. Boundedness (P4) is easy to ensure by clipping outputs. In the following Lemma, we obtain a bound on the expected uncertainty. Lemma 4. Consider a target function f: R^K → R^M with components f_j, j = 1,..., M, and the domain restricted to ∥x∥_∞ ≤ 1. Introduce a constant U such that max_{∥x∥_∞≤1} |f_j(x)| ≤ U. Denote the data distribution with support on {x : ∥x∥_∞ ≤ 1} as D. Moreover, assume K ≥ 3. For h_Xf ∈ H_U, with probability 1 − δ we have the generalization bound stated in equation 34. Proof. The proof uses standard Rademacher tools. To avoid confusion across several conventions, we explicitly define the Rademacher complexity of a function class G as R̂_N(G) = E_u[sup_{g∈G} (1/N) Σ_{i=1}^N u_i g(x_i)] = E_u[sup_{g∈G} |(1/N) Σ_{i=1}^N u_i g(x_i)|]. Here, the random variables u_i are sampled i.i.d. using a discrete distribution with Prob(u_i = −1) = Prob(u_i = 1) = 1/2, and the second equality follows by using property (P3). We start by applying the generic Rademacher bound to the function class M = {(x_1,..., x_N, t_1,..., t_N) ↦ (1/U²)(1/M) Σ_i ∥t_i − h(x_i)∥², h ∈ H_U}, which contains the possible errors of the predictor. We now introduce the function class M' = {(x_1,..., x_N, t_1,..., t_N) ↦ (1/U²)(t_i^j − h_j(x_i))², h ∈ H_U}, which models the per-output squared error. Because of property (P2), M' does not depend on the output index j. By pulling out the sum outside the supremum in equation 35 and then applying Talagrand's Lemma, we obtain a bound in terms of the Rademacher complexity of the class H_1 of scaled individual outputs. Lemma 4 allowed us to relate the error on the training set to the expected error on the test set. It also shows that the two will be closer for small values of the Lipschitz constant L. We now use this Lemma to show our main concentration result (Proposition 2). Proposition 2. If the training converges, i.e. the training loss approaches the aleatoric noise level σ²_A for arbitrarily large training sets, then, assuming the predictors h_Xf are bounded and Lipschitz continuous with constant L, under technical conditions the uncertainties concentrate, i.e. σ̂²(x) → 0 as N → ∞ and B → ∞ with probability 1. Proof. We are assuming the technical conditions of Lemma 4.
Instantiating Lemma 4, setting the training loss to σ²_A in the RHS of equation 34 and letting N → ∞, we obtain the following with probability 1: This implies: From the continuity of f and h_Xf we have that σ̂²_µ is continuous in x. Together with the property that the expression under the expectation is non-negative, this gives the claim for every x. Since the right-hand side does not depend on B, the bound also holds in the limit B → ∞. From the definition of v̂_σ, the result follows.
We provide theoretical support for uncertainty estimates for deep learning obtained by fitting random priors.
1,427
scitldr
In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows. The proposed model is capable of mapping any partial data to a multi-modal latent variational distribution. Sampling from such a distribution leads to stochastic imputation. Preliminary evaluation on the MNIST dataset shows promising stochastic imputation conditioned on partial images as input. Neural network based algorithms have been shown effective and promising for various downstream tasks including classification, retrieval, prediction, and more. In order to correctly learn how to perform these tasks, they usually rely strictly on access to fully-observed data. However, acquiring this type of data in real life requires tremendous human effort, limiting the applicability of this family of models. Having a framework designed to perform inference on partially-observed data will not only alleviate the aforementioned constraint, but also open possibilities to perform data imputation, in which the missing data is inferred. Data imputation, also referred to as conditional generation, has been an active research area. The probabilistic nature of this task makes it difficult to adopt the widely studied off-the-shelf deterministic models. In other words, conditioned on the same partially-observed data as input, it should be possible to impute multiple plausible fully-observed data points. Variational autoencoders (VAEs), as a popular probabilistic modelling approach, have been applied to the data imputation task recently. A variational autoencoder defines a generative process that jointly models the distribution p_θ(x, z) of the observed variable x and latent variable z, governed by parameters θ. Instead of performing local inference, VAEs include an inference network parameterized by φ to output an approximate posterior distribution q_φ(z|x). Both the generative model and the inference model are optimized with a unified evidence lower bound (ELBO) on the marginal data likelihood: L(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)). Recent literature on utilizing VAE-based models mainly focuses on the effectiveness of combining various observed parts. Different from the related works described above, we propose to enrich the latent space of variational autoencoders to enable multi-modal posterior inference, and therefore probabilistic imputation. Specifically, we use a two-stage model, with the first stage focusing on learning a representation space based on fully-observed data, and the second stage focusing on aligning the representation space embedded from partially-observed data to the one from stage one. Using flow-based transformations for constructing a rich latent distribution, the proposed model is capable of inferring multi-modal variational latent distributions. Adopting a standard VAE approach for this problem would involve advocating for a model which receives partial data as input and, with the feedback of a standard reconstruction loss, learns to output the full data. Training such a model would pose many challenges. Firstly, gradient coming from the very end of the network would promote stronger imputation in the decoder, whereas the encoder could learn to simply encode the partial data. Secondly, there would be no mechanism to ensure the distribution of possible reconstructions would be correctly captured by the proposed posterior, which is generated by the encoder and fully conditioned on the partial data. To mitigate these problems, we propose a two-stage schema, represented in Figure 1.
The first stage (upper part of the figure) corresponds to the encoder of a VAE model. This encoder was trained with an associated decoder, which was later discarded, with the task of encoding and reconstructing the full data. If properly trained, this stage's proposed posterior correctly depicts a good distribution of the full data, because this is a requirement in order to also reconstruct it. Once trained, its weights are fixed, and then the model of the second stage is trained on the partial data. Note that the encoders and decoders of the first and second stages are different: they can have the same architecture but do not share weights. Because the latent space of the first model is rich enough to represent the full data's distribution (under the perspective of the first model), we propose to adopt a divergence loss between the first and the second model. This divergence acts as a distillation method, allowing the first model to inject rich information about the latent representation of the full data into the second-stage model. This injection will ensure weak alignment between both representation spaces, while also providing direct feedback to the encoder about the expected distribution of data in that space. One problem with using simple families of posterior approximation is the lack of support for modeling multi-modal distributions, in which a reconstruction can take multiple forms. To compensate for that, we adopt a normalizing flow model inside the latent space, forcing the divergence between stage one and stage two to happen between the normal distribution, from the proposed posterior of the former, and the more complex distribution created by the flow model of the latter. The nature of this divergence then becomes a problem: How can we model a divergence between a simple and a more complex distribution for which we don't know the parameters? Once defined, how can we ensure a multi-modal distribution can be modeled by the second stage? The KL-divergence between p and q is an expectation of log p(x_i) − log q(x_i) over samples x_i ∼ p. From this perspective, we can derive a Monte-Carlo approach for the KL-divergence, as long as p(x_i) and q(x_i) are tractable: KL(p || q) ≈ (1/N) Σ_{i=1}^N [log p(x_i) − log q(x_i)], with x_i ∼ p. In our model, we know p(x_i) is coming from a normal distribution, which is the proposed posterior of stage one; therefore we only have to address the computation of q(x_i), which is coming from the flow model. Thanks to the properties of normalizing flows (NFs), this can be modeled as a correction term applied to the simple distribution before the flow: log q_K(z_K) = log q_0(z_0) − Σ_{k=1}^K log |det(∂f_k/∂z_{k−1})|, where K is the number of transformations f_k, and q_0(z_0) is the simple distribution that is transformed into the complex distribution q_K(z_K) through the flow transformations. To complete the model we also added a second divergence loss between the simple distribution (prior to the NF) and a Gaussian centered at zero with a standard deviation of one. This extra divergence allows us to control the support of that distribution, regaining generative capability in all subsequent spaces, including the more complex one created by the NF module. The second stage model (encoder, partial posterior and decoder) is trained from scratch with the reconstruction and the divergence losses. During the training of the second stage model, the first stage model is fixed and it provides supervision for the structure of the latent space.
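A minimal sketch of this Monte-Carlo divergence, assuming a factorized Gaussian stage-one posterior p, a Gaussian base q_0, and a flow whose inverse(z) method returns the previous variable together with the forward log-determinant; this interface is an assumption for illustration, not the authors' implementation.

import torch

def mc_kl(p_dist, q0_dist, flows, n_samples=128):
    # Estimate KL(p || q_K), where q_K is q_0 pushed through the flows.
    # p_dist and q0_dist are factorized torch.distributions.Normal objects,
    # so log_prob is per-dimension and is summed over the last axis.
    x = p_dist.sample((n_samples,))          # x_i ~ p (stage-one posterior)
    log_p = p_dist.log_prob(x).sum(-1)

    # log q_K(x) = log q_0(z_0) - sum_k log|det(df_k/dz_{k-1})|,
    # obtained by inverting the flow from z_K = x back to z_0.
    z, total_log_det = x, 0.0
    for flow in reversed(flows):
        z, log_det = flow.inverse(z)         # assumed interface
        total_log_det = total_log_det + log_det
    log_q = q0_dist.log_prob(z).sum(-1) - total_log_det

    return (log_p - log_q).mean()            # Monte-Carlo KL estimate

During stage-two training this estimate serves as the divergence loss; since x is sampled from the fixed stage-one posterior, gradients flow only into the stage-two flow and the encoder producing q_0.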
Finally, the second problem becomes irrelevant when we take into consideration the stochastic optimization in neural networks. If the training data is rich enough to correctly represent the multi-modal nature of the full data (this is a base assumption for any machine learning model), the best way to minimize the divergence loss is, indeed, by creating a multi-modal distribution which has density directly proportional to the likelihood of the data. In Figure 2 we present preliminary results which showcase the benefit of having each of the proposed modules. For this experiment, a regular grid is defined inside the latent space, and values in this grid are sampled from the decoder to observe the latent structure organization. In Figure 2(a), we display results for a baseline approach, representing the best possible scenario, in which the encoder has access to the full data. We then show, in Figure 2(b), the same space when adopting the schema in Figure 1, but without the NF module; and the full architecture, with NF, in Figure 2(c). We observe that the NF module allowed the network to have a more flexible latent space, when compared to the case without NF. Following this experiment, we set out to test whether the multi-modality of reconstructions was being captured by the model. Due to limited space, we limit ourselves to a single example, for which we don't penalize the model for not perfectly reconstructing the partial data; the goal is to verify if the multi-modality is being captured, and if the model is able to recognize the digit. Figure 3(a) demonstrates the problem we're aiming to solve: given a partially observed piece of data, we want to capture all possible interpretations and reconstructions of the full data. The results without NF and with NF are given in Figure 3(b) and Figure 3(c), respectively. We observe that adding the flow module allows the model to more precisely represent the possible reconstructions of partial data. While Figure 3(b) still displays signs of averaging and confusion, most of the digits in Figure 3(c) are clearly identifiable, and the multi-modality of the possible reconstructions is correctly depicted. Figure 3(b), for example, was unable to provide the possibility of "0" being a valid reconstruction to the partial data provided in Figure 3(a). Although we demonstrate the power of our model in the simple case of MNIST, our model remains data-agnostic, and can be applied to any data modality (images, videos, text, sound, etc.). Possible applications range from arbitrarily-conditioned data imputation to data generation following complex modality interactions, which are partly modeled by the NF inside the latent space.
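The latent-grid visualization used in the Figure 2 experiment can be produced along the following lines; this is a sketch assuming a 2-D latent space and a trained decoder module, with all names illustrative.

import torch

@torch.no_grad()
def decode_latent_grid(decoder, span=3.0, steps=15):
    # Regular grid over a 2-D latent space, decoded into images.
    axis = torch.linspace(-span, span, steps)
    grid = torch.stack(torch.meshgrid(axis, axis, indexing='ij'), dim=-1)
    zs = grid.reshape(-1, 2)             # (steps*steps, 2) latent codes
    imgs = decoder(zs)                   # e.g. (steps*steps, 1, 28, 28) on MNIST
    return imgs.reshape(steps, steps, *imgs.shape[1:])

Tiling the returned tensor row by row reproduces the kind of panels shown in Figure 2.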
We propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows
1,428
scitldr
This paper studies \emph{model inversion attacks}, in which the access to a model is abused to infer information about the training data. Since its first introduction by~\citet{fredrikson2014privacy}, such attacks have raised serious concerns given that training data usually contain sensitive information. Thus far, successful model inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression. Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results. We present a novel attack method, termed the \emph{generative model inversion attack}, which can invert deep neural networks with high success rates. Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs) and use it to guide the inversion process. Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin---highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks. Our experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about $75\%$ for reconstructing face images from a state-of-the-art face recognition classifier. We also show that differential privacy, in its canonical form, is of little avail to protect against our attacks. Deep neural networks (DNNs) have been adopted in a wide range of applications, including computer vision, speech recognition, healthcare, among others. The fact that many compelling applications of DNNs involve processing sensitive and proprietary datasets raised great concerns about privacy. In particular, when machine learning (ML) algorithms are applied to private training data, the resulting models may unintentionally leak information about training data through their output (i.e., black-box attack) or their parameters (i.e., white-box attack). A concrete example of privacy attacks is model inversion (MI) attacks, which aim to reconstruct sensitive features of training data by taking advantage of their correlation with the model output. Algorithmically, MI attacks are implemented as an optimization problem seeking the sensitive feature value that achieves the maximum likelihood under the target model. The first MI attack was proposed in the context of genomic privacy, where the authors showed that adversarial access to a linear regression model for personalized medicine can be abused to infer private genomic attributes about individuals in the training dataset. Recent work extended MI attacks to other settings, e.g., recovering an image of a person from a face recognition model given just their name, and other target models, e.g., logistic regression and decision trees. Thus far, effective MI attacks have only been demonstrated on the aforementioned simple models. It remains an open question whether it is possible to launch the attacks against a DNN and reconstruct its private training data. The challenges of inverting DNNs arise from the intractability and ill-posedness of the underlying attack optimization problem.
For neural networks, even ones with one hidden layer, the corresponding attack optimization becomes a non-convex problem; solving it via gradient descent methods may easily get stuck in local minima, which leads to poor attack performance. Moreover, in attack scenarios where the target model is a DNN (e.g., attacking face recognition models), the sensitive features (face images) to be recovered often lie in a high-dimensional, continuous data space. Directly optimizing over the high-dimensional space without any constraints may generate unrealistic features lacking semantic information (see Figure 1). Figure 1: Reconstruction of the individual on the left by attacking three face recognition models (logistic regression, one-hidden-layer and two-hidden-layer neural networks) using the existing attack algorithm from prior work. In this paper, we focus on image data and propose a simple yet effective attack method, termed the generative model inversion (GMI) attack, which can invert DNNs and synthesize private training data with high fidelity. The key observation supporting our approach is that it is arguably easy to obtain information about the general data distribution, especially for the image case. For example, against a face recognition classifier, the adversary could randomly crawl facial images from the Internet without knowing the private training data. We find that these datasets, although they may not contain the target individuals, still provide rich knowledge about how a face image might be structured; extraction and proper formulation of such prior knowledge will help regularize the originally ill-posed inversion problem. We also move beyond specific attack algorithms and explore the fundamental reasons for a model's susceptibility to inversion attacks. We show that the vulnerability is unavoidable for highly predictive models, since these models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount MI attacks. Our contributions can be summarized as follows: (1) we propose to use generative models to learn an informative prior from public datasets so as to regularize the ill-posed inversion problem; (2) we propose an end-to-end GMI attack algorithm based on GANs, which can reveal private training data of DNNs with high fidelity; (3) we present a theoretical result that uncovers the fundamental connection between a model's predictive power and its susceptibility to general MI attacks, and empirically validate it; (4) we conduct extensive experiments to demonstrate the performance of the proposed attack. Experiment code is publicly available at https://tinyurl.com/yxbnjk4s. Related Work Privacy attacks against ML models consist of methods that aim to reveal some aspects of training data. Of particular interest are membership attacks and MI attacks. Membership attacks aim to determine whether a given individual's data is used in training the model. MI attacks, on the other hand, aim to reconstruct the features corresponding to specific target labels. In parallel to the emergence of various privacy attack methods, there is a line of work that formalizes the privacy notion and develops defenses with formal and provable privacy guarantees. One dominant definition of privacy is differential privacy (DP), which carefully randomizes an algorithm so that its output does not depend too much on any individual's data.
In the context of ML algorithms, DP guarantees protect against attempts to infer whether a data record is included in the training set from the trained model. By definition, DP limits the success rate of membership attacks. However, it does not explicitly protect attribute privacy, which is the target of MI attacks. The first MI attack was demonstrated in the genomic setting, where the authors presented an algorithm to recover genetic markers given the linear regression model that uses them as input features, the response of the model, as well as other non-sensitive features of the input. Follow-up work proposed an algorithm that allows MI attacks to be carried out without the knowledge of non-sensitive features by poisoning the training data properly. Despite the generality of the algorithmic frameworks proposed in the above two papers, the evaluation of the attacks is limited to linear models. Later work discussed the application of MI attacks to more complex models, including some shallow neural networks, in the context of face recognition. Although the attack can reconstruct face images with identification rates much higher than random guessing, the recovered faces are indeed blurry and hardly recognizable. Moreover, the quality of reconstruction tends to degrade for more complex architectures. Yang et al. (2019b) proposed to train a separate network that swaps the input and output of the target network to perform MI attacks. The inversion model can be trained with black-box access to the target model. However, their approach cannot directly benefit from the white-box setting. Moreover, several recent papers started to formalize MI attacks and study the factors that affect a model's vulnerability from a theoretical viewpoint. For instance, one work characterized model invertibility for Boolean functions using the concept of influence from Boolean analysis; another formalized the risk that the model poses specifically to individuals in the training data and showed that the risk increases with the degree of overfitting of the model. However, their theory assumed that the adversary has access to the joint distribution of private features and labels, which is overly strong for many attack scenarios. Our theory does not rely on this assumption and better supports the experimental findings. An overview of our GMI attack is illustrated in Figure 2. In this section, we will first discuss the threat model and then present our attack method in detail. In traditional MI attacks, an adversary, given a model trained to predict specific labels, uses it to make predictions of sensitive features used during training. Throughout the paper, we will refer to the model subject to attacks as the target network. We will use face recognition classifiers as a running example for the target network. Face recognition classifiers label an image containing a face with an identifier corresponding to the individual depicted in the image. We assume that the adversary employs an inference technique to discover the face image x for some specific identity y output by the classifier f. Following the canonical setup of MI attacks, we assume that the adversary has access to the target network f. In addition to f, the adversary may also have access to some auxiliary knowledge that facilitates his inference. Possible Auxiliary Knowledge Examples of auxiliary knowledge could be a blurred or corrupted image which only contains non-sensitive information, such as background pixels in a face image.
This auxiliary knowledge might be easy to obtain, as blurring and corruption are often applied to protect the anonymity of individuals in public datasets. The setup of MI attacks on images resembles the widely studied image inpainting tasks in computer vision, which also try to fill in missing pixels of an image. The difference, however, is in the goal of the two. MI attacks try to fill in the sensitive features associated with a specific identity in the training set. In contrast, image inpainting tasks only aim to synthesize visually realistic and semantically plausible pixels for the missing regions; whether the synthesized pixels are consistent with a specific identity is beyond their scope. Despite the difference, our approach to MI attacks leverages some training strategies from the venerable line of work on image inpainting and significantly improves the recognizability of the reconstructed images over the existing attack methods. To realistically reconstruct missing sensitive regions in an image, our approach utilizes the generator G and the discriminator D, all of which are trained with public data. After training, we aim to find the latent vector ẑ that achieves the highest likelihood under the target network while being constrained to the data manifold learned by G. However, if not properly designed, the generator may not allow the target network to easily distinguish between different latent vectors. For instance, in extreme cases, if the generated images of all latent vectors collapse to the same point in the feature space of the target network, then there is no hope to identify which one is more likely to appear in the private training set of the target network. To address this issue, we present a simple yet effective loss term to promote the diversity of the data manifold learned by G when projected to the target network's feature space. Specifically, our reconstruction process consists of two stages: (1) public knowledge distillation, in which we train the generator and the discriminators on public datasets in order to encourage the generator to generate realistic-looking images; the public datasets can be unlabeled and have no identity overlap with the private dataset; and (2) secret revelation, in which we make use of the generator obtained from the first stage and solve an optimization problem to recover the missing sensitive regions in an image. For the first stage, we leverage the canonical Wasserstein-GAN training loss, adapted to the two discriminators in our case. In addition, inspired by Yang et al. (2019a), we introduce a diversity loss term that promotes the diversity of the images synthesized by G when projected to the target network's feature space. Let F denote the feature extractor of the target network. The diversity loss L_div is then computed on the features F(G(z)) of the synthesized images. As discussed above, larger diversity will help the target network discern the generated image that is most likely to appear in its private training set. Our full objective for public knowledge distillation can be written as the Wasserstein-GAN loss plus the weighted diversity term, L = L_wgan + λ_d L_div. In the secret revelation stage, we solve the following optimization to find the latent vector that generates an image achieving the maximum likelihood under the target network while remaining realistic: ẑ = arg min_z L_prior(z) + λ_i L_id(z), where the prior loss L_prior(z) penalizes unrealistic images and the identity loss L_id(z) encourages the generated images to have high likelihood under the target network. They are defined, respectively, by L_prior(z) = −D(G(z)) and L_id(z) = −log C(G(z)), where C(G(z)) represents the probability of G(z) output by the target network.
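A minimal sketch of the secret-revelation optimization, assuming trained and frozen modules G (generator), D (discriminator score) and C (target classifier outputting softmax probabilities). The restart count, iteration count, learning rate and λ_i = 100 follow the training details reported later in the paper, while the module interfaces and z_dim are illustrative.

import torch

def secret_revelation(G, D, C, target_label, lam_i=100.0,
                      n_restarts=5, n_iters=1500, z_dim=100):
    # G, D and C are assumed frozen; only z is optimized.
    best_z, best_id_loss = None, float('inf')
    for _ in range(n_restarts):                  # random restarts on z
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=0.01, momentum=0.9)
        for _ in range(n_iters):
            x = G(z)
            l_prior = -D(x).mean()               # penalize unrealistic images
            l_id = -torch.log(C(x)[:, target_label] + 1e-12).mean()
            loss = l_prior + lam_i * l_id
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            id_loss = -torch.log(C(G(z))[:, target_label] + 1e-12).item()
        if id_loss < best_id_loss:               # keep the lowest identity loss
            best_z, best_id_loss = z.detach(), id_loss
    with torch.no_grad():
        return G(best_z)                         # reconstructed private image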
For a fixed data point (x, y), we can measure the performance of a model f for predicting the label y of feature x using the log likelihood log p_f(y|x). It is known that maximizing the log likelihood is equivalent to minimizing the cross-entropy loss, one of the most commonly used loss functions for training DNNs. Thus, throughout the following analysis, we will focus on the log likelihood as a model performance measure. Now, suppose that (X, Y) is drawn from an unknown data distribution p(X, Y). Moreover, X = (X_s, X_ns), where X_s and X_ns denote the sensitive and non-sensitive parts of the feature, respectively. We can define the predictive power of the sensitive feature X_s under the model f (or equivalently, the predictive power of model f using X_s) as the change of model performance when excluding it from the input, i.e., the gap in expected log likelihood between predicting with and without X_s. Similarly, we define the predictive power of the sensitive feature given a specific class y and non-sensitive feature x_ns, denoted U_f(x_ns, y), as the corresponding gap conditioned on y and x_ns. We now consider the measure for MI attack performance. Recall that the goal of the adversary is to guess the value of x_s given its corresponding label y, the model f, and some auxiliary knowledge x_ns. The best attack outcome is the recovery of the entire posterior distribution of the sensitive feature, i.e., p(X_s|y, x_ns). However, due to the incompleteness of the information available to the adversary, the best possible attack that the adversary can achieve under the attack model can be captured by p_f(X_s|y, x_ns) ∝ p_f(y|X_s, x_ns) p(X_s|x_ns), assuming that the adversary can have a fairly good estimate of p(X_s|x_ns). Such an estimate can be obtained by, for example, learning from public datasets using the method in Section 2.2. Although MI attack algorithms often output a single feature vector as the attack result, these algorithms can be adapted to output a feature distribution instead of a single point by randomizing the starting guess of the feature. Thus, it is natural to measure the MI attack performance in terms of the similarity between p(X_s|y, x_ns) and p_f(X_s|y, x_ns). The next theorem indicates that the vulnerability to MI attacks is unavoidable if the sensitive features are highly predictive under the model. When stating the theorem, we use the negative KL-divergence S_KL(·||·) to measure the similarity between two distributions. Theorem 1. Let f_1 and f_2 be two models such that for any fixed label y ∈ Y, U_{f_1}(x_ns, y) ≥ U_{f_2}(x_ns, y). Then, S_KL(p(X_s|y, x_ns) || p_{f_1}(X_s|y, x_ns)) ≥ S_KL(p(X_s|y, x_ns) || p_{f_2}(X_s|y, x_ns)). We defer the proof of the theorem to the supplementary material. Intuitively, highly predictive models are able to build a strong correlation between features and labels, which coincides exactly with what an adversary exploits to launch MI attacks; hence, more predictive power inevitably leads to higher attack performance. Prior work argued that a model is more vulnerable to MI attacks if it overfits data to a greater degree. Their result is seemingly contradictory with ours, because, fixing the training performance, more overfitting implies that the model has less predictive power. However, the assumption underlying their result is fundamentally different from ours, which leads to the disparities. The result in that work assumes that the adversary has access to the joint distribution p(X_s, X_ns, Y) that the private training data is drawn from, and their setup of the goal of the MI attack is to learn the sensitive feature associated with a given label in a specific training dataset. By contrast, our formulation of MI attacks is to learn the private feature distribution p(X_s|y, x_ns) for a given label y from the model parameters.
We do not assume that the adversary has prior knowledge of p(X_s, X_ns, Y), as it is an overly strong assumption for our formulation: the adversary could easily obtain p(X_s|y, x_ns) for any labels and any values of non-sensitive features when having access to the joint distribution. Dataset We evaluate our method using three datasets: the MNIST handwritten digit data (MNIST), the Chest X-ray Database (ChestX-ray8), and the CelebFaces Attributes Dataset (CelebA) containing 202,599 face images of 10,177 identities with coarse alignment. We crop the images at the center and resize them to 64×64 so as to remove most of the background. Protocol We split each dataset into two disjoint parts: one part used as the private dataset to train the target network and the other as a public dataset for prior knowledge distillation. The public data, throughout the experiments, do not have class intersection with the private training data of the target network. Therefore, the public dataset in our experiments only helps the adversary to gain knowledge about features generic to all classes and does not provide information about private, class-specific features for training the target network. This ensures the fairness of the comparison with the existing MI attack. Models We implement several different target networks with varied complexities. For all the adapted networks, we modify the FC layer to fit our task. For digit classification on MNIST, our target network consists of 3 convolutional layers and 2 pooling layers. For disease prediction on ChestX-ray8, we use an adapted ResNet-18 as our target network. For the face recognition tasks on CelebA, we use the following networks: an adapted VGG16; an adapted ResNet-152; and face.evoLVe, adapted from a state-of-the-art face recognition network. Training We split the private dataset defined above into a training set (90%) and a test set (10%) and use the SGD optimizer with learning rate 10^−2, batch size 64, momentum 0.9 and weight decay 10^−4 to train these networks. To train the GAN in the first stage of our attack pipeline, we set λ_d = 0.5 and use the Adam optimizer with learning rate 0.004, batch size 64, β_1 = 0.5, and β_2 = 0.999. In the second stage, we set λ_i = 100 and use the SGD optimizer to optimize the latent vector z with learning rate 0.01, batch size 64 and momentum 0.9. z is drawn from a zero-mean unit-variance Gaussian distribution. We randomly initialize z 5 times and optimize each round for 1500 iterations. We choose the solution with the lowest identity loss as our final latent vector. Evaluating the success of MI attacks requires assessing whether the recovered image exposes private information about a target individual. Previous works analyzed the attack performance mainly qualitatively, by visual inspection. Herein, we introduce four metrics which allow us to quantitatively judge MI attack efficacy and perform evaluation at a large scale. Peak Signal-to-Noise Ratio (PSNR) PSNR is the ratio of an image's maximum squared pixel fluctuation over the mean squared error between the target image and the reconstructed image. PSNR measures the pixel-wise similarity between two images. The higher the PSNR, the better the quality of the reconstructed image.
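For reference, PSNR as defined here can be computed as follows (a minimal sketch; the peak value depends on the image range, e.g. 1.0 for images normalized to [0, 1]):

import numpy as np

def psnr(target, reconstruction, peak=1.0):
    # 10 * log10(peak^2 / MSE) between the target and reconstructed images
    mse = np.mean((np.asarray(target) - np.asarray(reconstruction)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)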
However, oftentimes, the reconstructed image may still reveal identity information even though it is not close to the target image pixel-wise. For instance, a recovered face with different translation, scale and rotation from the target image will still incur privacy loss. This motivates the following metrics, which can evaluate the similarity between the reconstructed and the target image at a semantic level. Attack Accuracy (Attack Acc) We build an evaluation classifier that predicts the identity based on the input reconstructed image. If the evaluation classifier achieves high accuracy, the reconstructed image is considered to expose private information about the target individual. The evaluation classifier should be different from the target network because the reconstructed images may incorporate features that overfit the target network while being semantically meaningless. Moreover, the evaluation classifier should be highly performant. For the reasons above, we adopt the state-of-the-art architecture in each task as the evaluation classifier. For MNIST, our evaluation network consists of 5 convolutional layers and 2 pooling layers. For ChestX-ray8, we adapt VGG-19 as our evaluation network. For CelebA, we use a state-of-the-art face recognition model as the evaluation classifier. We first pretrain it on MS-Celeb-1M and then fine-tune it on the identities in the training set of the target network. The resulting evaluation classifier can achieve 96% accuracy on these identities. Feature Distance (Feat Dist) Feat Dist measures the l2 feature distance between the reconstructed image and the centroid of the target class. The feature space is taken to be the output of the penultimate layer of the evaluation network. K-Nearest Neighbor Distance (KNN Dist) KNN Dist looks at the shortest distance from the reconstructed image to the target class. We identify the closest data point to the reconstructed image in the training set and output their distance. The distance is measured by the l2 distance between the two points in the feature space of the evaluation classifier. Figure 3: Qualitative comparison of the proposed GMI attack with the existing MI attack (EMI) and the pure image inpainting method (PII). The ground truth target image is shown in the 1st column. We compare our approach with two baselines: (1) the existing model inversion attack (EMI), which implements the algorithm from the prior MI attack literature; for this algorithm, the adversary only exploits the identity loss for image reconstruction and returns the pixel values that minimize the identity loss; and (2) pure image inpainting (PII), which minimizes the W-GAN loss and performs image recovery based on information completely from the public dataset. For CelebA, the private set comprises 21,152 images of 1000 identities, and samples from the rest are used as a public dataset. We evaluate the attack performance in three settings: (1) the attacker does not have any auxiliary knowledge about the private image, in which case he will recover the image from scratch; (2) the attacker has access to a blurred version of the private image, and his goal is to deblur the image; (3) the attacker has access to a corrupted version of the private image wherein the sensitive, identity-revealing features (e.g., nose, mouth, etc.) are blocked. Table 1 compares the performance of our proposed GMI attack against EMI for different network architectures. We can see that EMI works poorly on the deep nets and achieves around zero attack accuracy. GMI is much more effective than EMI. Particularly, our method improves the accuracy of the attack against the state-of-the-art face.evoLVe classifier over the existing MI attack by 75% in terms of Top-5 attack accuracy. Also, note that models that are more sophisticated and have more predictive power are more susceptible to attacks.
We will examine this phenomenon in more detail in Section 4.3.3. We now discuss the case where the attacker has access to some auxiliary knowledge in terms of blurred or partially blocked images. For the latter, we consider two types of masks, center and face "T", illustrated by the second column of Figure 3(c) and (d), respectively. The center mask blocks the central part of the face and hides most of the identity-revealing features, such as eyes and nose, while the face T mask is designed to obstruct all private features in a face image. Table 2 shows that our method consistently outperforms the two baselines discussed above. Since the existing MI attack does not exploit any prior information, the inversion optimization problem is extremely ill-posed and performing gradient descent ends up at some visually meaningless local minimum, as illustrated by Figure 3. Interestingly, despite having the meaningless patterns, these images can all be classified correctly into the target label by the target network. Hence, the existing MI attack tends to generate "adversarial examples" that can fool the target network but do not exhibit any recognizable features of the private data. Figure 3 also compares our results with PII, which is completely based on the information from the public dataset to recover the private image. We can see that although PII leads to realistic recoveries, the reconstructed images do not present the same identity features as the target images. This can be further corroborated by the quantitative results in Table 2. Note that the attacks are more effective for the center mask than the face T mask. This is because the face T mask we designed completely hides the identity-revealing features on the face, while the center mask may still expose the mouth information. We have seen that distilling prior knowledge and properly incorporating it into the attack algorithm are important to the success of MI attacks. In our proposed method, the prior knowledge is gleaned from public datasets through a GAN. We now evaluate the impact of public datasets on the attack performance. We first consider the case where the public data is from the same distribution as the private data and study how the size of the public data affects the attack performance. We change the size ratio (1:1, 1:4, 1:6, 1:10) of the public over the private data by varying the number of identities in the public dataset. As shown in Table 3, the attack performance varies by less than 7% when shrinking the public data size by 10 times. Moreover, we study the effect of the distribution shift between the public and private data on the attack performance. We train the GAN on the PubFig83 dataset, which contains 13,600 images with 83 identities, and attack the target network trained on CelebA. There are more faces with sunglasses in PubFig83 than in CelebA, which makes it harder to distill generic face information. Without any pre-processing, the attack accuracy drops by more than 20%, despite still outperforming the existing MI attack by a large margin. To further improve the reconstruction quality, we detect landmarks in the face images, rotate the images such that the eyes lie on a horizontal line, and crop the faces to remove the background. These pre-processing steps make the public datasets better present the face information, thus improving the attack accuracy significantly. We perform experiments to validate the connection between predictive power and the vulnerability to MI attacks.
We measure the predictive power of a sensitive feature under a model using the difference of model testing accuracy based on all features and just non-sensitive features. We consider the following different ways to construct models with increasing feature predictive powers, namely, enlarging the training size per class, adding dropout regularization, and performing batch normalization. For the sake of efficiency, we slightly modify the proposed method in Section 2.2 in order to avert re-training GANs for different architectures. Specifically, we exclude the diversity loss from the attack pipeline so that multiple architectures can share the same GAN for prior knowledge distillation. Figure 4 shows that, in general, the attack performance will be better for models with higher feature predictive powers. Moreover, this trend is consistent across different architectures. Figure 5: Visualization of the recovered input images by the GMI and the EMI attack. For MNIST, we use all 34,265 images with labels 5, 6, 7, 8, 9 as the private set, and the rest of the 35,725 images with labels 0, 1, 2, 3, 4 as a public dataset. Note that the labels in the private and public data have no overlap. We augment the public data by training an autoencoder and interpolating in the latent space. Our GMI attack is compared with the baseline in Table 4. We omit the PII baseline because the public and private sets defined in this experiment are rather disparate and PII essentially produces close to random guesses. We can see from the table that the performance of GMI is significantly better than that of EMI. Examples of the recovered images with both attacks are compared in Figure 5. For ChestX-ray8, we use 10,000 images of seven classes as the private data and another 10,000 with different labels as public data. The GMI and EMI attacks are compared in Table 5. Again, the GMI attack outperforms the EMI attack by a large margin. We investigate the implications of DP for MI attacks. (ε, δ)-DP is ensured by adding Gaussian noise to clipped gradients in each training iteration. We find it challenging to produce useful face recognition models with DP guarantees due to the complexity of the task. Therefore, we turn to a simpler dataset, MNIST, which is commonly used in differentially private ML studies. We set δ = 10^−5 and vary the noise scale to obtain target networks with different ε. The attack performance against these target networks and their utility are illustrated in Figure 4(d). Since the attack accuracy of the GMI attack on differentially private models is higher than that of PII, which fills missing regions completely based on the public data, it is clear that the GMI attack can expose private information from differentially private models, even with stringent privacy guarantees like ε = 0.1. Moreover, varying differential privacy budgets helps little to protect against the GMI attack; sometimes, a more stringent privacy budget even improves the attack performance (e.g., changing ε from 1 to 0.1). This is because DP, in its canonical form, only hides the presence of a single instance in the training set. Limiting the learning of specific individuals may facilitate the learning of generic features of a class, which, in turn, helps to stage MI attacks. In this paper, we present a generative approach to MI attacks, which can achieve state-of-the-art success rates for attacking DNNs with high-dimensional input data.
The idea of our approach is to extract generic knowledge from public datasets via a GAN and use it to regularize the inversion problem. Our experimental results show that our proposed attack is highly performant even when the public datasets do not include the identities that the adversary aims to recover, are unlabeled, have small sizes, or come from a different distribution than the private data. We also provide theoretical analysis showing the fundamental connection between a model's predictive power and its vulnerability to inversion attacks. For future work, we are interested in extending the attack to the black-box setting and studying effective defenses against MI attacks. A PROOF OF THEOREM 1 Theorem 2. Let f_1 and f_2 be two models such that for any fixed label y ∈ Y, U_{f_1}(x_ns, y) ≥ U_{f_2}(x_ns, y). Then, S_KL(p(X_s|y, x_ns) || p_{f_1}(X_s|y, x_ns)) ≥ S_KL(p(X_s|y, x_ns) || p_{f_2}(X_s|y, x_ns)). Proof. We can expand the KL divergence D_KL(p(X_s|y, x_ns) || p_{f_1}(X_s|y, x_ns)) as follows. Thus, the claim follows. B EXPERIMENTAL DETAILS B.1 NETWORK ARCHITECTURE The detailed architectures for the two encoders, the decoder of the generator, the local discriminator, and the global discriminator are presented in Table 6, Table 7, Table 8, Table 9, and Table 10, respectively. We consider three target networks: (1) an adapted LeNet, which has three convolutional layers, two max pooling layers and one FC layer; (2) SimpleCNN, which has five convolutional layers, each followed by a batch normalization layer and a leaky ReLU layer; (3) SoftmaxNet, which has only one FC layer. We split the MNIST dataset into the private set used for training target networks, with digits 0∼4, and the public set used for distilling prior knowledge, with digits 5∼9. The target network is implemented as a multilayer perceptron with 2 hidden layers, which have 512 and 256 neurons, respectively. The evaluation classifier is a convolutional neural network with three convolutional layers, followed by two fully-connected layers. It is trained on the entire MNIST training set and can achieve 99.2% accuracy on the MNIST test set. Differential privacy of the target networks is guaranteed by adding Gaussian noise to each stochastic gradient descent step. We use the moments accountant technique to keep track of the privacy budget spent during training. During the training of the target networks, we set the batch size to be 256. We fix the number of epochs to be 40 and clip the L2 norm of the per-sample gradient to be bounded by 1.5. We set the ratio between the noise scale and the gradient clipping threshold to be 0, 0.694, 0.92, 3, and 28, respectively, to obtain target networks with ε = ∞, 9.89, 4.94, 0.98, 0.10 when δ = 10^−5. For the model with ε = 0.1, we use SGD with a small learning rate 0.01 to ensure stable convergence; otherwise, we set the learning rate to be 0.1.
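A minimal sketch of the differentially private update just described: per-sample gradients clipped to L2 norm 1.5, followed by Gaussian noise at the stated noise-to-clipping ratio. Per-sample gradients are computed naively, one example at a time, for clarity; the optimizer and loss interfaces are illustrative, and the privacy accounting (moments accountant) is not shown.

import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, opt,
                clip_norm=1.5, noise_ratio=0.92):
    params = [p for p in model.parameters() if p.requires_grad]
    accum = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):        # naive per-sample gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for a, g in zip(accum, grads):        # clip L2 norm to clip_norm
            a += g * scale
    sigma = noise_ratio * clip_norm           # noise scale / clipping threshold
    for p, a in zip(params, accum):
        p.grad = (a + sigma * torch.randn_like(a)) / len(batch_x)
    opt.step()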
The architecture of the generator in Section B.1 is tailored to the MNIST dataset. We reduce the number of input channels, change the size of the kernels, and modify the layers of the discriminators to be compatible with the shape of the MNIST data. To train the GAN in the first stage of our GMI attack, we set the batch size to be 64 and use the Adam optimizer with learning rate 0.004, β_1 = 0.5, and β_2 = 0.999. For the second stage, we set the batch size to be 64 and use SGD with Nesterov momentum, with learning rate 0.01 and momentum 0.9. The optimization is performed for 1500 iterations. The center mask depicted in the main text is used to block the central part of digits. We report the attack accuracy averaged across 640 randomly sampled images from the private set and 5 random initializations of the latent vector for each sampled image.
We develop a privacy attack that can recover the sensitive input data of a deep net from its output
1,429
scitldr
Latent space based GAN methods and attention based encoder-decoder architectures have achieved impressive results in text generation and unsupervised NMT, respectively. Leveraging the two domains, we propose an adversarial latent space based architecture capable of generating parallel sentences in two languages concurrently and translating bidirectionally. The bilingual generation goal is achieved by sampling from the latent space that is adversarially constrained to be shared between both languages. First an NMT model is trained, with back-translation and an adversarial setup, to enforce a shared latent state between the two languages. The encoder and decoder are shared for the two translation directions. Next, a GAN is trained to generate 'synthetic' code mimicking the languages' shared latent space. This code is then fed into the decoder to generate text in either language. We perform our experiments on the Europarl and Multi30k datasets, on the English-French language pair, and document our performance using both supervised and unsupervised NMT. Neural machine translation (NMT) and neural text generation (NTG) are among the pool of successful NLP tasks handled by neural approaches. For example, NMT has achieved close to human-level performance using sequence to sequence models, which try to solve the translation problem end-to-end. NTG techniques can be categorized into three classes: maximum likelihood estimation based, GAN-based and reinforcement learning (RL)-based. Recently, researchers have extensively used GANs BID8 as a potentially powerful generative model for text BID32, because of their great success in the field of image generation. Inspired by human bilingualism, this work proposes a Bilingual-GAN agent, capable of deriving a shared latent space between two languages, and then leveraging that shared space in translation and text generation in both languages. Currently, in the literature, neural text generation (NTG) and NMT are treated as two independent problems; however, we believe that they are two sides of the same coin and could be studied jointly. Emerging latent variable-based techniques can facilitate unifying NTG and NMT, and the proposed Bilingual-GAN will be a pioneering attempt in this direction. Learning a latent space manifold via adversarial training has gained a lot of attention recently BID21; text generation and unsupervised NMT BID15 are among these examples where autoencoder (AE) latent space manifolds are learned adversarially. For NTG, in the Adversarially Regularized Autoencoders (ARAE) work, a critic-generator-autoencoder combo is proposed to tackle the non-differentiability problem arising due to the discrete nature of text. The ARAE approach is to learn the continuous manifold of the autoencoder latent space and generate samples from it instead of directly synthesizing discrete (text) outputs. Output text is then reconstructed by the decoder from the generated latent samples, similarly to the autoencoding process. Adversarial learning of autoencoders' latent manifold has also been used for unsupervised NMT BID15 BID17 BID30 BID1. In BID15, a single denoising autoencoder is trained to derive a shared latent space between two languages using different loss functions. One of their objectives adversarially enforces the latent spaces generated by the encoders of the different languages to become shared and difficult to tell apart. Other objectives are autoencoder reconstruction measures and a cross-domain cost closely related to back-translation BID24 terms.
The contribution of this paper is to propose a latent space based architecture as a bilingual agent handling text generation and machine translation simultaneously. We demonstrate that our method works even when using complex multi-dimensional latent representations with attention based decoders, which weren't used in previous work. 2 RELATED WORK 2.1 LATENT SPACE BASED UNMT Neural machine translation BID10 BID26 BID27 constitutes the state-of-the-art in translation tasks for the majority of language pairs. On the unsupervised side, a few works BID15, BID0, BID16 have emerged recently to deal with neural machine translation without using parallel corpora, i.e. sentences in one language have no matching translations in the other language. They all have a similar approach to unsupervised neural machine translation (UNMT) that uses an encoder-decoder pair sequence-to-sequence model that is shared between the languages while trying to find a latent space common to both languages. They all make use of back-translation BID24, needed for the unsupervised part of the training. BID15 use a word-by-word translation dictionary learned in an unsupervised way BID5 as part of their back-translation, along with an adversarial loss to enforce language independence in the latent code space. They later improve their model BID16 by removing these two elements and instead using a BPE sub-word tokenization BID23 with embeddings learned using FastText BID3 so that the sentences are embedded in a common space. BID0 have a similar flavour but use some cross-lingual embeddings to embed sentences in a shared space. They also decouple the decoder so that one is used per language. Researchers have conventionally utilized the GAN framework in image applications BID20 with great success. Inspired by this success, a number of works have used GANs in various NLP applications such as machine translation BID28 BID29, dialogue models, question answering BID31, and natural language generation BID9. However, applying GANs in NLP is challenging due to the discrete nature of text. Consequently, back-propagation would not be feasible for discrete outputs and it is not straightforward to pass the gradients through the discrete output words of the generator. A latent code-based solution for this problem was proposed in the ARAE work, where a latent representation of the text is derived using an AE and the manifold of this representation is learned via adversarial training of a generator. Another version of the ARAE method, with an encoder updated based on the discriminator loss function, was also introduced in BID25. The Bilingual-GAN comprises two main components: a translation unit and a text generation unit. The complete architecture is described in Figure 1. Figure 1: The complete architecture for our Bilingual-GAN. The middle left rectangle unit represents the text generation unit and the remaining part represents the translation unit. The translation system is a sequence-to-sequence model with an encoder and a decoder extended to support two languages. This first translation component is inspired by the unsupervised neural machine translation system of BID15. We have one corpus in language 0 and another in language 1 (they need not be translations of each other), and an encoder and a decoder shared between the two languages. The loss function which is used to compare two sentences is the same as the standard sequence-to-sequence loss: the token-wise cross-entropy loss between the sentences, which we denote by ∆(sentence 1, sentence 2).
For our purposes, let $s_{l_i}$ be a sentence in language $i$, with $i \in \{0, 1\}$. Denote by $\mathrm{enc}(s_{l_i})$ the encoding of sentence $s_{l_i}$, using the word embeddings of language $i$ to embed the input sentence. Similarly, denote by $\mathrm{dec}(x, l_i)$ the decoding of the code $x$ (typically an output of the encoder) into language $l_i$, using the word embeddings of the target language $i$ to convert codes into words. The system is then trained with three losses that allow the encoder-decoder pair to reconstruct inputs (reconstruction loss), to translate correctly (cross-domain loss), and the encoder to produce language-independent codes (adversarial loss). The losses are applied on every batch for both languages.

Reconstruction Loss. This is the standard autoencoder loss, which aims to reconstruct the input:

$\mathcal{L}_{auto} = \mathbb{E}_{s_{l_i}} \big[ \Delta\big(\mathrm{dec}(\mathrm{enc}(\mathrm{noise}(s_{l_i})), l_i),\; s_{l_i}\big) \big]$

This loss is illustrated in Figure 2.

Cross-Domain Loss. This loss enables translation of inputs and is similar to back-translation BID24. For this loss, denote by $\mathrm{transl}(s_{l_i})$ the translation of sentence $s_{l_i}$ from language $i$ to language $1-i$. The implementation of the translation function is explained in subsection 3.1.1, where we address supervision.

$\mathcal{L}_{cd} = \mathbb{E}_{s_{l_i}} \big[ \Delta\big(\mathrm{dec}(\mathrm{enc}(\mathrm{noise}(\mathrm{transl}(s_{l_i}))), l_i),\; s_{l_i}\big) \big] \qquad (1)$

In this loss, we first translate the original sentence $s_{l_i}$ into the other language and then check whether we can recreate the original sentence in its original language. This loss is also illustrated in Figure 2.

Adversarial Loss. This loss enforces the encoder to produce language-independent codes, which is believed to help in decoding into either language; it is defined adversarially. Let $D$ be a discriminator, where $D(c)$ is a prediction of the language of the sentence that was used to create code $c$ (typically the output of an encoder): 0 if the sentence is in language 0, and 1 if it is in language 1. We thus have for the discriminator $D$

$\mathcal{L}_{D} = -\,\mathbb{E}_{s_{l_i}} \big[ \log P_D\big(i \mid \mathrm{enc}(s_{l_i})\big) \big]$

and for its adversary, the encoder, the opposite:

$\mathcal{L}_{adv} = -\,\mathbb{E}_{s_{l_i}} \big[ \log P_D\big(1-i \mid \mathrm{enc}(s_{l_i})\big) \big]$

Input Noise. To prevent the encoder-decoder pair from learning the identity function, and to make the pair more robust, noise is added to the input of the encoder. This is illustrated in Figure 2, where "+ noise" appears atop the arrows feeding into the encoder. On the input sentences, the noise takes the form of random word drops (with probability 0.1) and random shuffling that moves each word by at most 3 positions. This is the same noise scheme used by BID15. We also add Gaussian noise with mean 0 and standard deviation 0.3 to the input of the decoder.

[Figure 2: The translation unit of the Bilingual-GAN.]

Recall that in the cross-domain loss above, Equation (1), the translation function $\mathrm{transl}(s_{l_i})$ is used to translate the sentence $s_{l_i}$ from language $i$ to language $1-i$. In fact, the choice of this function directly determines the amount of supervision in the trained model. Notice that only $s_{l_i}$ and $\mathrm{transl}(s_{l_i})$ are used in the losses. If the translation function is a lookup in a word-by-word translation dictionary learned in an unsupervised fashion, as in BID5, then the whole system is trained in an unsupervised manner, since no ground-truth information about $s_{l_i}$ is used. After a couple of epochs, the encoder-decoder model should be good enough to move beyond simple word-by-word translation, so the translation function can then be switched to using the model itself to translate input sentences. This is what is done in BID15, where the translation function is changed from word-by-word to model prediction after 1 epoch.
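To make the three objectives concrete, here is a minimal PyTorch-style sketch of the losses above. The module names (`encoder`, `decoder`, `critic`) and their signatures are assumptions for illustration, not from the paper's released code; the decoder is assumed to run with teacher forcing on the target sentence.

```python
import torch
import torch.nn.functional as F

def seq_xent(logits, targets, pad_id=0):
    """Token-wise cross-entropy Delta(., .), ignoring padding."""
    return F.cross_entropy(logits.transpose(1, 2), targets, ignore_index=pad_id)

def add_noise(tokens, drop_p=0.1, max_shuffle=3):
    """Word drops (p = 0.1) and local shuffling moving words <= 3 positions."""
    kept = tokens[:, torch.rand(tokens.size(1)) > drop_p]
    pos = torch.arange(kept.size(1), dtype=torch.float)
    perm = torch.argsort(pos + max_shuffle * torch.rand(kept.size(1)))
    return kept[:, perm]

def translation_losses(s, lang, transl, encoder, decoder, critic):
    # Reconstruction: denoise-and-reconstruct in the same language.
    c = encoder(add_noise(s), lang)
    l_auto = seq_xent(decoder(c, lang, s), s)
    # Cross-domain: translate to the other language, then recover the original.
    c_bt = encoder(add_noise(transl(s, lang)), 1 - lang)
    l_cd = seq_xent(decoder(c_bt, lang, s), s)
    # Adversarial: the critic guesses the source language; the encoder fools it.
    p1 = critic(c.detach())
    l_disc = F.binary_cross_entropy(p1, torch.full_like(p1, float(lang)))
    p1_enc = critic(c)
    l_adv = F.binary_cross_entropy(p1_enc, torch.full_like(p1_enc, float(1 - lang)))
    return l_auto, l_cd, l_disc, l_adv
```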
In our case, we obtain the word-by-word translation lookup table by taking each word in the vocabulary and looking up the closest word in the other language in the multilingual embedding space created by BID6. If the translation function is able to get the ground-truth translation of the sentence, for example when we have an aligned dataset, then $\mathrm{transl}(s_{l_i}) = s_{l_j}$, which is encoded and decoded into the original language $i$ and compared with $s_{l_i}$, giving the usual supervised neural machine translation loss. Note, however, that this supervision is only one way, since we learn to predict in language $i$ given a sentence in language $j$. We refer to this level of supervision as Half-Supervised in our results section. In order to have supervision both ways, one needs both $s_{l_i}$ and $s_{l_j}$ in the training corpus; this is what we refer to as the Supervised level.

There are a few choices for embedding the sentence words before feeding them into the encoder. We experiment with several and show the results in Section 4.3. In particular, we use randomly initialized embeddings, embeddings trained with FastText BID3, and both pretrained and self-trained cross-lingual embeddings BID6.

Here we give the exact specifications and training optimizers for the translation part of the Bilingual-GAN. The embeddings have size 300; the encoder consists of either 1 or 2 layers of 256 bidirectional LSTM cells; the decoder is equipped with attention and consists of a single layer of 256 LSTM cells. The discriminator, when the adversarial loss is present, is a standard feed-forward neural network with 3 layers of 1024 cells with ReLU activations and one output layer of one cell with sigmoid activation. We used Adam with $\beta_1 = 0.5$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$ and a learning rate of 0.0003 to train the encoder and the decoder, whereas we used RMSProp with a learning rate of 0.0005 to train the discriminator. Most of the specifications here were taken from BID15.

First, we pre-train our NMT system (Section 3.1). The NMT system learns a shared latent space $(c_x, c_y)$ for the two language directions; this shared latent space is enforced by a GAN setup between a critic and the encoders, and through back-translation BID24. Then, a bilingual generator is trained adversarially to learn the manifold of the shared latent space $(c_x, c_y)$ learned by the NMT system. It is trained, similarly to a modified version of ARAE BID25, to generate codes $\hat{c}$ which mimic samples from the shared latent space. Once GAN training is finished, the decoders of the NMT system can be used to generate parallel bilingual sentences by decoding the generator output code $\hat{c}$. The proposed bilingual generator is a GAN BID8 trained to learn the hidden-state manifold of the RNN-based encoder, as in ARAE. We used the Wasserstein GAN with gradient penalty (WGAN-GP) BID9 approach in our experiments:

$\mathcal{L} = \mathbb{E}_{\hat{c}\sim P_g}[D(\hat{c})] - \mathbb{E}_{c\sim P_r}[D(c)] + \lambda\, \mathbb{E}_{\tilde{c}\sim P_{\tilde{c}}}\big[(\lVert \nabla_{\tilde{c}} D(\tilde{c}) \rVert_2 - 1)^2\big]$

where $\tilde{c} = \epsilon c + (1-\epsilon)\hat{c}$, with $\epsilon \sim U[0,1]$, is a random latent code obtained by sampling uniformly along the line connecting pairs of generated code and encoder output. $P_r$ is the distribution of the encoder output data, $c$ represents the latent 'code' (the latent space representation of the input text), $P_g$ is the distribution of the generated output data, $\hat{c}$ represents the generated code representations, and $\lambda$ is the gradient penalty coefficient. We used $\lambda = 10$. In order to train the GAN, we used the encoder output of our NMT system as the 'real' code. The encoder output is a latent state-space matrix which captures all the hidden states of the LSTM encoder.
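For reference, here is a compact sketch of the WGAN-GP critic objective above, with the gradient penalty computed on codes interpolated between encoder outputs and generated codes. The `critic` module and the code shapes are placeholders for the paper's convolutional network.

```python
import torch

def wgan_gp_critic_loss(critic, real_code, fake_code, lam=10.0):
    """WGAN-GP critic loss: E[D(fake)] - E[D(real)] + lam * gradient penalty.

    `critic` maps a code matrix to one scalar score per sample; shapes stand
    in for the paper's 5-conv/1-linear discriminator.
    """
    eps = torch.rand(real_code.size(0), *([1] * (real_code.dim() - 1)),
                     device=real_code.device)
    # Interpolate uniformly along the line between real and generated codes.
    c_tilde = (eps * real_code + (1 - eps) * fake_code).requires_grad_(True)
    d_tilde = critic(c_tilde)
    grads = torch.autograd.grad(outputs=d_tilde.sum(), inputs=c_tilde,
                                create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return critic(fake_code).mean() - critic(real_code).mean() + lam * penalty
```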
We then generate noise which is fed into a generator neural network, comprising 1 linear layer and 5 convolutional layers, to produce a 'mimicked' or 'fake' code matrix. The 'real' code and the fake code are then fed into the discriminator neural network, which also consists of 5 convolutional layers and 1 linear layer. The discriminator output is used to calculate the generator and discriminator losses, which are optimized using Adam BID12. Unlike the GAN update in BID9, we use 1 discriminator update per generator update; we observed that increasing the number of discriminator updates per generator update did not improve model training. In one training iteration, we feed both an English and a French sentence to the encoder and produce two real codes. We generate one fake code using the generator, calculate losses against both real codes, and average the two losses. Although the NMT system is trained to align the latent spaces, so that we could use just one language to train the GAN, we use both real codes to reduce any biases in our NMT system. We train our GAN in both the supervised and unsupervised NMT scenarios. In the supervised scenario, we feed English and French parallel sentences in each training iteration; in the unsupervised scenario, we ensure the sentences are not parallel. Once the GAN is trained, the generator code can be decoded into either language using the pre-trained decoders of the NMT system.

In latent space based text generation, where the LSTM based encoder-decoder architectures do not use attention, a single code vector that summarizes the entire hidden sequence is generally employed. A variant of this approach uses global mean pooling to produce a representative encoding BID22. We take advantage of our attention based architecture and our bidirectional encoder to concatenate the forward and backward latent states depth-wise and produce a code matrix which can be attended to by our decoder. The code matrix is obtained by concatenating the latent codes of all time steps; consequently, the generator tries to mimic the entire concatenated latent space. We found that this richer representation improves the quality of our sentence generation.

This section presents the different experiments we ran, on both translation and generation, and the datasets we worked on. The Europarl and Multi30k datasets were used in our experiments. The Europarl dataset is part of the WMT 2014 aligned corpora BID13, while the Multi30k dataset is one used for a captioning task BID7 and consists of images and their captions. We only use the English-French pair. As preprocessing steps on the Europarl dataset, we removed sentences longer than 20 words and those where the ratio of the number of words between translations is bigger than 1.5; we then tokenize the sentences using the Moses tokenizer BID14. For the Multi30k dataset, we use the supplied tokenized version with no further processing. For the BPE experiments, we use the SentencePiece subword tokenizer by Google BID23. BPE is a subword tokenization method used to segment sentences into subword tokens; consequently, the decoder also predicts subword tokens. This results in a common embeddings table for both languages, since English and French share the same subwords. The BPE was trained on the training corpora that we created. For the training, validation and test splits, we used 200k randomly chosen sentences from the Europarl dataset for training and 40k sentences for testing.
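A schematic training iteration following this description: one critic update per generator update, with the critic loss averaged over the English and French real codes. It reuses the `wgan_gp_critic_loss` sketch above; the optimizer setup and module interfaces are assumed.

```python
import torch

def gan_step(encoder, generator, critic, opt_g, opt_d, en_batch, fr_batch,
             noise_dim=100):
    """One Bilingual-GAN iteration (module names/signatures are illustrative)."""
    z = torch.randn(en_batch.size(0), noise_dim)
    fake = generator(z)
    real_en = encoder(en_batch, lang=0).detach()   # English 'real' code
    real_fr = encoder(fr_batch, lang=1).detach()   # French 'real' code
    # Critic update: WGAN-GP loss averaged over the two languages.
    d_loss = 0.5 * (wgan_gp_critic_loss(critic, real_en, fake.detach())
                    + wgan_gp_critic_loss(critic, real_fr, fake.detach()))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: raise the critic's score of the mimicked code.
    g_loss = -critic(generator(z)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```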
When creating the splits for unsupervised training, we make sure that the sentences taken in one language have no translations in the other language's training set, by randomly choosing different sentences for each with no overlap. For the validation set in that case, we chose 80k sentences. In the supervised case, we randomly choose the same sentences in both languages, with a validation set of 40k. For the Multi30k dataset, we use 12,850 and 449 sentences for training and validation respectively in the unsupervised case, and the whole provided split of 29k and 1,014 sentences for training and validation respectively in the supervised case. In both cases, the test set is the provided 1k-sentence Flickr 2017 split. For the hyperparameter search phase, we chose a vocabulary size of 8k for Europarl (the most common words appearing in the training corpora), and for the final experiments with the best hyperparameters we worked with a vocabulary size of 15k. For Multi30k, we used the 6,800 most common words as vocabulary.

Translation BLEU. We calculate the BLEU-N score according to the following equation BID19:

$\mathrm{BLEU\text{-}N} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{N} w_n \log p_n\Big)$

where $p_n$ is the n-gram precision and $w_n = \frac{1}{N}$. The brevity penalty BP is set to 1, as we translate fixed-length sentences in both directions. We report BLEU-4 results in Tables 1 and 4.

Generation BLEU. We also use BLEU-N scores to evaluate the generated sentences. Here, we set BP to 1 as there are no reference lengths as in machine translation. The results are described in Table 2. For these evaluations, we generated 40,000 sentences for the model trained on Europarl and 1,000 for the model trained on Multi30k.

Perplexity is used to evaluate the fluency of the generated sentences. For the perplexity evaluations, we generated 100,000 and 10,000 sentences for the Europarl and Multi30k datasets respectively. The forward and reverse perplexities of the language models trained with maximum sentence lengths of 20 and 15, using the Europarl and the Multi30k datasets respectively, are reported in Table 4. The forward perplexities are calculated by training an RNN language model BID33 on real training data and evaluating on the generated samples; this measure describes the fluency of the synthetic samples. We also calculate reverse perplexities by training an RNNLM on the synthetic samples and evaluating on the real test data. The results are shown in Table 4.

Many hyperparameters were used in our experiments, and to keep the results table compact we abbreviate a few; we first explain the shorthands before discussing the results. The levels of supervision were explained in Section 3.1.1. MTF stands for "model translation from" and is the epoch at which we stop using the transl function and instead start using the model. NC stands for a new concatenation method we used to combine the bidirectional encoder output: either we concatenate the forward and backward states lengthwise, getting twice as many output vectors as the sentence length, each of dimension equal to the number of encoder cells (old), or depthwise, getting the same number of output vectors as the sentence length but with each vector twice the size of the number of encoder cells (new). FastText refers to the use of FastText BID3 to train our embeddings; Xlingual refers to the use of cross-lingual or multilingual embeddings using BID6, either trained on our own (Self-Trained) or using the pretrained (Pretrain.) ones; and BPE refers to the use of subword tokenization BID23 with the tokens and the embeddings learned as in BID23.
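A small self-contained implementation of the BLEU-N equation above, with BP fixed to 1 and uniform weights $w_n = 1/N$, assuming a single reference per candidate (real evaluations typically add smoothing and corpus-level aggregation):

```python
import math
from collections import Counter

def bleu_n(candidate, reference, n_max=4):
    """BLEU-N with brevity penalty fixed to 1, as in the paper's setup.
    `candidate` and `reference` are token lists; one reference is assumed."""
    log_precisions = []
    for n in range(1, n_max + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        p_n = overlap / max(sum(cand.values()), 1)
        if p_n == 0:      # avoid log(0); real implementations smooth instead
            return 0.0
        log_precisions.append(math.log(p_n))
    return math.exp(sum(log_precisions) / n_max)  # w_n = 1/N, BP = 1

# Example: bleu_n("the cat sat down".split(), "the cat sat down".split()) == 1.0
```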
NoAdv refers to not using the adversarial loss during training, i.e., we do not enforce language independence in the code space through the adversarial loss. 2Enc refers to using a 2-layer bidirectional LSTM encoder with 256 cells per layer. This part of the results focuses on the scores obtained while training the neural machine translation system. Lines 4 and 6 show that removing the adversarial loss helps the model. This is probably what motivated the removal of the adversarial loss in BID16; it is possible that the reconstruction and cross-domain losses are enough to enforce a language-independent code space. Lines 5 and 6 show that using 2 layers for the encoder is beneficial, but that was to be expected. Lines 6 and 7 show that the new concatenation method improved upon the model: a small change for a small improvement, which may be explained by the fact that both the forward and the backward states are combined and explicitly represent each word of the input sentence, rather than having first only the forward states and then only the backward states. Surprisingly, BPE gave a bad score on English to French (line 8). We think this may be due to French being a harder language than English, but the score difference is too big for that alone to explain it; further investigation is needed. Line 10 shows good results with trainable FastText embeddings trained on our training corpora. Perhaps using pre-trained ones would be better, in a similar fashion to how pretrained cross-lingual embeddings helped over the self-trained ones in lines 11 and 13. Lines 11 and 14 also show the importance of letting the embeddings change during training instead of fixing them.

We evaluated text generation both on the fluency of the sentences in English and French and on the degree to which concurrently generated sentences are valid translations of each other. We fixed the generated sentence length to a maximum of 20 while training on Europarl and to a maximum of 15 while training on Multi30k. We measured performance in both the supervised and unsupervised scenarios: the supervised scenario uses a pre-trained NMT trained on parallel sentences, while the unsupervised one uses a pre-trained NMT trained on monolingual corpora. Generation BLEU scores are measured using the two test sets; the results are described in Table 2 (generation BLEU scores for text generation on the Europarl and Multi30k datasets, English and French). We can note that the English sentences have a higher BLEU score, which could reflect a bias from our NMT. We also note lower BLEU scores for Multi30k because of the smaller test size. Perplexity results are described in Table 4. The perplexities of the LMs using real data are 140.22 (En) and 136.09 (Fr) for Europarl, and 59.29 (En) and 37.56 (Fr) for Multi30k, reported in the F-PPL column. From the tables, we note that models with lower forward perplexities (higher fluency) for the synthetic samples tend to have higher reverse perplexities. This is because the LMs are trained on synthetic sentences, which may be ungrammatical, giving higher reverse perplexities on real test data. Also, forward perplexities lower than those of the real data for the Bilingual-GAN generated sentences might indicate that the generated sentences have less diversity. Translation BLEU score is used to evaluate the ability of our GAN to generate parallel sentences. However, we need access to a reference set to measure the BLEU score, so we use Google Translate to translate English sentences to French and vice versa.
We used the sentences generated by our Bilingual-GAN as the candidate set, and the Google translations are used as the reference set. We measure BLEU scores on 1,000 sentences for each dataset and for the supervised and unsupervised models. The BLEU scores are shown in Table 5. We perform well on the Multi30k dataset, especially in the supervised scenario. Our BLEU scores are lower on the Europarl dataset; however, we get slightly higher scores for the unsupervised model compared to the supervised one. Compared to conventional NMT systems trained on these datasets, our BLEU scores are lower. However, generating parallel sentences with the proposed Bilingual-GAN is a novel approach, and these numbers can serve as a benchmark for future research.

[Table 5: Examples of aligned generated sentences (English / French); the flattened source also contains a stray French sample, "le débat est clos.", whose pairing was lost in extraction.
Europarl Unsupervised:
"i have no need to know that it has been adopted in a democratic dialogue." / "je n'ai pas besoin de ce qu'il a été fait en justice."
"written statements (amendment)" / "explications de vote: voir procès-verbal"
"that is what is the case of the european commission's <unk>." / "c'est le cas qui suppose de la <unk> de la commission."
Multi30k Supervised:
"a child in a floral pattern, mirrored necklaces, walking with trees in the ." / "un enfant avec un mannequin, des lunettes de soleil, des cartons, avec des feuilles."
"two people are sitting on a bench with the other people." / "deux personnes sont assises sur un banc et de la mer."
"a man is leaning on a rock wall." / "un homme utilise un mur de pierre."
Multi30k Unsupervised:
"three people walking in a crowded city." / "trois personnes marchant dans une rue animée."
"a girl with a purple shirt and sunglasses are eating." / "un homme et une femme mange un plat dans un magasin local."
"a woman sleeping in a chair with a graffiti lit street." / "une femme âgée assise dans une chaise avec une canne en nuit."]

Subjective judgments of the sentences generated by the models trained on the Europarl and Multi30k datasets, with maximum sentence lengths of 20 and 15 respectively, are reported in Table 6. We took 25 randomly generated sentences from each model and gave them to a group of 4 people, asking them to rate the sentences on a 5-point Likert scale according to their fluency. The raters were asked to score 1 for gibberish, 3 for understandable but ungrammatical, and 5 for naturally constructed and understandable sentences BID22. From Table 6 (human evaluation of the sentences generated by Bilingual-GAN on the Europarl and Multi30k datasets), we note that the proposed Bilingual-GAN approach receives good ratings, with the supervised approach rated better than the unsupervised one. Examples of aligned generated sentences are shown in Table 5.

Our work proposed a novel method combining neural machine translation with word-based adversarial language generation to generate bilingual, aligned sentences. This work demonstrates the deep common ground between language (text) generation and translation, which had not been studied before. We also explored learning a large code space comprising the hidden states of an RNN over the entire sequence length. The results are promising and motivate a few improvements, such as improving the quality of the generated sentences and eliminating language-specific performance degradation.
Finally, various generation methods, including reinforcement learning based, code-based, text-based and mixed methods, can be incorporated into the proposed framework to improve the performance of bilingual text generation. Since during language generation our learned code space favors English sentences over French sentences, we need to remove language-specific biases or explore disentangling the code space into language-specific and language-agnostic subspaces.
We present a novel method for Bilingual Text Generation producing parallel concurrent sentences in two languages.
1,430
scitldr
As Artificial Intelligence (AI) becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations for its behavior is one of the key requirements of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior. These approaches, however, fail to consider the mental workload needed to understand the received explanation. In other words, the human teammate is expected to understand any explanation provided, often before the task execution, no matter how much information is presented in the explanation. In this work, we argue that an explanation, especially a complex one, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reduces the mental workload of humans. However, a challenge here is that the different parts of an explanation may be dependent on each other, which must be taken into account when generating online explanations. To this end, we present a general formulation of online explanation generation, along with three different implementations satisfying different online properties. We base our explanation generation method on a model reconciliation setting introduced in our prior work. Our approaches are evaluated both with human subjects in a standard planning competition (IPC) domain, using the NASA Task Load Index (TLX), as well as in simulation with ten different problems across two IPC domains.

As intelligent robots become more prevalent in our lives, the interaction of these AI agents with humans becomes more frequent and essential. One of the most important aspects of human-AI interaction is for the AI agent to provide explanations to convey the reasoning behind the robot's decision-making BID0. An explanation provides justifications for the agent's intent, which helps the human maintain trust of the robotic peer as well as a shared situation awareness BID1, BID2. Prior work on explanation generation often focuses on supporting the motivation for the agent's decision while ignoring the underlying requirements for the recipient to understand the explanation BID3, BID4, BID5. However, a good explanation should be generated in a lucid fashion from the recipient's perspective BID6. To address this challenge, the agent should consider the discrepancies between the human's model and its own while generating explanations. In our prior work BID6, we encapsulate such inconsistencies as model differences. An explanation then becomes a request to the human to adjust the model differences in his mind so that the robot's behavior would make sense in the updated model, which is used to produce the human's expectation of the robot.

[FIG0: The model reconciliation setting BID6. M^R represents the robot's model and M^H represents the human's model of expectation. Using M^H, the human generates π^{M^H}, which captures the human's expectation of the robot. Whenever the two plans differ, the robot should reconcile the two models by generating an explanation.]

The general decision-making process of an agent in the presence of such model differences is termed model reconciliation BID6, BID7. One remaining issue, however, is the ignorance of the mental workload required of the human for understanding an explanation.
In most earlier work on explanation generation, the human is expected to understand any explanation provided, regardless of how much information is present, and no discussion is provided on the process for presenting that information. In this work, we argue that explanations, especially complex ones, should be provided in an online fashion, which intertwines the communication of explanations with plan execution. In such a manner, an online explanation requires less mental workload at any specific point in time. One of the main challenges here, however, is that the different parts of an explanation could be dependent on each other, which must be taken into account when generating online explanations. The online explanation generation process spreads out the information to be communicated while ensuring that it does not introduce cognitive dissonance, so that the different parts of the information are perceived in a smooth fashion.

Let us illustrate the concept of online explanations through a familiar situation between two friends. Mark and Emma want to meet up to study together for an upcoming exam. Mark is a take-it-easy person, so he plans to break the review session into two 60-minute parts, grab lunch in between the sub-sessions, and go for a walk after lunch. On the other hand, Mark knows that Emma is of a more focused type who would rather keep the review in one session and get lunch afterwards. Mark would like to keep his plan. However, had he explained his plan to Emma at the beginning, he knew that Emma would have proposed to order takeout for lunch on the way before the review session. Instead, without revealing his plan, he goes with Emma to the library. After studying for 60 minutes, he then explains to Emma that he cannot continue without energy, which makes going to lunch the best option for both. At the same time, Mark refrained from telling Emma (until after lunch) that he also needed a walk, since otherwise Emma would have proposed for him to take a walk alone while she stays a bit longer to review, and then to meet up at the lunch place.

The above example demonstrates the importance of providing an explanation in an online fashion. Mark gradually reveals his reasoning to maintain his plan as the execution unfolds, so that it also becomes both acceptable and understandable to Emma, even though the two are subject to different values due to model differences (e.g., Mark values lunch breaks more than Emma thinks he does). The key point here is to explain minimally and only when necessary. In this way, the information to be conveyed is spread out throughout the plan execution, potentially even with a reduced amount of information, so that there is less mental workload required at the current step; from Emma's perspective, the interaction with Mark is more straightforward. In this paper, we develop a new method for explanation generation that intertwines explanation with plan execution. The new form of explanation is referred to as online explanation, which considers the mental workload of the receiver of an explanation by breaking it into multiple parts that are communicated at different times during plan execution. We implemented three different approaches for online explanation generation, each focusing on different "online" properties. In the first approach, our focus is on matching the plan prefix. In the second approach, the focus is on making the very next action understandable to the human teammate.
In the third approach, the focus is on matching the prefix of the robot's plan with any possible optimal human plan. We use a model search method that ensures that the information communicated earlier does not affect the later parts of the explanation. This creates a desirable experience for the recipient by significantly reducing the mental workload. Our approaches are evaluated both with human subjects and in simulation.

II. RELATED WORK

AI and its numerous applications have provided astounding benefits in areas such as transportation, medicine, finance and the military in recent years, but AI agents are so far limited in their ability to operate as teammates. To be considered a teammate, the agent must not only achieve a given task, but also provide a level of transparency to other members of the team BID2. One way to achieve this is to enable AI agents to be self-explanatory in their behaviors. Recently, the explainable AI paradigm BID8 has risen as one essential constituent of human-AI collaboration. Explainable AI helps improve human trust of the AI agent and maintain a shared situation awareness by contributing to the human's understanding of the underlying decision-making process of the agent. The explainable agency's effectiveness BID9 is assessed based on its capability to model the human's perception of the AI agent accurately. This means that an explainable AI agent must model not only the world, but also the other agents' perception of itself BID10. This model of the other agents allows the agent to reason about their expectations of itself. Using this model, an agent can generate legible motions BID11, explicable plans BID7, BID12, BID13, or assistive actions BID14. In these approaches, an agent often substitutes cost optimality with a new metric that simultaneously considers cost and explicability. Another way of using the model is for an AI agent to signal its intention before execution BID15; the motivation here is to use the model to search for additional context information that would help improve human understanding. A third way of using this model is for the agent to explain its behavior by generating explanations BID3, BID4, BID5. Similar to intention signaling, this method has the benefit that the agent can maintain its optimal behavior. Research along this direction has focused on generating the "right" explanations based on the recipient's perception model of an explanation BID6, BID16. This is useful, however, only under the assumption that the explanation can be understood, regardless of how much information is provided or whether sufficient time is given; the mental workload required for understanding an explanation is largely ignored. In our prior work, we studied how the ordering of the information in an explanation may influence the perception of the explanation BID17. In this work, we further argue that an explanation must sometimes be made in an online fashion. This is especially true for complex explanations that require a large amount of information to be conveyed. The idea behind online explanation generation is to provide a minimal amount of information that is sufficient to explain the part of the plan that is of current interest (e.g., the next action), and in such a way intertwine explanation generation with plan execution. Our problem definition is based on the model reconciliation setting defined in our prior work BID6. We provide a brief review of the relevant concepts before defining our problem in this work.
Our problem is closely associated with planning problems, so we first provide the background here. A planning problem is defined as a tuple (F, A, I, G) in PDDL BID18, similar to STRIPS BID19. F is the set of predicates used to specify the state of the world and A is the set of actions used to change the state of the world. Actions are defined with a set of preconditions, add effects and delete effects; I and G are the initial and goal states. The robot's plan to be explained, $\pi^*_{I,G}$, satisfies

$cost(\pi^*_{I,G}, M^R) = cost^*_{M^R}(I, G)$

where $cost(\pi^*_{I,G}, M^R)$ is the cost of the plan under $M^R$ and $cost^*_{M^R}(I, G)$ is the cost of the optimal plan for the initial and goal state pair under $M^R$. In other words, the robot plan to be explained is required to be optimal according to $M^R$, assuming rational agents. The model reconciliation setting also takes the human's model $M^H$ into account, which captures the human's expectation of the robot's behavior. When the robot's behavior to be explained (i.e., $\pi^*_{I,G}$) matches the human's expected behavior, the models are said to be reconciled for the plan. A figure illustrating the model reconciliation setting is shown in FIG0 above. Explanation generation in a model reconciliation setting means bringing the two models, $M^H$ and $M^R$, "close enough" by updating $M^H$ such that $\pi^*_{I,G}$, the robot's plan, becomes fully explainable (optimal) in the human's model. A mapping function was defined in BID6 to convert a planning problem into a set of features that specifies the problem: $\Gamma : M \rightarrow S$ is a mapping function which transfers any planning problem (F, A, I, G) to a state $s$ in the feature space; following BID6, the feature space consists of unit features of the form

$\Gamma(M) = \{\textit{init-has-}f \mid f \in I\} \cup \{\textit{goal-has-}f \mid f \in G\} \cup \bigcup_{a \in A} \big(\{a\textit{-has-precondition-}f \mid f \in \mathrm{pre}(a)\} \cup \{a\textit{-has-add-}f \mid f \in \mathrm{add}(a)\} \cup \{a\textit{-has-del-}f \mid f \in \mathrm{del}(a)\}\big)$

In other words, the mapping function converts a planning problem into a set of features that specifies the problem.

Definition 2 (Explanation Generation BID6): The explanation generation problem is a tuple $(\pi^*_{I,G}, M^R, M^H)$, and an explanation is a set of unit feature changes to $M^H$ such that

$cost(\pi^*_{I,G}, \widehat{M}^H) - cost^*_{\widehat{M}^H}(I, G) \;<\; cost(\pi^*_{I,G}, M^H) - cost^*_{M^H}(I, G)$

where $\widehat{M}^H$ is the model after the changes. An explanation hence reconciles two models by making the cost difference between the human's expected plan and the robot's plan smaller after the model updates.

Definition 3 (Complete Explanation BID6): Given an explanation generation problem, a complete explanation is an explanation that satisfies $cost(\pi^*_{I,G}, \widehat{M}^H) = cost^*_{\widehat{M}^H}(I, G)$; that is, the robot's plan must be optimal in the human's model after a complete explanation ($\widehat{M}^H$). A minimally complete explanation (MCE) BID6 is a complete explanation that contains the minimum number of unit feature changes. While the previous explanation generation approach provides a framework to generate explanations considering both the robot's model and the human's model, it largely ignores the mental workload required of the human for understanding the explanation. We introduce online explanation generation to address this issue. The key here is to provide only a minimal amount of information during plan execution to explain the part of the plan that is of interest and not yet explainable.

Definition 4 (Online Explanation Generation): Given a model reconciliation problem, an online explanation is a set of sub-explanations $(e_k, t_k)$, where $e_k$ represents the k-th set of unit feature changes to be made (as a sub-explanation) at step $t_k$ in the plan. Basically, an online explanation requires only that any actions in the robot's plan before the k-th sub-explanation match the human's expectation.
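To illustrate the definitions, here is a toy sketch that treats a model as the set of unit features produced by Γ and searches for a minimally complete explanation by brute force. The feature encoding and the `is_optimal_in` oracle (standing in for a planner call) are assumptions for illustration; the actual MCE search in BID6 is an A*-style model-space search.

```python
from itertools import combinations

def model_features(model):
    """Gamma: flatten a model dict into hashable unit features (assumed encoding)."""
    feats = {("init", f) for f in model["init"]} | {("goal", f) for f in model["goal"]}
    for name, a in model["actions"].items():
        feats |= {(name, "pre", f) for f in a["pre"]}
        feats |= {(name, "add", f) for f in a["add"]}
        feats |= {(name, "del", f) for f in a["del"]}
    return feats

def minimal_complete_explanation(robot_model, human_model, is_optimal_in):
    """Smallest set of unit feature changes making the robot plan optimal in the
    updated human model. `is_optimal_in(features)` stands in for a planner call."""
    diff = sorted(model_features(robot_model) ^ model_features(human_model))
    human = model_features(human_model)
    for k in range(len(diff) + 1):            # try smallest explanations first
        for expl in combinations(diff, k):
            updated = human.symmetric_difference(expl)  # apply the changes
            if is_optimal_in(updated):
                return set(expl)
    return None
```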
In such a way, the robot can split an explanation into multiple parts, which are made in an online fashion as the plan is being executed. We provide three different approaches to online explanation generation based on the definition above, each focusing on one aspect of explanation generation intertwined with plan execution. Section IV-A discusses OEG with plan prefix matching, Section IV-B describes OEG with next action matching, and Section IV-C explains OEG with any-prefix matching.

To generate the sub-explanations (i.e., $\{e_k\}$) for an online explanation, the planning process must consider how the sequence of model changes results in changes to the human's expectations after each sub-explanation. Similar to the search process for complete explanations BID6, we convert the problem of explanation generation into a model search problem in the space of possible models. The challenge here is that the model changes may not be independent, i.e., future changes may render a mismatch in previously reconciled plan prefixes. To address this issue, it must be ensured that the model changes after $e_k$, i.e., $e_{k+1:m}$ where $m$ denotes the number of sub-explanations, do not change the plan prefixes in $M^H$. This can be achieved by searching from $M^R$ towards $M^H$ to find the largest set of model changes that ensures the plan prefix does not change under further sub-explanations. This search process is illustrated in FIG1.

A. OEG for matching Plan Prefix (OEG-PP). An OEG-PP is a set of sub-explanations $(e_k, t_k)$ such that

$\forall k: \; Prefix(\pi^{M^H_{E_k}}, t_{k+1} - 1) = Prefix(\pi^*_{I,G}, t_{k+1} - 1) \qquad (2)$

where $Prefix(\pi, t)$ returns the prefix of a plan $\pi$ up to step $t$, $E_k$ represents $e_{1:k}$, and $\pi^{M^H_{E_k}}$ is the optimal plan created from $M^H_{E_k}$ ($M^H$ after providing sub-explanations $e_1$ through $e_k$). More specifically, the following process is performed recursively for each sub-explanation. First, we continue moving along $\pi^*_{I,G} = (a_1, a_2, ..., a_n)$ as long as the plan prefix matches the prefix of the plan using the human model $M^H$. Let $t = t_1$ be the first plan step where they differ. Our search for the sub-explanation then starts with $M^R$: it finds the largest set of model changes to $M^R$ such that the prefix of a plan using the corresponding model (i.e., $M^R$ minus the set of changes) matches that of $\pi^*_{I,G}$ up to step $t_2 - 1$. The complement set of changes (i.e., the difference between $M^H$ and $M^R$, minus this set of changes) becomes $e_1$. For the next recursive step, we start from action $t_1$ and the human model becomes $M^H_{E_1}$. Compared to BID6, the difference is that in our approach the search starts from the robot model and stops where the plan prefixes for the updated human model and the robot model match, while the previous approach's search starts from the human model ($M^H$). In this respect, our search process is more akin to MME BID6. However, since we focus on matching the prefixes rather than the whole plan in one shot, our approach must run this process multiple times, compared to only once for MME. While seemingly more computationally expensive, this characteristic actually allows us to beat both MCE and MME in terms of computation, since our approach at any time considers only a small set of changes (see results).

[FIG1: The dotted line represents the border of the maximum state-space model modification in the robot model which reconciles the two models up to the current point of plan execution; maximum updates to the robot model are equivalent to minimum updates to the human model.]
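A heavily simplified sketch of the recursive OEG-PP procedure just described (Algorithm 1 in the paper): find the first divergence step, then search for the smallest sub-explanation whose application makes the human's expected plan match the robot's prefix through that step. The helpers `plan`, `feature_diff`, and `apply_changes` are hypothetical stand-ins for a planner call and feature-set operations, and the bookkeeping that pins down already-reconciled prefixes during later planning is omitted.

```python
from itertools import combinations

def online_explanation_pp(human_model, robot_plan, plan, feature_diff,
                          apply_changes, robot_model):
    """Returns [(sub_explanation, step), ...]; schematic, not Algorithm 1 verbatim."""
    explanations = []
    while plan(human_model)[:len(robot_plan)] != robot_plan:
        human_plan = plan(human_model)
        # First step where the human's expectation diverges from the robot.
        t = next(i for i, (a, b) in enumerate(zip(human_plan, robot_plan)) if a != b)
        diff = sorted(feature_diff(robot_model, human_model))
        # Explain as little as possible: smallest e_k that fixes the prefix.
        for k in range(1, len(diff) + 1):
            found = False
            for e_k in combinations(diff, k):
                candidate = apply_changes(human_model, e_k)
                if plan(candidate)[:t + 1] == robot_plan[:t + 1]:
                    human_model = candidate
                    explanations.append((set(e_k), t))
                    found = True
                    break
            if found:
                break
    return explanations
```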
To ensure that the prefix (up to $t_2 - 1$) is maintained in future steps, we directly force the later plans to be compatible with the prefix. Since we know, following the search process, that an optimal plan exists that satisfies this requirement, this does not affect our solution for online explanation. The recursive search algorithm for model-space OEG is presented in Algorithm 1, which finds $e_k$ given $E_{k-1}$. To search for $e_k$, we use a recursive model reconciliation procedure on the model space. Given $M^H_{E_{k-1}}$ and $M^R$, we start by finding the difference between these two models, and modify $M^R$ with respect to $M^H$ to find the largest set of model changes that satisfies the constraint introduced in Eq. (2). This algorithm continues until the human's plan matches the robot's plan.

B. OEG for matching Next Action (OEG-NA). Throughout OEG-PP, we assume that generating explanations modifies $M^H$, and the goal of explanation generation is to ensure that the robot and human plans have the same prefix at any step of plan execution. However, this is not always required, since the human may not be interested in actions that have already occurred. Hence, we relax the plan-prefix condition for actions earlier than the current one, such that the robot needs only to reconcile $M^R$ and $M^H$ to match the very next action in $\pi^*_{I,G}$ and $\pi^{M^H_{E_k}}$ at step $t_k$, regardless of the earlier actions in the plan prefix. This approach is also motivated by the fact that humans are known to have a limited cognitive memory span BID20. [Algorithm 1: recursive model-space search for $e_k$ given $E_{k-1}$, which returns $\lambda_{max}$ as $e_k$; only fragments of the pseudocode survive in the source.] In the most limited case, the agent focuses on explaining the very next action that differs between the most recent human plan $\pi^{M^H_{E_{k-1}}}$ and $\pi^*_{I,G}$. Similar to Algorithm 1, we perform a recursive model reconciliation procedure on the model space. Compared to the other two approaches, we first perform the search from $M^H$ rather than $M^R$ (see FIG1), since it is computationally faster given that the plan prefixes do not need to be identical; and since the search procedure is monotonic, the search is equivalent to starting from $M^R$. The other difference is that we do not compare the entire plan prefix. Instead, the agent explains only the immediate next action that does not match between the human and robot plans, without requiring that the explanation also maintain the match between the prefixes. In this aspect, the search process of OEG is similar to that of the minimally monotonic explanation (MME) in BID6, except that the process must be executed multiple times for OEG due to its online fashion. In our implementations, however, the algorithms actually combine search from $M^H$ and $M^R$ for better performance, given that later model updates do not often affect the previous sub-explanations.

C. OEG for matching Any Prefix (OEG-AP). One assumption in the OEG-PP approach is that the robot has only one right plan; subsequently, the robot's goal is to reconcile the human's plan with respect to its own plan using model-space search. We relax this assumption by assuming that there is a set of optimal plans. In such a setting, the robot does not need to explain as long as there exists a human plan that has the same prefix as the robot's plan before the current action. The goal of OEG-AP is thus to satisfy the following:

$\exists\, \pi^{M^H}: \; Prefix(\pi^{M^H}, t_k - 1) = Prefix(\pi^*_{I,G}, t_k - 1)$

where $\pi^{M^H}$ is a human optimal plan generated from the original human model ($M^H$).
A straightforward solution to OEG-AP is to generate all human optimal plans and check whether any one of them matches the robot's plan (prefix). This approach, however, is computationally expensive. Instead, we implemented a compilation approach. To check that a plan prefix $Prefix(\pi^*_{I,G}, t_k - 1)$ in the robot's plan is also a prefix in the human's model, we first compile the problem in the human's model into a new problem such that the robot's plan prefix is always a prefix of the human's plan. If the cost of the human's optimal plan in this new domain model is equal to the cost of the human's optimal plan before the compilation, then clearly there exists an optimal plan in the human's model that matches the prefix; otherwise, we know that an explanation must be made. Hence, the key is to ensure that the plan prefix is always satisfied in the compiled model. This is not difficult to achieve: for all $i \geq 1$ such that $a_i, a_{i+1} \in Prefix(\pi^*_{I,G}, t_k - 1)$, where $a_i$ and $a_{i+1}$ are two consecutive actions in $\pi^*_{I,G}$, the compilation adds a predicate $p_i$ to $a_i$ as an effect, which is a precondition of $a_{i+1}$; $a_{i+1}$, in its turn, deletes $p_i$ and adds $p_{i+1}$, which is a precondition of $a_{i+2}$, and so on. To search for $e_k$, we again use a recursive model reconciliation process on the model space, similar to Algorithm 1. As in Section IV-A, we start by finding the difference between the two models. The main difference in this approach is that, after each model update following a sub-explanation, the agent checks whether there exists a human optimal plan that has the same plan prefix as the robot's plan up until the next action, using the compilation approach described above. This check stops when such a plan does not exist, and new sub-explanations must then be identified by model-space search. This process continues until an optimal human plan exists that matches the robot's plan. Note, however, that this does not mean that an optimal planner would necessarily return the same plan using the human's model.

We evaluated our approach for online explanation generation both with human subjects and in simulation for the different approaches introduced above, and compared the results with the Minimally Complete Explanation (MCE) BID6 approach. For simulation, the goal is to see how online explanation generation differs from MCE in general, in terms of the information needed and the computation time. We evaluated our approach on ten different problems across the rover domain and the barman domain, two standard IPC domains described below. For both human and simulation evaluations, the differences between $M^H$ and $M^R$ are created by randomly removing preconditions from an arbitrarily chosen set of model features. For the human subject study, the aim is to confirm the benefits of online explanation generation. Our hypothesis is as follows: online explanation generation will reduce mental workload and improve task performance. We evaluated our approach with human subjects on a modified rover domain (see Sec. V-D). In this domain, the rover is supposedly on Mars, and the goal is to explore the area to take rock and soil samples as well as images, and to communicate the results after analysis to the base station via the lander. In order to take any image, the rover must first calibrate its camera with respect to the target. To sample rock or soil, the robot must have empty space in its storage; at any point in time, the rover only has enough space to store one sample.

[Figure: Plan distances between $\pi^{M^H_{E_k}}$ and $\pi^R$ for the rover domain problems; the y-axis represents the distances and the x-axis the number of sub-explanations $E_k$.]
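A minimal sketch of the prefix compilation described above, chaining marker predicates $p_i$ through the prefix actions. Actions are encoded as plain dicts of precondition/add/delete sets; this illustrative encoding assumes each action name occurs at most once in the prefix (repeats would need renamed copies) and is not the paper's PDDL compiler.

```python
import copy

def compile_prefix(domain, prefix):
    """`domain` maps action names to {'pre', 'add', 'del'} sets of predicates;
    `prefix` is the action-name list Prefix(pi*, t_k - 1)."""
    compiled = copy.deepcopy(domain)
    for i, name in enumerate(prefix):
        act = compiled[name]
        if i > 0:
            # a_{i+1} requires and deletes the marker set by a_i.
            act["pre"].add(f"p_{i-1}")
            act["del"].add(f"p_{i-1}")
        if i < len(prefix) - 1:
            # a_i adds the marker that is a precondition of a_{i+1}.
            act["add"].add(f"p_{i}")
    return compiled
```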
In order to take multiple samples, the rover must drop the current sample before taking another one BID22. In the barman domain, the robot assumes the role of a barman whose goal is to serve a desired set of drinks using drink dispensers, glasses and a shaker. The constraints are that the robot can grab an object only if its hand is empty, the robot can grab one object per hand, and a glass must be empty and clean before being filled with a drink BID22.

Table 1 shows the simulation results comparing minimally complete explanations (MCE) with the OEG-PP, OEG-NA and OEG-AP approaches for 5 problems in the rover domain and 5 problems in the barman domain. While the average number of model features of OEG (in a sub-explanation) shared at each instance of time is considerably lower than for MCE (where every feature in the explanation is presented at once), the total number of model features in an explanation is the same for MCE and OEG-PP across most of the problems. We can see that in some cases (for instance, P3 in the rover domain), the total number of model features in the explanation for OEG-PP and OEG-NA is more than that of MCE, which is expected, since OEG focuses on generating the minimal amount of information at each time step, instead of minimizing the amount overall. The reason for sharing more information in total in OEG-PP and OEG-NA, compared to MCE, lies in the dependence between the features and the behavior of the planner (i.e., which optimal plan is returned). While OEG-AP seems to improve on the amount of information in an explanation, it actually only shows the advantage of considering all optimal plans instead of the one returned by the planner. Comparing the OEG-NA and OEG-AP approaches with MCE and OEG-PP, there is a remaining distance between the robot's plan and the human's plan in terms of plan action distance (also computed from plans returned by an optimal planner). The distance for OEG-NA is due to the fact that only the immediate next action is considered. For OEG-AP, as explained, there is no guarantee that the plan returned using the human's model will be the same as the robot's plan, since it considers all optimal human plans and only requires one of them to match the robot's. This is also illustrated more clearly in FIG3. Furthermore, in the OEG approaches, since execution and explanation are intertwined, the plan distance BID21 between $\pi^{M^H_{E_k}}$ and $\pi^R$ gradually moves towards 0, which suggests a "smoother" adjustment of $M^H$ during execution. This is expected to have a positive effect on the human's mental workload, which we evaluate next. In our implementation, the possible model updates are sorted in ascending order of their feature size, and our algorithms start by checking the ones with the smallest changes from the robot's side. The consistency check is deferred as we proceed to the next sub-explanation, and backtracking is performed when it fails. This search process takes advantage of the fact that later information often does not affect the previous sub-explanations. To test our hypothesis, we designed a human study comparing our three approaches for online explanation generation with minimally complete explanations (MCE) BID6. Furthermore, to ensure that the performance difference is not solely due to simply breaking information into multiple pieces, we also implemented another approach that randomly breaks the MCE during plan execution (referred to as MCE-R).
We conducted our experiment using Amazon Mechanical Turk (MTurk) with a 3D simulation. The subjects were given an introduction to the rover domain and the task they were supposed to help with, and each subject was given a 30-minute limit to finish the task. Explanations were provided in plain English, and rover actions were depicted using GIF images from a 3D simulated scenario as the rover executed the plan. Figure 4 shows the 3D simulated scenario presented to the subjects. In this experiment, the human subject acts as the rover's commander, where the rover is on Mars and is supposed to perform a mission autonomously. The human subject observes the rover's plan sequentially and is asked to determine whether the rover's current action is questionable or not, with explanations provided by the OEG approaches or MCEs. Each subject performs the task for only one setting, to reduce the influence between different runs. To observe the effect on mental workload more clearly, we also added a few spatial puzzles to the experiment as a secondary task, creating additional cognitive demand. In the scenario, we deliberately removed certain information from the domain so that the subject would create an incorrect plan when no explanation is given. In particular, we did not inform them that the storage is limited, the memory is limited, the camera must be calibrated, and the camera must be calibrated with respect to the objective. This hidden information introduces differences between $M^H$ and $M^R$ in the model reconciliation setting, resulting in scenarios where explanations must be provided. In this scenario, for example, the subject may question the action of calibrating the camera if they were not specifically told to consider it. In the MCE setting, the robot shares all the information at the beginning of the task BID6, while in MCE-R the information is randomly broken up to be communicated at different steps. In each of the OEG settings, the robot uses a different approach of online explanation generation, which intertwines the communication of explanations with plan execution; in particular, the four pieces of missing information are provided to the subjects at different steps. In all settings, the subjects were asked to determine whether the robot's actions make sense, one action at a time. The minimally complete explanations are generated based on BID6 and the online explanations are generated using the approaches introduced above. At the end of the study, the subjects were given the standard NASA Task Load questionnaire to evaluate the efficiency of the different explanation approaches via the NASA Task Load Index (TLX) BID23. The NASA TLX is a subjective workload assessment tool for evaluating human-machine interface systems. Mental workload is a multidimensional variable which can be captured by different variables, and NASA TLX is one of the most frequently used subjective measurements for capturing different aspects of mental workload BID24. It calculates an overall mental workload score using a weighted average over sub-scales: mental demand, physical demand, temporal demand, performance, effort and frustration. Since our experiment does not involve physical demand, we did not include the corresponding question. The questions used for each category are of the following form:

• Mental Demand: How mentally demanding was the task?

We created the academic survey using Qualtrics and recruited 150 human subjects on MTurk, 30 subjects for each setting.
To improve the quality of the responses, we set the criterion that a worker's HIT acceptance rate must be greater than 98%. After sifting out invalid responses (i.e., those failing to identify the two purposely inserted random actions), we had 94 valid responses in total: 19 each for MCE-R and MCE, 20 for OEG-PP, and 18 each for OEG-NA and OEG-AP. The age range of subjects was between 18 and 70, and 29.8% of the subjects were female. We examined how well the human subjects understood the robot's plan given the different explanations, and compared the distances across the five settings. We compute the distance between the robot's plan and the human's expected plan as the ratio between the number of questionable actions and the total number of actions in a plan. The lower the distance value, the closer the human's plan is to the robot's plan; this metric intuitively captures how much the human subject understands the robot's plan. We calculated the averages for each setting over all subjects who participated in that setting, using the subjective questions from the NASA TLX. The overall results show that the OEG approaches reduce the human's mental workload better than the MCE approaches. This is backed up by the fact that the OEG approaches resulted in better performance on almost all NASA TLX measures. Due to intertwining the explanation process with plan execution, the OEG approaches create more temporal demand according to the experiment, which is expected. FIG6 presents both the objective performance measures and the subjective results of the human study across the 5 TLX categories. First, the number of questionable actions is significantly lower for the OEG approaches compared to the MCEs, indicating that the subjects had more trust in the robot in the OEG cases. Moreover, the accuracy of identifying the correct actions (questionable vs. non-questionable) is higher for the OEG approaches. Among the three approaches, OEG-AP has the fewest questionable actions and the highest accuracy. We also present the p-values for the mental workload based on the subjective measures in Table 1 (all measures range from 0 to 100). The results indicate a statistically significant difference in mental workload between the OEG approaches and MCEs in pairwise comparisons. The overall p-value across the five categories is 0.0068 between OEGs (as a group) and MCEs (as a group). We also performed a timing analysis. The average overall time taken to accomplish the task in each setting was: OEG-NA (567.44s) < OEG-AP (629.56s) < MCE-R (678.98s) < MCE (763.47s) < OEG-PP (775.65s), although we did not see a statistically significant difference due to large variances. The accuracy on the secondary task is also not significantly different between the approaches.

In this paper, we introduced a novel approach to explanation generation that reduces the mental workload needed for the human to interpret explanations throughout a human-robot interaction. The key idea is to break down a complex explanation into smaller parts and convey them in an online fashion, intertwined with plan execution. We take a step further from our prior work by considering not only providing correct explanations, but also explanations that are easily understandable. We provided three different approaches, each of which focuses on one aspect of explanation generation woven into plan execution. This is an important step toward achieving explainable AI.
We evaluated our approaches using both simulation and human subjects. Results showed that our approaches achieved better task performance while reducing the mental workload.
We introduce online explanation generation to account for the cognitive requirements of the human in understanding the explanations generated by the agent.
1,431
scitldr
Deep neural networks use deeper and broader structures to achieve better performance and, consequently, use increasingly more GPU memory. However, limited GPU memory restricts many potential designs of neural networks. In this paper, we propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost without sacrificing the accuracy of models. Variable swapping can transfer variables between CPU and GPU memory to reduce the number of variables stored in GPU memory. Recomputation can trade time for space by removing some feature maps during forward propagation; the forward functions are executed once again to regenerate the feature maps before reuse. However, how to automatically decide which variables to swap or recompute remains a challenging problem. To address this issue, we propose to use a deep Q-network (DQN) to make plans. By combining variable swapping and recomputation, our results outperform several well-known benchmarks.

Limited GPU memory restricts model performance for two different reasons. Firstly, there is a trend that deep neural networks (DNNs) use deeper and more GPU memory-intensive structures, and they have continuously made improvements in various computer vision areas such as image classification, object detection, and semantic segmentation. Likewise, empirical results show that deeper networks can achieve higher accuracy; deeper networks mean higher consumption of GPU memory. Secondly, it has been shown that a bigger input batch size can speed up the training process and achieve higher accuracy; however, a bigger input batch size requires more GPU memory to store intermediate variables. In short, we want more GPU memory to get better performance. The rationale for utilizing CPU memory, by offloading variables to it and later prefetching them from it, is twofold. Firstly, the size of CPU memory is usually bigger than that of GPU memory; if we do not use variable swapping, all the tensors stay in GPU memory. Figure 1 shows the details of variable swapping. Secondly, GPUs provide direct memory access (DMA) engines, which can overlap data transfers with kernel execution. More specifically, a GPU engine is an independent unit which can operate or be scheduled in parallel with other engines: DMA engines control data transfers, while kernel engines execute the different layer functions of DNNs. Hence, in the ideal case, we can completely overlap DNN training with variable swapping, which makes variable swapping efficient. Regarding recomputation, some feature maps are not stored in GPU memory during forward propagation; instead, the feature maps are obtained by running the forward functions again during backpropagation, as shown in Figure 2. Why do we combine swapping with recomputation? Because recomputation uses GPU computing engines to reduce memory usage, while variable swapping uses DMA engines to save memory, and the different engines can run in parallel. If we execute recomputation during data transfers, we waste neither computing engines nor DMA engines. It is hard to decide which variables should be swapped or recomputed: different DNNs have different structures, and networks have thousands of variables during training, so it is intractable to enumerate the search space exhaustively.
We copy X 0 into CPU memory while reading X 0. After the data transfer and the read finish, we free X 0 from GPU memory. Before using X 0 again, we allocate space for X 0 and transfer it back to GPU memory.
Figure 2: If we do not store X 1 in memory during the forward propagation, we need to execute the layer 0 forward function again to get X 1 for the layer 1 backward function.
The contribution of our paper is that we propose a DQN algorithm to make plans for swapping and recomputation in order to reduce the memory usage of DNNs. Users only need to set memory usage limits and do not require knowledge of DNNs. Additionally, the variable swapping and recomputation do not decrease the accuracy of networks. Variable swapping is widely used in DNNs for GPU memory management. One existing approach uses a greedy algorithm for swapping, which may be myopic. Another uses a heuristic algorithm to decide which variables should be offloaded; however, users are required to decide the number of variables to be swapped, and plans cannot be devised automatically given different memory limits. A third uses a least recently used (LRU) algorithm, which does not take advantage of the iterative nature of neural networks. Our method makes use of more information from computation graphs and provides plans automatically. Recomputation can trade space for time. One line of work proposes recomputation, in-place operation, and memory sharing, and mentions a grid search method for recomputation when given a memory limit; however, the time cost of recomputation is not a concern to them. Another work designs a strategy only for recomputing the attention structure. Others propose reversible networks, which cannot make different plans given different memory limits. Yet another method selects some specific low-cost layers for recomputation but does not utilize other types of layers; moreover, recomputing some layers introduces many allocation operations during backward propagation, which can influence the memory load of DNNs. Our method addresses the above challenges by using a DQN to make plans for recomputation. Our major problem is to minimize the computation overhead under a GPU memory limit. Let O = (o_0, ..., o_n) be a sequence of GPU operations in a training iteration, where o_i denotes the i-th GPU operation. GPU operations include four types: malloc, free, read, and write. Let m_i be the GPU memory usage from o_0 to o_i, and let t_O be the overall execution time of O. At each GPU operation o_i, we have several choices: offloading (swapping out) a variable from GPU memory, prefetching (swapping in) a variable, removing a variable from GPU memory (the first phase of recomputation), recomputing a variable (the second phase of recomputation), or doing nothing. We must choose so that max_{i ∈ {0,...,n}} m_i is less than the memory limit while using as little t_O as possible. Figure 3 shows the overall algorithm.
Figure 3: The inputs are a sequence of GPU operations and a memory limit from a user. The algorithm takes the inputs and interacts with an environment simulator. After a few optimization steps, the algorithm outputs the best-found strategy, which minimizes the training time subject to the memory limit.
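To make this setup concrete, here is a minimal Python sketch of a GPU-operation trace and the peak-memory quantity max_i m_i that the plan must keep under the limit. The Op fields and the example trace are illustrative assumptions; the paper specifies only the four operation types, their timing, and the variables they touch.

from dataclasses import dataclass

@dataclass
class Op:
    kind: str      # one of "malloc", "free", "read", "write"
    var: str       # variable name
    size: int      # bytes occupied by the variable
    start: float   # beginning time of the operation (seconds)

def peak_memory(trace):
    """Return max_i m_i: the peak GPU memory over the operation sequence."""
    live, used, peak = {}, 0, 0
    for op in trace:
        if op.kind == "malloc":
            live[op.var] = op.size
            used += op.size
        elif op.kind == "free":
            used -= live.pop(op.var, 0)
        peak = max(peak, used)
    return peak

trace = [Op("malloc", "X0", 4 << 20, 0.0), Op("write", "X0", 4 << 20, 0.1),
         Op("malloc", "X1", 8 << 20, 0.2), Op("read", "X0", 4 << 20, 0.3),
         Op("free", "X0", 4 << 20, 0.4)]
assert peak_memory(trace) == (4 << 20) + (8 << 20)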
We focus on optimizing DNNs and machine learning algorithms with an iterative nature, such as K-means. We first train the DNN or machine learning algorithm for a few iterations and collect the resulting sequence of GPU operations. By utilizing this sequence of GPU operations and the memory limit provided by the user, we can use a DQN to find a good plan for swapping and recomputation. Finally, we train the DNN or machine learning algorithm following the plan. Our algorithm is an offline version, since we want to exploit the iterative nature of DNNs to get more information. In each GPU operation, we need to choose an action. Actions include three types: swapping, recomputation, and doing nothing. Which action should we choose? We can use Q-learning to solve this problem, but we cannot enumerate all states and find their corresponding Q-values, since even a small network has hundreds of candidate variables. Hence we use a deep Q-network, which learns to approximate Q-values from states with a neural network and chooses actions following the Q-values. We need to swap and recompute so that the memory usage does not exceed the memory limit set by the user. The reward is related to the performance overhead of swapping or recomputation. Let us introduce GPU operations. Operations include four types: malloc, write, read, and free. The beginning time of each operation is known to us. Each operation contains a variable, together with the address and the size of the variable. We can view each GPU operation as a node in a graph. If two operations are consecutive or use the same variable, we add an edge between the two nodes. The weight of the edge is the time lag between the two nodes. We need to create agent states for the DQN. There are four types of information that we should record: the variable which is being executed, the variables which can be swapped or recomputed, the attributes of the variables, and the structure of the DNN. The first two types of information can change while the agent state changes; the last two do not change when actions are applied to the agent. We map the structure of the DNN, as well as the other information, into vectors as agent states. We first introduce a representation of the state of each node, and then combine all node states into agent states (graph states). Before continuing, let us list some of the necessary notation.
• s_v is the state of node v, where v ∈ V. V is the set of nodes. S includes all node states.
• w(u, v) is the weighted edge between nodes u and v, and its value is the time lag between u and v. Each node represents a GPU operation.
• u ∈ N(v) means that there is an edge between nodes u and v, or u = v.
• The parameter matrices W can be learned.
• H_0 and H_1 include all variables which can be offloaded and recomputed in the current state, respectively.
• [·, ·] joins a sequence of matrices along the first dimension.
The initial state s_v^1 only has the information of node v. We update S^2, S^3, and S^4 up to S^T in sequence. The number of iterations T for each node is usually small, such as T = 4, which is inspired by prior work on graph embeddings. Here x_v ∈ R^{6×1}, and x_v includes six features: the size of the variable operated on in node v, the duration of transferring the variable between GPU memory and CPU memory, the duration of recomputing the variable, how soon the variable will be revisited, the action type of the node (Section 4.5), and whether it is valid to execute the action in node v. One reason for summing over neighbor nodes and edges is that the summation is invariant to the ordering of the neighbors.
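A minimal numpy sketch of this node-state embedding follows, showing the permutation-invariant neighbor sum and the final graph-state concatenation. The specific update rule (a ReLU over linear maps of x_v, the summed neighbor states, and the summed edge weights) and all parameter shapes are assumptions for illustration; only the six node features, the T = 4 iterations, and the pooling into a graph state come from the text.

import numpy as np

rng = np.random.default_rng(0)
p, n = 16, 5                       # state size, number of nodes
X = rng.normal(size=(n, 6))        # the six per-node features x_v
A = rng.random((n, n)) * (rng.random((n, n)) < 0.4)   # edge weights w(u, v)
W1 = rng.normal(size=(p, 6))
W2 = rng.normal(size=(p, p))
W3 = rng.normal(size=(p, 1))

S = np.zeros((n, p))               # S^1: initial node states
for _ in range(4):                 # T = 4 update iterations
    neigh = A @ S                              # sum of neighbor states
    edges = A.sum(axis=1, keepdims=True)       # sum of edge weights w(u, v)
    # Summation over neighbors is invariant to the ordering of neighbors.
    S = np.maximum(0.0, X @ W1.T + neigh @ W2.T + edges @ W3.T)

c = 2                              # index of the node under execution
W5, W6 = rng.normal(size=(p, p)), rng.normal(size=(p, p))
g = np.concatenate([W5 @ S.sum(axis=0), W6 @ S[c]])   # graph state, length 2p
print(g.shape)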
The other reason is that we can use node vectors of the same length for different DNN structures. The graph state concatenates the summation over all node states with the state of the node which is under execution: g = [W_5 Σ_{v∈V} s_v^T, W_6 s_c], where W_5, W_6 ∈ R^{p×p} and s_c is the state of the node under execution. Because we add all node states together, the graph feature does not depend on the number of nodes. An advantage of using such a graph feature representation is that we can train a DQN on a small graph and fine-tune it on a large graph. The actions a include three types: offloading a variable in the set H_0 into CPU memory, removing a variable from the set H_1 during forward propagation, and doing nothing (Figure 4). Note that variable swapping includes two phases: swapping out a variable and swapping in a variable. Recomputation also includes two phases: removing a variable in forward propagation, and recomputing functions to get the removed variable back during backpropagation. Actions only include the first phase of variable swapping and recomputation, so we call them the swap-out action and the removing action. There is no swap-in action or second-phase recomputation action, since once a variable is in CPU memory, the optimal swap-in or second-phase recomputation timing is fixed, so there is no need to include them as actions. As to prefetching, we first prefetch the earliest reused variable which is not in GPU memory, and then the second earliest reused variable. If swapping in a variable does not cause any excess of memory usage until the end of the backpropagation, we begin to prefetch the variable. If a GPU operation requires a variable, we need to suspend GPU operations until prefetching of the variable finishes. We use a similar procedure for recomputation.
Figure 4: Solid and dashed lines represent the time relation between nodes and the agent transition (Section 4.6), respectively. Each node represents exactly one GPU operation and will be executed in a time sequence. Each node represents at most one action, and not all of the actions will be executed. In each node, we can choose one action from several candidate actions. For example, we can choose to do nothing, remove X 0, or offload X 0 in the 3rd node.
For the state transition, when we apply an action to the agent, {H_0, H_1} and the node which is under execution change. However, actions do not influence the relationships between nodes or the attributes of nodes. For example, as shown in Figure 4, suppose the agent is in the 3rd node, each GPU operation takes one second to finish, and offloading X 0 takes 1.5 seconds. If we choose the action of doing nothing or removing X 0, the agent will be in the 4th node. If the action is offloading X 0, the agent will be in the 5th node, since we always round the agent up to the next node. We need an extra check: the current GPU memory usage should be less than or equal to the memory limit. If we choose to offload X 0 and the current memory usage is equal to the memory limit, the agent will be in the 4th node instead of the 5th node, because we will not malloc new variables before the GPU memory has enough space. The forward overhead for offloading X 0 is 0.5 seconds, since GPU operations pause for 0.5 seconds to wait for the offloading. The prefetching order is known, as described in Section 4.5, so we can calculate the backward overhead in the same way.
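A minimal sketch of this transition rule, using the numbers from the worked example above (1 s per GPU operation, 1.5 s to offload X 0, and rounding up to the next node); the function signature, the memory-at-limit flag, and the overhead in the at-limit case are illustrative assumptions, not the paper's implementation.

def transition(node, action, op_time=1.0, offload_time=1.5, mem_at_limit=False):
    """Return (next node, overhead in seconds) for the worked example."""
    if action in ("nothing", "remove_X0"):
        return node + 1, 0.0                   # advance one node, no overhead
    if action == "offload_X0":
        if mem_at_limit:
            # No new malloc until the transfer frees space, so the agent only
            # advances one node and pays the remaining transfer time.
            return node + 1, offload_time - op_time
        # The 1.5 s transfer overlaps one 1 s operation; round up to the next
        # node, leaving 0.5 s of forward overhead spent waiting.
        return node + 2, offload_time - op_time
    raise ValueError(action)

print(transition(3, "offload_X0"))                      # -> (5, 0.5), as in the text
print(transition(3, "offload_X0", mem_at_limit=True))   # -> (4, 0.5)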
If we never pause GPU operations for variable swapping in order to keep the memory usage below the memory limit, and there is no recomputation, the reward will be zero. As shown in Figure 4, if we choose to remove X 0, the reward will be the negative of the time spent recomputing forward layer functions to get X 0 back in the backward propagation. If we choose to offload X 0, the reward will be the negative of the overhead of the forward and backward propagation caused by offloading and prefetching X 0. In order to get the right state transitions and rewards, we need to know the exact time of each GPU operation. However, sometimes we cannot fit a huge model into GPU memory. We came up with an idea to solve this problem: since free and malloc are fast, freeing variables aggressively during a measurement run adds negligible extra time, and the measured training time will be roughly correct. It should be noted that we cannot get the right derivatives of the weights this way; however, we can get a roughly correct time for each GPU operation. When we apply an action to the agent, the agent transitions from the current state to the next state, and at the same time we get a reward. A simulator provides the next state and a reward following the criteria that we defined in the action, transition, and reward sections (Section 4.5). We use the simulator to simulate the training environment while updating the DQN. We make two assumptions: first, that the recomputation time can be estimated, and second, that variable swapping can run entirely in parallel with layer functions. It is easy to convert graph states to Q-values: we concatenate the graph state and an action node state into a vector and then map the vector to a value. The action node represents not only a GPU operation but also an action (Figure 4): Q(g, a) = W_7 relu([g, W_8 s_a^T]), where W_7 ∈ R^{1×3p}, W_8 ∈ R^{p×p}, and s_a^T is the state of the action node. As shown in Figure 4, we can begin to copy X 0 into CPU memory while reading X 0, but we need to remove X 0 from GPU memory only after reading X 0. It is noteworthy that we cannot offload X 0 after removing X 0, and vice versa. We usually use the first node to represent the doing-nothing action. We use a heuristic method to decide which node can also represent an action, and we guarantee that no node represents more than one action. We train an end-to-end DQN with the following loss function: loss = (y − Q(g_t, s_{a,t}^T))^2, with target y = r(g_t, s_{a,t}^T) + γ max_{a'} Q(g_{t+1}, s_{a'}^T), where γ is a decay factor. y is treated as a constant value, which means that the gradient does not flow through y. r(g_t, s_{a,t}^T) is the reward for agent state g_t and action s_{a,t}^T. The Q-value of a terminal state is zero. If we no longer need to remove or offload any variables until the end of the current iteration, and max_{i ∈ {0,...,n}} m_i is less than the memory limit, the state is a terminal state. As shown in Algorithm 1, we do not update Equation 4 with only the single currently experienced sample. Instead, fitted Q-iteration updates the weights with mini-batches sampled from an experience replay dataset. We use the greedy method to choose an action a from {H_0, H_1}. Finally, we train the DNN following the plan that is generated by the DQN. We execute each GPU operation in a time sequence. If the current node is an action node, and the action is in the action set A, we execute the action following Section 4.6. As for prefetching and the second phase of recomputation, we follow the method introduced in Section 4.5.
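A runnable toy sketch of the fitted Q-iteration update with experience replay described above. The tiny chain environment stands in for the memory-management simulator; its states, actions, and rewards are placeholders, and a small exploration rate is added so the toy greedy policy cannot stall.

import random

N_STATES, ACTIONS, GAMMA, ALPHA, EPS = 6, (0, 1), 0.9, 0.1, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    # Toy dynamics: action 1 advances with a small cost; action 0 stalls.
    s2 = min(s + 1, N_STATES - 1) if a == 1 else s
    r = -0.1 if a == 1 else -1.0
    return s2, r, s2 == N_STATES - 1          # terminal at the last state

replay = []
for episode in range(200):
    s, done = 0, False
    while not done:
        if random.random() < EPS:             # small exploration rate
            a = random.choice(ACTIONS)
        else:                                 # greedy choice over Q-values
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        replay.append((s, a, r, s2, done))
        s = s2
        # Fitted Q-iteration: update on a sampled mini-batch from replay,
        # not only on the single currently experienced sample.
        for bs, ba, br, bs2, bdone in random.sample(replay, min(8, len(replay))):
            y = br if bdone else br + GAMMA * max(Q[(bs2, act)] for act in ACTIONS)
            Q[(bs, ba)] += ALPHA * (y - Q[(bs, ba)])  # y is held constant
print(Q[(0, 1)], Q[(0, 0)])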
Figure 5: The lower x-axis shows the memory reduction, and the upper x-axis shows the corresponding memory usage.
In this part, we evaluate the performance of variable swapping and of variable swapping combined with recomputation. We test our method on various architectures such as ResNet, VGG, UNet, K-means, and LSTM, whose structures do not change during training. We also extend our method to a dynamic computation graph, e.g., deep networks with stochastic depth, whose structures can change during training. Our image dataset is CIFAR-10, and our text dataset is Karpathy's char-RNN dataset. Our K-means dataset is generated randomly by NVIDIA code. Additionally, we train ResNet and VGG at two different depths for better analysis. Our experiments are conducted on a workstation equipped with an Intel Xeon E5 CPU and an NVIDIA GeForce GTX 1080 Ti with 11 GB of RAM. The CPU memory is 64 GB. Our motherboard has PCI Express x16 for data communication between GPU and CPU. Our system is Ubuntu 16.04, with CUDA 9.0 and cuDNN 7.0. We use the fastest cuDNN algorithm, which requires extra workspace to store intermediate results. Our method is tested on the deep learning framework Singa. We compare our method with other baselines: MXNet-memonger, SuperNeurons, and TFLMS. MXNet-memonger trades computation for memory, but its performance depends on which recomputation layers we choose. SuperNeurons proposes both recomputation and variable swapping. TFLMS only uses variable swapping. Figure 5 shows different computation overheads versus different memory reductions. For MXNet-memonger, we obtain different data points in our graphs by changing the recomputation layers. As for SuperNeurons, we use its recomputation mode, swapping mode, and swapping-combined-with-recomputation mode to get three different data points. Their program does not have a recomputation mode for ResNet, so we only report two data points on ResNet for their baseline. As for TFLMS, we choose different numbers of swapped variables to control memory usage. Our method takes less extra computation time and saves more GPU memory, which shows that our results are better than MXNet-memonger, SuperNeurons, and TFLMS. SuperNeurons uses a least recently used (LRU) algorithm for variable swapping: it views GPU memory as a cache and uses a classical cache replacement algorithm. However, it does not make use of the iterative nature of the DNN. TFLMS only uses variable swapping, and the number of swapped variables must be set manually. SuperNeurons and MXNet-memonger choose specific layers for recomputation by expert knowledge. Our method makes use of more information from computation graphs and provides plans automatically for users. SuperNeurons also combines variable swapping with recomputation; however, it does not treat variable swapping and recomputation separately. When we run their program, we find that its GPU utilization during network training is much lower than ours: they waste some GPU resources to save memory, which can be avoided. The GPU utilization of our method is higher than theirs. Our work can also be used on more general architectures. Only TFLMS and our method can work on the LSTM function CuDNNLSTM; we cannot run the other two methods on such an architecture. Additionally, among these four works, only our method supports ResNet with stochastic depth and K-means. Compared with the other baselines, our algorithm has the following advantages. First of all, we can easily set a wide range of memory limits. Secondly, our method works well on an extensive range of machine learning algorithms with an iterative nature.
Last, our method provides plans automatically for users, and users do not need expert knowledge. Let us analyze our method on the different architectures. ResNet and VGG have similar architectures and get similar results. Regarding UNet, its structure is different from that of ResNet and VGG: the first feature map is required at the end phase of the forward propagation, the second feature map is needed at the second-to-last phase of the forward pass, and so on. If we offload the first feature map, we need to prefetch it before the last phase of the forward pass, which means we need to swap it in again during the phase in which memory usage is still growing. If we do not offload such variables, the GPU data transfer engines will be idle for some time or will have fewer candidate variables to offload. Thus the result on UNet is worse than on ResNet and VGG. Concerning LSTM, it does not have convolutional layers. Convolutional layers execute more slowly than other layers, and the longer a kernel operation takes on the GPU, the more time is available for data transfers, since kernel operations and data transfers are executed on different GPU engines. In consequence, the overhead of LSTM is larger than that of ResNet and VGG. As to SD ResNet, it has a dynamic structure: the architecture of the network can change during training. Our method is not designed for such structures, so the result is worse than the others. In this paper, we propose a DQN to devise plans for variable swapping and recomputation to reduce memory usage. Our method works well with different memory limits, and it provides plans automatically for users: they only need to set a memory limit and do not require knowledge of DNNs or machine learning algorithms. Our method works well for different network structures such as ResNet, VGG, K-means, SD ResNet, and LSTM. Besides, the variable swapping and recomputation do not decrease the accuracy of networks.
We propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost.
1,432
scitldr
In vanilla backpropagation (VBP), the activation function matters considerably in terms of non-linearity and differentiability. Vanishing gradients have been an important problem related to bad choices of activation function in deep learning (DL). This work shows that a differentiable activation function is no longer necessary for error backpropagation: the derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA). Using FBA with ITD, we can transform VBP into a more biologically plausible approach for learning deep neural network architectures. We do not claim that ITD works exactly the same way as spike-time dependent plasticity (STDP) in our brain, but this work can be a step toward the integration of STDP-based error backpropagation in deep learning. VBP was proposed around 1987 BID10. Almost at the same time, biologically-inspired convolutional networks were also introduced using VBP BID5. Deep learning (DL) was introduced as an approach to learn deep neural network architectures using VBP BID5; BID4. Learning of extremely deep networks reached 152 layers of representation with residual and highway networks BID3; BID13. Deep reinforcement learning was successfully implemented and applied, mimicking the dopamine effect in our brain for self-supervised and unsupervised learning BID11 BID9 BID8. Hierarchical convolutional neural networks have been biologically inspired by our visual cortex BID1 BID0 BID14. The discovery of fixed random synaptic feedback weight alignment (FBA) in error backpropagation for deep learning started a new quest for a biological version of VBP, since it solves the symmetric synaptic weights problem in backprop. Recently, spike-time dependent plasticity has been an important open issue for backprop. One of the works in this direction, highly inspired by Hinton's recirculation idea, is deep learning using segregated dendrites BID2: apical dendrites as the segregated synaptic feedback are claimed to be capable of successfully modeling STDP within backprop BID2. In this section, we visually demonstrate ITD using FBA in VBP (Figure 1). In this figure, VBP, VBP with FBA, and ITD using FBA for VBP are all shown together. The activation function chosen for this implementation was the tanh function. ITD was applied to the standard MNIST dataset. VBP, FBA, and ITD were compared using maximum cross entropy (MCE) as the loss function (Figure 2). Also, ITD with MCE as the loss function is compared to ITD with least squared error (LSE) (Figure 3). The hyperparameters for both experiments are identical: 5000 iterations/epochs, a learning rate of 0.01 (1e-2), a minibatch size of 100 with shuffling for stochasticity, vanilla stochastic gradient descent, a hidden layer size of 32, and 2-layer deep networks. A feed-forward neural network is used as the architecture. In this paper, we took one more step toward a more biologically plausible backpropagation for deep learning. After hierarchical convolutional neural networks and fixed random synaptic feedback alignment, we believe iterative temporal differencing is a way toward integrating the STDP learning process of the brain. We believe the next steps should be to investigate the details of STDP processes in learning, dopamine-based unsupervised learning, and the generation of Poisson-based spikes.
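A minimal runnable sketch of the two ingredients, with its assumptions labeled: the feedback-alignment part (a fixed random matrix B replacing the transposed forward weights in the backward pass) follows the standard FBA recipe, while the ITD surrogate shown here (the elementwise difference of hidden activations between consecutive iterations, standing in for the tanh derivative) is only one plausible reading of the paper's description, not its confirmed update rule.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 32, 2
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))     # fixed random feedback weights (FBA)
x = rng.normal(size=(n_in, 1))
y = np.array([[1.0], [0.0]])
h_prev = np.zeros((n_hid, 1))
lr = 0.01

for _ in range(100):
    h = np.tanh(W1 @ x)                    # forward pass
    out = W2 @ h
    err = out - y                          # output error (LSE loss gradient)
    # FBA: route the error backward through fixed B instead of W2.T.
    # ITD surrogate (assumption): activation change across iterations
    # replaces the analytic tanh derivative.
    surrogate = h - h_prev
    delta_h = (B @ err) * surrogate
    W2 -= lr * err @ h.T
    W1 -= lr * delta_h @ x.T
    h_prev = h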
Iterative temporal differencing with fixed random feedback alignment supports spike-time dependent plasticity in vanilla backpropagation for deep learning.
1,433
scitldr
Inspired by the recent successes of deep generative models for Text-To-Speech (TTS) such as WaveNet (van den Oord et al.) and Tacotron, this article proposes the use of a deep generative model tailored for Automatic Speech Recognition (ASR) as the primary acoustic model (AM) for an overall recognition system with a separate language model (LM). Two dimensions of depth are considered: the use of mixture density networks, both autoregressive and non-autoregressive, to generate density functions capable of modeling acoustic input sequences with much more powerful conditioning than the first-generation generative models for ASR, Gaussian Mixture Models / Hidden Markov Models (GMM/HMMs); and the use of standard LSTMs, in the spirit of the original tandem approach, to produce discriminative feature vectors for generative modeling. Combining mixture density networks and deep discriminative features leads to a novel dual-stack LSTM architecture directly related to the RNN Transducer, but with the explicit functional form of a density, combining naturally with a separate language model using Bayes rule. The generative models discussed here are compared experimentally in terms of log-likelihoods and frame accuracies. For most of its history, the field of ASR has used a collection of separate modules to represent different stages of an overall processing chain. Fred Jelinek formalized this approach within his concept of the "noisy channel model", in which components chain together to form a consistent overall joint probability P(X, W) for e.g. a sequence of acoustic observations X and a sequence of words W. In turn, P(X, W) enables the Bayes decision procedure based on the posterior P(W|X), via Bayes' theorem BID13: P(W|X) = p(X|W)P(W)/p(X), where p(X) can be dropped from the recognition procedure, as it doesn't affect the relative posteriors of hypotheses W. This model was particularly convenient given the strong independence assumptions in the model structures used. Gaussian Mixture Models (GMMs) used jointly with 1st-order Hidden Markov Models (HMMs) were well-suited to this modular approach, as they directly provide an acoustic likelihood p(X|W) for a sequence of acoustic feature vectors X = x_1:T conditioned on a given sequence of modeling units, e.g. phonemes, graphemes or words, W = w_1:M. Denoting a possible state alignment of W to X as S_W = {w_1, ..., w_T}, the overall acoustic likelihood is defined with strong independence assumptions, using p(X|S_W) = ∏_{t=1}^{T} p(x_t|w_t), and a sum over alignments (computed efficiently using the forward-backward algorithm), p(X|W) = Σ_{S_W} p(X|S_W) P(S_W|W), or the best state alignment (the Viterbi approximation): p(X|W) ≈ max_{S_W} p(X|S_W) P(S_W|W). This then combines naturally with a separate language model probability, P(W), to form the joint probability P(X, W). Other probabilistic modules, such as a pronunciation dictionary, can be introduced into the overall chain, again combining with the other components according to Bayes rule. One can keep adding modules, or substituting modules, and the overall model still holds. In this sense, this approach to ASR is "compositional". The adoption of more powerful, discriminative models such as Deep Neural Networks (DNNs) and Long Short Term Memory models (LSTMs) did not at first alter this model.
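To make the sum-over-alignments and Viterbi scores concrete, here is a tiny runnable Python example on a made-up 2-state, 3-frame model; all probabilities are placeholders for illustration.

import itertools
import numpy as np

emit = np.array([[0.7, 0.3],     # p(x_t | state): one row per frame t
                 [0.4, 0.6],
                 [0.2, 0.8]])
trans = np.array([[0.8, 0.2],    # P(s_t | s_{t-1})
                  [0.3, 0.7]])
init = np.array([0.5, 0.5])

total, best = 0.0, 0.0
for path in itertools.product(range(2), repeat=3):   # every alignment S_W
    p = init[path[0]] * emit[0, path[0]]
    for t in range(1, 3):
        p *= trans[path[t - 1], path[t]] * emit[t, path[t]]
    total += p                   # the sum over alignments
    best = max(best, p)          # the Viterbi (best-alignment) approximation
print(total, best)               # total >= best; forward-backward computes total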
The popular "hybrid" approach (still considered state-of-the-art by most of the community today) simply converts the posteriors P (s|x) (defined at the level of a single observation x for a model state s) obtained from DNNs or LSTMs to a scaled likelihood by normalizing the posterior by a state prior P (s) BID2. Fleshing out this picture, the hybrid model for DNNs then is DISPLAYFORM0 p(x t |w t). = T t=1 P (w t |x t)/P (w t),and for a bidirectional LSTM is DISPLAYFORM1 P (w t |X)/P (w t).These (scaled) likelihoods can then plug back into the channel model using Bayes rule, as before, and can also plug into an overall sequence training objective function such as utterance-level MMI or sMBR BID18 BID20. Note that for the typical DNNs and LSTMs used by the community, sequence training has the potential to estimate implicitly the scaling term p(s) as an output unit bias term, so that it optimizes the sequence-level criterion. The overall construction of a sequence-level training objective, converting local frame-level scores from a discriminative model into "generative" scaled likelihoods, only to then plug those into a discriminative sequence training criterion, may seem like a strange hybrid indeed. Nonetheless, this has been a remarkably effective approach, that still constitutes the state-of-the-art BID31. In contrast, end-to-end models such as Listen, Attend & Spell (LAS) BID3 or RNN Transducer (RNN-T) BID7 dispense with the modular channel model altogether, and directly optimize a discriminative sequence-level model with a discriminative criterion. By construction, these models are all-inone models that represent P (W |X) directly, with no chaining of sub-module probabilities. The model directly defines the posterior needed for recognition. If one has a large enough training set of supervised data pairs, this can be thought of as the perfect model. State-of-the-art, or near state-of-the-art have been reported for these models on challenging tasks BID0 BID4. What has also been reported is the combination of end-to-end models with other models, e.g. an LM P * (W) trained on vastly more text data than is found in the transcripts of the audio data used to train P (W |X). Here the story is much less rosy. One approach, sometimes referred to as "Shallow Fusion", is simple interpolation of the joint end-to-end model score with the LM score, e.g. BID14: DISPLAYFORM0 The external LM has also been brought in through other methods, e.g. "Deep Fusion" BID36 and "Cold Fusion" BID23. Though giving some practical gains, these are heuristic approaches with no clear mathematical justification such as Bayes' theorem. An additional approach to model combination, the "sequence version" of the hybrid model for ASR described above BID2, is to form a separate estimate P (W) based only on the transcript-side of the {audio + transcript} data used to train P (W |X), and use it to form a scaled likelihood of the acoustics given W: DISPLAYFORM0 related to p(X|W) by Bayes rule, DISPLAYFORM1 As the p(X) term doesn't affect recognition, the scaled likelihood can then combine with P * (W) for recognition using DISPLAYFORM2 following Bayes rule, but with some uncertainty whether the scaled likelihood is a meaningful representation of p(X|W). It remains to be seen whether this approach is more effective empirically than the fusion techniques. 
As for the conventional ASR approach with discriminative acoustic models, the scaled likelihood can be plugged into an overall sequence training objective function such as utterance-level MMI or sMBR with a separate language model, if desired. Depending on the model used for P(W|X), note that sequence training has the potential to estimate implicitly the scaling term in Equation 8, so that it optimizes the sequence-level criterion. As described earlier, generative acoustic models offer a mathematically principled and interpretable approach to module combination, following Bayes rule. While the first-generation generative models used for ASR were an excellent fit to the modular channel model, the strong independence assumptions of those early models severely hobbled their effectiveness, especially compared to the deep discriminative models, DNNs and LSTMs, that eventually replaced them BID20 BID22. In contrast, generative models have made large advances in the area of speech synthesis. State-of-the-art TTS approaches such as WaveNet and Tacotron BID28 BID32 specifically model p(X|W) as a fully-conditioned autoregressive model, using the entire unidirectional sequence of past observations x_t (where x_t is either an acoustic feature vector or a single scalar waveform sample), and typically the entire sequence of symbols w_i constituting the sequence W. The "conditional WaveNet" model BID28 exemplifies this well, defining p(X|S_W) = ∏_{t=1}^{T} p(x_t|x_1:t−1, S_W), where the input S_W represents e.g. text and speaker features, and where observed samples x_t are targets of e.g. an N-way quantized softmax trained with CE, using e.g. a DNN with dilated convolutions. Mixture density networks BID1, based on either DNNs or RNNs, have also been effective as deep generative TTS models BID34 BID35. Their use for ASR was proposed 20 years ago but not fully investigated BID21. Mixture density networks can use the same conditioning over observations and labels as the sample-by-sample WaveNet model, but use a Gaussian mixture density function: p(x_t|x_1:t−1, S_W) = Σ_i c_i N(x_t; μ_i, Σ_i), with the mixture parameters c_i, μ_i and Σ_i generated by the network from the conditioning input. This is a reasonable choice when x_t is a feature vector, as opposed to a scalar sample as in WaveNet. As ASR models typically operate on feature vectors, and not at the sample level, mixture density networks may be a good first step in investigating deep generative models for ASR. The original channel model provides a principled foundation for ASR with deep generative acoustic models such as the TTS models just discussed BID12 BID21. Whenever a likelihood p(X|W) is available, ASR is possible, as long as a separate LM P(W) is available to form the joint probability P(X, W), and given a decoder that can generate, score and prune different symbol sequence hypotheses W. Time-synchronous beam search can be used, among several possible decoding strategies. The strong conditioning on the entire unidirectional symbol sequence history characterizing several of the models discussed here means that ASR decoding in principle cannot merge partial hypotheses, but that is also true for decoding with end-to-end ASR models such as LAS and RNN-T. Another approach to ASR decoding with the models described here is N-best rescoring of a list of hypotheses from an existing conventional ASR system. Note that the LM P(W) can of course be "deep" too, e.g. an RNN-LM. The "deep channel model" is just the channel model, with deep modules. One generative approach to ASR that has shown lasting effectiveness is the tandem approach BID6 BID10, still yielding near state-of-the-art results BID19 BID27.
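A minimal numpy sketch of such a mixture density output for one frame: given per-component parameters emitted by a network, it computes log p(x_t | context) via log-sum-exp. The diagonal-covariance choice and all shapes are assumptions for illustration.

import numpy as np
from scipy.special import logsumexp

def gmm_log_density(x, log_mix, means, log_stds):
    """x: (D,); log_mix: (I,); means, log_stds: (I, D). Diagonal covariance."""
    d = x.shape[0]
    quad = -0.5 * np.sum(((x - means) / np.exp(log_stds)) ** 2, axis=1)
    norm = -np.sum(log_stds, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    return logsumexp(log_mix + quad + norm)   # log sum_i c_i N(x; mu_i, sigma_i)

rng = np.random.default_rng(0)
I, D = 4, 8                                   # mixture components, feature dim
raw = rng.normal(size=I)                      # pretend network logits for weights
log_mix = raw - logsumexp(raw)                # log_softmax: weights sum to one
x = rng.normal(size=D)                        # one acoustic feature frame
nll = -gmm_log_density(x, log_mix, rng.normal(size=(I, D)),
                       rng.normal(scale=0.1, size=(I, D)))
print(nll)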
In contrast to the use of scaled likelihoods formed directly from the output of a discriminative model, described earlier, in the tandem approach features from a penultimate layer are extracted from the discriminative model and used in a separate generative model such as GMMs. Typically these deep features are obtained from a bottleneck layer BID8, concatenated with acoustic features, and transformed using e.g. Principal Components Analysis (PCA) before being used as input to the generative model BID27. The result is a consistent generative model, but one now operating on features that are far more discriminative than raw acoustic features by themselves. Related work has investigated the embedding of a gaussian layer directly into the same DNN architecture, enabling joint discriminative training of the feature extractor and the gaussian density parameters BID26 BID30. These studies limited themselves to DNNs for the discriminative feature extraction, and to standard GMMs for the generative model. The approach proposed here extends this past work in both dimensions. Though not explored in this study, the goal of integrating or chaining ASR and TTS together has a long history BID5 BID12 BID24 BID25. Recently, this has been used for semi-supervised ASR and TTS training BID11 BID25. The models discussed here are clearly related to this body of work, as well as to the ideas for prediction-based unsupervised training discussed in BID16. This study explores two dimensions of depth for ASR-oriented generative models: the depth of the features being modeled, and the depth of the density function itself. The features considered were:
• Shallow features: raw acoustic features, such as logmel energies or cepstral features;
• Deep features: features obtained from the last layer of an LSTM stack trained discriminatively with a standard criterion such as CE, in the spirit of the original tandem approach.
The density models considered were:
• Shallow density layers: vanilla gaussian mixture models, though implemented in TensorFlow and trained with SGD (using either ML or CE);
• Deep density networks: mixture density networks, both autoregressive and non-autoregressive, used to generate density functions capable of modeling acoustic feature vector sequences strongly conditioned on past features and labels, trained with ML.
2 Shallow mixture density models estimated using ML/SGD, modeling shallow features. FIG0 illustrates the simplest of the models described here, a vanilla gaussian mixture, p(x_t|w_m) = Σ_{i=1}^{I} c_{i,m} N(x_t; μ_{i,m}, Σ_{i,m}), defined for any label w_m and any D-dimensional feature vector x_t. This likelihood makes strong independence assumptions; it is not conditioned on previous observations, nor on previous label symbols. Nonetheless, it can plug into the complete likelihood of the utterance, based on Equation 2. Implemented in TensorFlow, the parameters for this model are represented as TensorFlow Variables. Standard deviations and mixing weights are represented in the log domain, with log_softmax used to enforce the constraint that the mixing weights sum to one BID30. A simple radial covariance model (using a single scalar standard deviation per mixture component) was found to be convenient and effective: log p(x_t|w_m) = logsumexp_i [ log c_{i,m} + μ_{i,m}ᵀx_t/σ_{i,m}² − ||x_t||²/(2σ_{i,m}²) − ||μ_{i,m}||²/(2σ_{i,m}²) − D log σ_{i,m} − (D/2) log 2π ], where the log of the standard multivariate gaussian pdf has been decomposed to emphasize the similarity to the typical DNN/LSTM final layer, log(softmax(wx + b)); the main differences being the use of multiple mixture components, and more importantly, the self-normalized nature of the density function.
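A small numerical check of the radial decomposition above: expanding the quadratic term yields a per-component score that is affine in x_t plus a shared curvature term, mirroring a logits layer followed by a (self-normalized) log-sum-exp. Parameter values here are random placeholders.

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
D, I = 8, 3
x = rng.normal(size=D)
mu = rng.normal(size=(I, D))
log_sigma = rng.normal(scale=0.1, size=I)
log_c = np.log(np.full(I, 1.0 / I))

sig2 = np.exp(2 * log_sigma)
direct = (log_c - 0.5 * np.sum((x - mu) ** 2, axis=1) / sig2
          - D * log_sigma - 0.5 * D * np.log(2 * np.pi))
# Decomposed form: a linear term (mu / sigma^2) . x plus input-independent
# biases, plus the shared -|x|^2 / (2 sigma^2) curvature term.
decomp = (log_c + (mu / sig2[:, None]) @ x
          - 0.5 * np.dot(x, x) / sig2
          - 0.5 * np.sum(mu * mu, axis=1) / sig2
          - D * log_sigma - 0.5 * D * np.log(2 * np.pi))
assert np.allclose(logsumexp(direct), logsumexp(decomp))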
(See BID9 for discussion of the equivalence of gaussian and log-linear models under certain conditions). The model can be strengthened through a transformation A of the input feature x, where A is either a diagonal or lower-triangular matrix, with exponentially-transformed diagonal values to ensure the positivity of the matrix determinant. The transformed feature vector Ax can then be used in the radial density above, corresponding to a gaussian mixture model with shared diagonal covariances or shared full covariances, overlaying the gaussian-specific radial model. The results described in the main body of this article are all for densities using the simple radial model; results with stronger covariance models are described in Appendices A and B. SGD-based optimization of the density function can be done with a number of criteria; the natural fit for this simple generative model is Maximum Likelihood (ML), but discriminative criteria such as Cross Entropy (CE) or sequence-discriminative criteria such as utterance-level MMI or sMBR can also be used. ML-based estimation of GMMs was traditionally performed with the highly effective Expectation-Maximization (EM) algorithm BID17. In contrast to the EM algorithm, SGD is not specifically suited to ML/GMM optimization, but its generality is highly appealing in the context of joint optimization of density functions with deep features, or of density parameters that are themselves generated by arbitrary, possibly recurrent neural network stacks. The use of SGD for ML-based estimation of GMMs has not been widely reported. A first step in this study was to verify that ML/SGD can effectively learn the correct estimates given synthetic data drawn from known distributions. For low-dimensional scenarios easily inspected visually, it was found that as long as the features were mean/standard-deviation normalized, standard rules of thumb for GMM parameter initialization BID33 led to surprisingly effective ML/SGD estimation, even for mixtures with highly overlapping components, such as illustrated in FIG1. For real-world data, two rough sanity checks were used: TensorBoard histograms of mixing weights can diagnose unhealthy situations where a single gaussian component overwhelms all others in the mixture, and log-likelihoods on the training set should improve significantly with increasing numbers of mixture components. Careful comparison of ML/SGD-estimated GMMs with EM-estimated GMMs, and schemes for mixture splitting in TensorFlow, could yield insights and better performance.
3 Autoregressive and non-autoregressive deep mixture density networks modeling shallow features. FIG2 illustrates a mixture density network generated from an LSTM stack, predicting the next acoustic feature frame using all label symbols w_1:t up to that point in time (input to the LSTM stack via a class embedding), and, in the autoregressive version of the model illustrated here, all previous acoustic features as well: p(X|S_W) = ∏_{t=1}^{T} p(x_t|x_1:t−1, w_1:t), defined for a specific alignment of labels w_t to observations x_t. This likelihood can plug into Equation 11, a much more powerful model than Equation 2. In the non-autoregressive version, the mixture density network uses only the label symbols (and no previous acoustic features) to predict the next acoustic feature frame: p(X|S_W) = ∏_{t=1}^{T} p(x_t|w_1:t). The power of these density networks is that the density function changes dynamically as a function of the input it is provided.
Via the input of the unidirectional embedded symbol sequence, the model can leverage long-span symbol context, a key feature when using non-standard acoustic modeling units such as graphemes. The autoregressive version has the further feature that the ground truth of past observed acoustic feature vectors enables a kind of adaptation of the predictive model to the actual speech signal input to the model. In contrast, the non-autoregressive model can only leverage the given unidirectional symbol context in making its predictions; it has to model all acoustic variability observed in the training set, across all speakers, speaking styles and acoustic environments. Some variants on the full autoregressive model were considered (see the sketch following this section for an illustration of assembling these inputs):
• SH=n: size of the frame shift between the observations input into the LSTM stack and the target of the prediction, if different from 1. E.g., SH=3 refers to predicting x_t from x_1:t−3.
• BN=n: size of a linear bottleneck applied to the features before they are fed into the LSTM stack, if any.
• ST=n: size of the stride over the frames input into the LSTM stack, if different from 1. E.g., ST=10 refers to only using every 10th frame, feeding in 0s in between.
These are ways to provide less than the full past input to the autoregressive model. One can expect a worsening of the prediction log-likelihood, but perhaps an improvement in frame accuracy. Note that "Professor Forcing" BID15 was proposed to address the related problem of "Teacher Forcing" in autoregressive models.
4 Shallow mixture density layer modeling deep discriminative features. FIG3 illustrates a shallow density model of deep features. The deep features are obtained from an LSTM stack, separately trained with CE as a discriminative acoustic encoder. The deep features in question are simply the output of the last LSTM layer, before the logits and softmax layer BID29. In contrast to previous tandem work using DNNs BID8 BID30, no bottleneck layer was found to be necessary, presumably due to the more compact LSTM layer size (e.g. 512 for the LSTMs in this study, vs 2048 for the DNNs in BID30). The model here is: p(d(X)_t|w_m) = Σ_{i=1}^{I} c_{i,m} N(d(X)_t; μ_{i,m}, Σ_{i,m}), where d(X)_t is the deep feature vector at time t. If the LSTM stack implementing the discriminative acoustic encoder d is frozen, this is just a vanilla GMM layer that can be trained with ML, whose features happen to be highly discriminative. Joint training of the density function and the feature extractor d using ML is feasible, but requires proper handling of the Jacobian of the inverse of d. Joint training using CE, or utterance-level sMBR/MMI, however, has no such issue BID19 BID26 BID30. Finally, one can apply a deep density network to the modeling of deep features, as illustrated in FIG4. The result is an architecture closely mirroring the well-known RNN Transducer BID7, but explicitly formulated as a density function generated from an LSTM stack encoding a label sequence, applied to deep features encoding the acoustic sequence. The resulting likelihood is: p(d(X)|S_W) = ∏_{t=1}^{T} p(d(X)_t|w_1:t). If the LSTM stack encoding the deep features is frozen, this is a straightforward mixture density network that can be trained with ML, but with highly discriminative features. As with the shallow density model of deep features, it can be trained jointly using discriminative criteria; and joint training with ML is again feasible but requires proper handling of the Jacobian of the inverse of d.
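Returning to the autoregressive input variants listed earlier, here is the promised numpy sketch of assembling the SH, BN, and ST inputs; the array shapes and the fixed random bottleneck projection are assumptions for illustration.

import numpy as np

def ar_inputs(X, sh=1, bn=None, st=1, seed=0):
    """X: (T, D) feature frames; returns the past-context inputs whose row t
    is used to predict X[t] under the SH / BN / ST variants."""
    T, D = X.shape
    past = np.zeros_like(X)
    past[sh:] = X[:T - sh]            # SH: predict x_t from frames up to t-sh
    if st > 1:                        # ST: keep every st-th frame, 0s between
        keep = (np.arange(T) % st) == 0
        past = past * keep[:, None]
    if bn is not None:                # BN: fixed linear bottleneck to bn dims
        P = np.random.default_rng(seed).normal(size=(D, bn)) / np.sqrt(D)
        return past @ P               # (T, bn)
    return past

X = np.arange(20.0).reshape(10, 2)
print(ar_inputs(X, sh=3, st=2).shape)  # (10, 2)
print(ar_inputs(X, sh=1, bn=1).shape)  # (10, 1)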
The labels w can be input time-synchronously (repeating input labels for the length of their frame alignment) as well as label-synchronously, only inputting label transitions; the latter approach produced better frame accuracies, and was used in the experiments for this model. To our knowledge, this is a novel application of mixture density networks to the modeling of deep features. The goal of the experiments was to verify that the added conditioning of the mixture density networks examined here indeed improves log-likelihoods as expected, and to get some insights about likely WER from the frame accuracies for these models. Frame accuracy for the shallow density layers (whether they use shallow or deep features) is straightforward to compute in TensorFlow, as N label class outputs for any given feature vector x_t can easily be computed. For the deep density networks, in principle only one label class output is generated at a time, from the input of a specific embedded class label. Frame classification for density networks can be computed by producing N density outputs for N class label hypotheses input to the LSTM stack. In all experiments here, a simple label class prior P(s) is estimated in-network BID30 and used jointly with the likelihood produced by the density model to make the frame classification. (Though one could use in-network estimation of much more powerful (e.g. LSTM-based) LMs to tremendously boost frame accuracy, the simple class prior was deemed fit to provide insight into the discriminative power of the AM itself). An issue for deep density networks is the label context to use when measuring frame accuracy. In a real ASR decoding scenario, multiple partial label sequences would be hypothesized by the decoder. The approach adopted here was to measure frame accuracy for the density networks via conditioning on the ground-truth label sequence up to time t−1, with only the label at time t being hypothesized. This can be computed efficiently in TensorFlow by batching the input of the N label hypotheses at all times t, but splicing into the inference pass a set of separately generated LSTM states up to t−1 obtained from ground-truth-only input. FIG5 illustrates the use of class input-batching to do this efficiently in TensorFlow. The dataset used here is a set of 20000 hours of spontaneous speech from anonymized, human-transcribed Voice Search data for American English (en-us). The training examples are sequences of unstacked (single-frame) feature vectors extracted from a 64 ms window shifted over the audio samples in 30 ms increments. The acoustic feature vectors used in the "deep feature" models, based on discriminative acoustic encoder architectures, are 256-dimensional logmel energies; those used in the "shallow feature" models are 96-dimensional cepstral coefficients, consisting of a stack of 32+32+32 mel filterbank cepstral coefficients (MFCCs) and their first and second derivatives, extracted from the same 64 ms windowing scheme. Compared to logmel features, MFCCs are a reasonable choice when using a radial or diagonal covariance model, and the time derivatives provide a representation of temporal context BID21. Results for shallow feature density networks using logmels are reported in Appendices A and B; the frame accuracies detailed there are significantly lower than those for MFCCs with derivatives described in the following. The overall training architecture closely follows the TensorFlow-based implementation described in BID31.
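A minimal sketch of the frame-classification rule described in this section: combine each hypothesized label's density score with the class log-prior, conditioning on the ground-truth history so all N hypotheses at a frame can be batched. The scores below are random placeholders standing in for the density model's outputs.

import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 42                                 # frames, label classes (CI phones)
# log p(x_t | ground-truth w_1:t-1, hypothesized w_t = s), batched over s:
log_density = rng.normal(size=(T, N))          # placeholder density scores
log_prior = np.log(np.full(N, 1.0 / N))        # in-network class prior P(s)
truth = rng.integers(0, N, size=T)             # ground-truth frame labels

# All N hypotheses at frame t share the same ground-truth LSTM history, so
# they can be computed in one batched inference pass.
pred = np.argmax(log_density + log_prior, axis=1)
print("frame accuracy:", np.mean(pred == truth))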
Each training step has a batch size of 64 example utterances, with a fixed static LSTM unrolling of 20 frames. Given the total batch size of 1280 frames, 1 epoch of training over the 20000 hour training set corresponds to 1.875M training steps. For every configuration, at least 10 epochs of SGD training were run. All LSTM stacks used are 5-layer models with 512 units each, for a total of roughly 4.3M parameters, not counting the final layer, whose size depends on the number of output classes and, for the density models, on the number of mixture components. The class outputs are 42 context-independent (CI) phonemes or 8192 decision-tree-clustered context-dependent (CD) phonemes. A label class embedding of size 128 was used for all experiments. CE training of the density models combines the acoustic likelihood output by the density model with an in-network estimate of the state prior P(s) (as used in the computation of frame accuracy) BID30. The statistics reported were somewhat unscientifically obtained from smoothed TensorBoard plots when the models were deemed to have converged. The smoothing factor for the TensorBoard moving average was 0.95. Frame accuracies are based on a small held-out dataset of 1000 utterances (with an average of 120 feature frames per utterance) and are hence quite noisy; only 2 significant digits are reported for them. The costs (negative log-likelihoods) are smoothed costs for the training criterion at convergence. As the costs are running averages over the entire training set, they are less noisy than the frame accuracy statistics. These metrics are admittedly not very rigorous, but nonetheless offer insights into the conditions explored. As roughly 27% of the data is silence, a frame accuracy not surpassing that can officially be considered abysmal. TAB0 describes the costs (negative log-likelihoods) and frame accuracies for all models using shallow mel-cepstral features. Shallow density with shallow features: The frame classification of an ML-trained radial gaussian mixture barely surpasses abysmal levels, at 35%. In contrast, a reference 5-layer DNN with a matching number of parameters is at 55%. However, CE-training the gaussian mixture model brings the frame accuracy up to 48%. This may be a reasonable result, confirming that the flat structure of the GMM is not as effective a classifier as a deeply structured DNN, but isn't completely off either if trained discriminatively. The cost of 127 provides a baseline for the deep density models in the rest of TAB0; those models are expected to improve significantly over that. Deep autoregressive densities with shallow features: As expected, the deep density models do much better in terms of cost than the shallow density model just discussed. Also as expected, the autoregressive models have better log-likelihoods than the non-autoregressive models, and furthermore, the full autoregressive models do better than the autoregressive models whose input was limited via the schemes for prediction shift (SH), feature bottleneck (BN) and frame stride (ST). However, the trend for frame accuracy is the opposite: the more past input is provided to the LSTM stack generating the mixture density, the worse the frame accuracy. One perspective is that providing too much information about past observations makes the prediction problem too easy, so that the label class information is not necessary, and hence the prediction is not discriminative. (Professor Forcing BID15 may provide a solution to this problem).
Results are also shown for the unsupervised versions (Unsup), where all class input is merged into a single symbol. One expects that the label class information would help improve the prediction cost over the unsupervised prediction, but that is not the case for the CI phone scenario, corresponding to the "CI AR" result, with a cost of 43.5, compared to the "Unsup AR" cost of 43.0. The "CD AR" model does a bit better with a cost of 42.3. Shifting the prediction target 3 frames into the future (SH=3), however, produces a gain for the CI model over the unsupervised version, 116 vs 118. (See Appendix B for a deeper exploration of the effect of shifting the prediction target). Going from 10 mixture components to 256 significantly improves the prediction cost, but doesn't significantly affect the frame accuracies for the autoregressive models here. Deep non-autoregressive densities with shallow features: Removing all past observation input makes the prediction cost much worse than for the autoregressive models, but the frame accuracies are much better. They are however still not competitive with the simple DNN, and in fact a bit worse than the CE-trained shallow density model. Shallow densities with deep features, joint training with CE: Given the form of the radial density defined in Equation 14, it should not surprise us that the classification capacity of an LSTM stack using the radial density as the last layer, trained discriminatively, would closely match the classification capacity of an LSTM stack with a standard logits layer. This is what we see for both CI and CD versions, with a single radial component per mixture, achieving frame accuracies of 87% and 79% respectively, matching the standard CI and CD LSTMs. Shallow densities with deep features, ML training: All the ML results here require pre-training of the standard LSTM (either CI or CD) with CE first. There is a clear drop in performance compared to the pure CE results, especially for the CD case, with 8192 output classes to discriminate compared to 42. However, the deep features here are not mean/standard-deviation normalized (nor PCA'd, etc.); the simple radial covariance model clearly could be improved upon for this particular configuration, if desired. The dual LSTM stack architecture does quite well as a predictor of the deep features, using just the label class context. The prediction cost (e.g. -400 for the 10-component mixture version) is much better than the cost (-305) for the shallow density model of the same deep features, and in fact it has significantly better frame accuracy too, 85% vs 77%. It may be argued that the deep features here have essentially extracted phoneme information from the CE training of the standard LSTM stack, and that the prediction problem is now really a label prediction problem, given the previously observed label class input. However, there is nothing wrong with that from an ASR point of view, as the deep features are a function of the acoustics. Furthermore, this may be exactly what is interesting about this last configuration: using an encoding of a label sequence, we are trying to predict an encoding of an acoustic sequence. Visualization of predictive quality for deep densities with shallow logmel features: See Appendix C for plots of generated logmel spectra, specifically focusing on the difference between supervised and unsupervised prediction quality as a function of the prediction shift ("SH").
Four generative models were described and evaluated, the cartesian product of {shallow, deep} features and {shallow, deep} densities. This recapitulates but also extends the original tandem approach to ASR. As no WER evaluations were run, these conclusions are rather tentative, but they do give some insights into the models proposed. The log-likelihood results follow our intuitions about the strength of the models. The weak results for supervised vs unsupervised prediction may suggest an issue with the experimental setup, e.g. the fixed alignments used are poor, or they could reflect the weakness of the radial covariance model. The somewhat better result for the CD phone model suggests either that the CI label context is not being completely represented by the LSTM state, or that "future blindness" (not knowing the identity of the immediately following phoneme) is a significant issue, though other strategies such as re-alignment or decision delaying could address that too. Appendix A and Appendix B explore the strength of the covariance model used, and the difference between supervised and unsupervised models, in greater depth. The frame accuracies for the models using shallow acoustic features seem too low to be useful for ASR. One could argue, however, that the autoregressive models, properly used in an ASR decoding framework, may perform much better than the frame accuracies suggest. The difference in prediction cost between the supervised and unsupervised scenarios discussed earlier (and in Appendices A and B) may be a better metric than frame accuracy for getting insight into ASR performance. The use of deep features solves many of the problems generative models have in modeling environmental and speaker variability, and immediately provides strong discriminative power as measured by frame accuracy, but it may be seen as cheating, and no longer purely generative. Given their state-of-the-art frame accuracy, presumably the corresponding generative models described here would be viable for ASR. The question then is: how does the generative nature of these models help us? E.g., does it allow for better compositionality with separate LMs, as claimed in the Introduction? Does it provide the advantages attributed to generative models regarding unsupervised training or adaptation? Full ASR decoding experiments with WERs are needed to address those questions.
Appendix A: Evaluation of different covariance models for autoregressive density networks modeling shallow logmel features. This Appendix describes the effect of covariance models stronger than the radial model used in the main body of the article, evaluated on logmel features modeled by autoregressive density networks. Only a single gaussian per mixture is used for all radial ("Rad"), diagonal ("Diag") and full ("Full") covariance models. The same prediction shift ("SH") of 2 frames is used. (A shift of 2 frames ensures there is nearly no overlap between the input frames and the prediction target frame, given the 64 ms audio analysis window size and 30 ms window step size). Costs and frame accuracies for these models are shown in TAB3. One can see that there is no big difference in cost between the radial and diagonal covariance models, but a big improvement is observed for the use of full covariance models. The difference between supervised and unsupervised models, also reported here, is not clearly correlated with the strength of the covariance model.
The improvements in prediction cost do not improve the low frame accuracy observed for the logmel density models, but as discussed in the main body of the article, the frame accuracy metric used may be misleading.

Appendix B: Effect of different target prediction shifts for autoregressive density networks modeling logmel features
This Appendix describes the effect of different target prediction shifts ("SH") for autoregressive density networks modeling logmel features, focusing on the difference between supervised and unsupervised models. The results are shown in TAB4. One can see that the prediction cost is substantially worse for the larger prediction shifts, the difference between supervised and unsupervised results is substantially larger, and no real impact is observed on frame accuracy.

Appendix C: Visualization of generated logmel spectra for supervised and unsupervised prediction
This Appendix uses logmel spectrograms to illustrate the behavior of both unsupervised and supervised autoregressive deep density networks trained to predict logmel features with different prediction target shifts. Radial deep density networks were trained with a single Gaussian per mixture, and the sequence of generated mean vectors was plotted over time for unseen data from a separate dataset of read speech, alongside the actual target logmel feature vectors for the same data. The figures here illustrate target features and generated means for individual data mini-batches, i.e. 64 concatenated segments of 20 frames each, used in the truncated LSTM unrolling scheme described in Section 6.2. For a prediction shift of 1 frame (corresponding to the "SH=1" results for the radial model in TAB4), there is no visually discernible difference between unsupervised and supervised prediction, illustrated in Figure 7 and Figure 8, respectively; the label encoding is not necessary to make a good prediction. (A shift of 1 frame corresponds to a nearly 50% overlap in analysis window, given the 64 ms window and 30 ms window step.) This behavior matches the rather small advantage in prediction cost observed for the supervised model over the unsupervised model in the corresponding "SH=1" results in TAB4. In contrast, a shift of 4 frames makes the unsupervised prediction quite poor (Figure 9), and there is a noticeable advantage for the supervised prediction, which can leverage the label encoding (FIG0). These plots intuitively match the much better prediction cost for the supervised model compared to the unsupervised model in the corresponding "SH=4" results in TAB4.
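As a closing illustration of Appendix A's comparison, here is a small, hedged sketch of the per-frame Gaussian negative log-likelihood under the three covariance structures (radial, diagonal, full); the helper function is hypothetical and simply delegates to scipy for the density.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_nll(x, mu, cov_type, param):
    """Negative log-likelihood of a frame x under one Gaussian component.

    cov_type: 'radial' (one shared variance, sigma^2 * I),
              'diag'   (a per-dimension variance vector),
              'full'   (a full covariance matrix).
    """
    d = x.shape[0]
    if cov_type == 'radial':
        cov = param * np.eye(d)      # param: scalar sigma^2
    elif cov_type == 'diag':
        cov = np.diag(param)         # param: length-d variance vector
    else:
        cov = param                  # param: (d, d) covariance matrix
    return -multivariate_normal.logpdf(x, mean=mu, cov=cov)

d = 4
x, mu = np.random.randn(d), np.zeros(d)
print(gaussian_nll(x, mu, 'radial', 1.0))
print(gaussian_nll(x, mu, 'diag', np.ones(d)))
print(gaussian_nll(x, mu, 'full', np.eye(d)))
```

The radial model spends one parameter on the whole covariance, the diagonal one per dimension, and the full model d(d+1)/2, which is the capacity ordering behind the cost differences reported in TAB3.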
This paper proposes the use of a deep generative acoustic model for automatic speech recognition, combining naturally with other deep sequence-to-sequence modules using Bayes' rule.
Recent studies show that widely used Deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Different defense methods have also been explored to defend against such adversarial attacks. While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.

Deep neural networks (DNNs) have demonstrated their outstanding performance in different domains, ranging from image processing BID18 BID10 and text analysis BID3 to speech recognition. Though deep networks have exhibited high performance for these tasks, recently they have been shown to be particularly vulnerable to adversarial perturbations added to the input images BID34 BID7. These perturbed instances are called adversarial examples, which can lead to undesirable consequences in many practical applications based on DNNs. For example, adversarial examples can be used to subvert malware detection, fraud detection, or even potentially mislead autonomous navigation systems BID30 BID5 BID8 and therefore pose security risks when applied to security-related applications. A comprehensive study about adversarial examples is required to motivate effective defenses. Different methods have been proposed to generate adversarial examples such as fast gradient sign methods (FGSM) BID7, which can produce adversarial instances rapidly, and optimization-based methods (C&W) BID1, which search for adversarial examples with a smaller magnitude of perturbation. One important criterion for adversarial examples is that the perturbed images should "look like" the original instances. The traditional attack strategies adopt the L_2 (or other L_p) norm distance as a perceptual similarity metric to evaluate the distortion BID9. However, this is not an ideal metric BID16 BID14, as L_2 similarity is sensitive to lighting and viewpoint changes of a pictured object. For instance, an image can be shifted by one pixel, which will lead to a large L_2 distance, while the translated image actually appears "the same" to human perception. Motivated by this example, in this paper we aim to look for other types of adversarial examples and propose to create perceptually realistic examples by changing the positions of pixels instead of directly manipulating existing pixel values. This has been shown to better preserve the identity and structure of the original image BID44.
Thus, the proposed spatially transformed adversarial example optimization method (stAdv) can keep adversarial examples less distinguishable from real instances (such examples can be found in Figure 3). Various defense methods have also been proposed to defend against adversarial examples. Adversarial training based methods have so far achieved the most promising results BID7 BID38 BID28. They have demonstrated the robustness of improved deep networks under certain constraints. However, the spatially transformed adversarial examples are generated through a rather different principle, whereby what is being minimized is the local geometric distortion rather than the L_p pixel error between the adversarial and original instances. Thus, the previous adversarial training based defense methods may appear less effective against this new attack given the fact that these examples generated by stAdv have never been seen before. This opens a new challenge about how to defend against such attacks, as well as other attacks that are not based on direct pixel value manipulation. We visualize the spatial deformation generated by stAdv; it is seen to be locally smooth and virtually imperceptible to the human eye. In addition, to better understand the properties of deep neural networks on different adversarial examples, we provide visualizations of the attention of the DNN given adversarial examples generated by different attack algorithms. We find that the spatial transformation based attack is more resilient across different defense models, including adversarially trained robust models. Our contributions are summarized as follows:
• We propose to generate adversarial examples based on spatial transformation instead of direct manipulation of the pixel values, and we show realistic and effective adversarial examples on the MNIST, CIFAR-10, and ImageNet datasets.
• We provide visualizations of optimized transformations and show that such geometric changes are small and locally smooth, leading to high perceptual quality.
• We empirically show that, compared to other attacks, adversarial examples generated by stAdv are more difficult to detect with current defense systems.
• Finally, we visualize the attention maps of deep networks on different adversarial examples and demonstrate that adversarial examples based on stAdv can more consistently mislead adversarially trained robust deep networks compared to other existing attack methods.

Here we first briefly summarize the existing adversarial attack algorithms as well as the current defense methods. We then discuss the spatial transformation model used in our adversarial attack.

Adversarial Examples Given a benign sample x, an attack instance x_adv is referred to as an adversarial example if a small perturbation δ is added to x (i.e., x_adv = x + δ) so that x_adv is misclassified by the targeted classifier g. Based on the adversarial goal, attacks can be classified into two categories: targeted and untargeted attacks. In a targeted attack, the adversary's objective is to modify an input x such that the target model g classifies the perturbed input x_adv as a chosen target class, which differs from its ground truth. In an untargeted attack, the adversary's objective is to cause the perturbed input x_adv to be misclassified in any class other than its ground truth.
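To make the targeted/untargeted distinction concrete, here is a minimal sketch of the two success criteria just described; the toy classifier is hypothetical and stands in for g.

```python
import numpy as np

def is_adversarial(g, x_adv, y_true, target=None):
    """Success test for an adversarial example against classifier g.

    Untargeted attack (target=None): any misclassification suffices.
    Targeted attack: the prediction must equal the chosen target class.
    """
    pred = int(np.argmax(g(x_adv)))
    return pred == target if target is not None else pred != y_true

# Toy stand-in classifier: a fixed random linear map over flattened inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))
g = lambda x: W @ x.ravel()
x_adv = rng.normal(size=(28, 28))
print(is_adversarial(g, x_adv, y_true=3))             # untargeted check
print(is_adversarial(g, x_adv, y_true=3, target=7))   # targeted check
```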
Based on the adversarial capabilities, these attacks can be categorized as white-box and black-box attacks, where an adversary has full knowledge of the classifier and training data in the white-box setting BID35 BID7 BID1 BID25 BID30 BID0 BID17 BID20, while having zero knowledge about them in the black-box setting BID29 BID24 BID26 BID27. In this work, we will focus on the white-box setting to explore what a powerful adversary can do based on Kerckhoffs's principle BID33 to better motivate defense methods.

In the computer vision and graphics literature, two main aspects determine the appearance of a pictured object BID37: the lighting and material, which determine the brightness of a point as a function of illumination and object material properties, and the geometry, which determines where the projection of a point will be located in the scene. Most previous adversarial attacks BID7 focus on changing the lighting and material aspect, while assuming the underlying geometry stays the same during the adversarial perturbation generation process. Modeling geometric transformation with neural networks was first explored by "capsules," computational units that locally transform their input for modeling 2D and 3D geometric changes BID13. Later, BID15 demonstrated that similar computational units, named spatial transformers, can benefit many visual recognition tasks. BID43 adopted the spatial transformers for synthesizing novel views of the same object and has shown that a geometric method can produce more realistic results compared to pure pixel-based methods. Inspired by these successes, we also use the spatial transformers to deform the input images, but with a different goal: to generate realistic adversarial examples.

Defensive Methods Following the emergence of adversarial examples, various defense methods have been studied, including adversarial training BID7, distillation BID31, gradient masking BID9 and feature squeezing BID40. However, these defenses can either be evaded by C&W attacks or only provide marginal improvements BID2 BID11. Among these defenses, adversarial training has achieved the state-of-the-art performance. BID7 proposed to use the fast gradient sign attack as an adversary to perform adversarial training, which is much faster, followed by ensemble adversarial training BID38 and projected gradient descent (PGD) adversarial training BID28. In this work, we explicitly analyze how effective the spatial transformation based adversarial examples are under these adversarial training based defense methods.

Here we first introduce several existing attack methods and then present our formulation for producing spatially transformed adversarial examples. Given a learned classifier g: X → Y from a feature space X to a set of classification outputs Y (e.g., Y = {0, 1} for binary classification), an adversary aims to generate an adversarial example x_adv for an original instance x ∈ X with its ground truth label y ∈ Y, so that the classifier predicts g(x_adv) ≠ y (untargeted attack) or g(x_adv) = t (targeted attack), where t is the target class. All of the current methods for generating adversarial examples are built on directly modifying the pixel values of the original image. The fast gradient sign method (FGSM) BID7 uses a first-order approximation of the loss function to construct adversarial samples for the adversary's target classifier g. The algorithm achieves an untargeted attack by performing a single gradient ascent step: x_adv = x + ε · sign(∇_x ℓ_g(x, y)), where ℓ_g(x, y) is the loss function (e.g.
cross-entropy loss) used to train the original model g, y denotes the ground truth label, and the hyper-parameter ε controls the magnitude of the perturbation. A targeted version of it can be done similarly. The optimization-based attack (C&W) produces an adversarial perturbation for a targeted attack based on certain constraints BID1 BID24, as formulated below: min_δ ‖δ‖_p subject to g(x + δ) = t, where the L_p norm penalty ensures that the added perturbation is small. The same optimization procedure can achieve untargeted attacks with a modified constraint g(x + δ) ≠ y.

Figure 1: Generating adversarial examples with spatial transformation: the blue point denotes the coordinate of a pixel in an output adversarial image and the green point is its corresponding pixel in an input image. The flow field in red represents the displacement from the pixels in the adversarial image to the pixels in the input image.

All the existing approaches directly modify pixel values, which may sometimes produce noticeable artifacts. Instead, we aim to smoothly change the geometry of the scene while keeping the original appearance, producing more perceptually realistic adversarial examples. In this section, we first introduce our spatial transformation model and then describe our objective function for generating spatially transformed adversarial examples. We use x_adv^(i) to denote the pixel value of the i-th pixel and the 2D coordinate (u_adv^(i), v_adv^(i)) to denote its location in the adversarial image x_adv. We assume that x_adv^(i) is transformed from the pixel x^(i) of the original image. We use the per-pixel flow (displacement) field f to synthesize the adversarial image x_adv using pixels from the input x. For the i-th pixel within x_adv at the pixel location (u_adv^(i), v_adv^(i)), we optimize the amount of displacement in each image dimension, with the pair denoted by the flow vector f_i := (∆u^(i), ∆v^(i)). Note that the flow vector f_i goes from a pixel x_adv^(i) in the adversarial image to its corresponding pixel x^(i) in the input image. Thus, the location of its corresponding pixel in the input image is (u^(i), v^(i)) = (u_adv^(i) + ∆u^(i), v_adv^(i) + ∆v^(i)). As the coordinates (u^(i), v^(i)) can be fractional numbers and do not necessarily lie on the integer image grid, we use the differentiable bilinear interpolation BID15 to transform the input image with the flow field. We calculate x_adv^(i) as: x_adv^(i) = Σ_{q ∈ N(u^(i), v^(i))} x^(q) (1 − |u^(i) − u^(q)|)(1 − |v^(i) − v^(q)|), (1) where N(u^(i), v^(i)) are the indices of the 4-pixel neighbors at the location (u^(i), v^(i)) (top-left, top-right, bottom-left, bottom-right). We can obtain the adversarial image x_adv by calculating Equation 1 for every pixel x_adv^(i). Note that x_adv is differentiable with respect to the flow field f BID15 BID44. The estimated flow field essentially captures the amount of spatial transformation required to fool the classifier.

Objective function Most of the previous methods constrain the added perturbation to be small with respect to an L_p metric. Here, instead of imposing the L_p norm on pixel space, we introduce a new regularization loss L_flow on the local distortion f, producing higher perceptual quality for adversarial examples. Therefore, the goal of the attack is to generate adversarial examples which can mislead the classifier while minimizing the local distortion introduced by the flow field f. Formally, given a benign instance x, we obtain the flow field f by minimizing the following objective: f* = argmin_f L_adv(x, f) + τ L_flow(f), where L_adv encourages the generated adversarial examples to be misclassified by the target classifier.
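As a concrete illustration of the resampling step in Equation 1, here is a minimal numpy sketch for a grayscale image, with sampling clamped at the image borders; the function name is ours, not the authors' implementation.

```python
import numpy as np

def flow_warp(img, du, dv):
    """Synthesize x_adv from x via a per-pixel flow field (Equation 1 sketch).

    img:    (H, W) grayscale image.
    du, dv: (H, W) flow components; pixel (u, v) of the adversarial image is
            sampled from location (u + du, v + dv) of the input image, using
            bilinear interpolation over its four integer-grid neighbors.
    """
    H, W = img.shape
    u_adv, v_adv = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    u = np.clip(u_adv + du, 0, H - 1)
    v = np.clip(v_adv + dv, 0, W - 1)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, H - 1), np.minimum(v0 + 1, W - 1)
    wu, wv = u - u0, v - v0  # fractional parts = bilinear weights
    return ((1 - wu) * (1 - wv) * img[u0, v0] +
            (1 - wu) * wv * img[u0, v1] +
            wu * (1 - wv) * img[u1, v0] +
            wu * wv * img[u1, v1])

img = np.random.rand(28, 28)
x_adv = flow_warp(img, du=0.3 * np.ones((28, 28)), dv=np.zeros((28, 28)))
```

Returning to the objective, the roles of the two loss terms are as follows.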
L_flow ensures that the spatial transformation distance is minimized to preserve high perceptual quality, and τ balances these two losses. The goal of L_adv is to guarantee the targeted attack g(x_adv) = t, where t is the targeted class, different from the ground truth label y. Recall that we transform the input image x to x_adv with the flow field f (Equation 1). In practice, directly enforcing g(x_adv) = t during optimization is highly non-linear, so we adopt the objective function suggested in BID1: L_adv(x, f) = max( max_{i ≠ t} g(x_adv)_i − g(x_adv)_t, κ ), where g(x) represents the logit output of model g, g(x)_i denotes the i-th element of the logit vector, and κ is used to control the attack confidence level. To compute L_flow, we calculate the sum of spatial movement distances for any two adjacent pixels. Given an arbitrary pixel p and its neighbors q ∈ N(p), we enforce the locally smooth spatial transformation perturbation L_flow based on the total variation BID32: L_flow(f) = Σ_p Σ_{q ∈ N(p)} √( ‖∆u^(p) − ∆u^(q)‖₂² + ‖∆v^(p) − ∆v^(q)‖₂² ). Intuitively, minimizing the spatial transformation can help ensure the high perceptual quality of stAdv, since adjacent pixels tend to move in close directions and distances. We solve the above optimization with the L-BFGS solver BID23.

In this section, we first show adversarial examples generated by the proposed spatial transformation method and analyze the properties of these examples from different perspectives. We then visualize the estimated flows for adversarial examples and show that with small and smooth transformations, the generated adversarial examples can already achieve a high attack success rate against deep networks. We also show that stAdv can preserve a high attack success rate against current defense methods, which motivates more sophisticated defense methods in the future. Finally, we analyze the attention regions of DNNs, to better understand the attack properties of stAdv.

Experiment Setup We set τ as 0.05 for all our experiments. We use confidence κ = 0 for both C&W and stAdv for a fair comparison. We leverage L-BFGS BID23 as our solver with backtracking line search. We show adversarial examples with high perceptual quality for both the MNIST (LeCun & BID21) and CIFAR-10 datasets.

stAdv on MNIST In our experiments, we generate adversarial examples against three target models in the white-box setting on the MNIST dataset. Models A, B, and C are derived from the prior work BID38 and represent different architectures. See Appendix A and TAB4 for more details about their network architectures. TAB0 presents the accuracy on pristine MNIST test data for each model as well as the attack success rate of adversarial examples generated by stAdv on these models. FIG0 shows the adversarial examples against different models, where the original instances appear in the diagonal. Each adversarial example achieves a targeted attack, with the target class shown at the top of the column. It is clear that the generated adversarial examples still appear to be in the same class as the original instance for humans. Another advantage of stAdv compared with traditional attacks is that examples based on stAdv seldom show noise patterns within the adversarial examples. Instead, stAdv smoothly deforms the digits, and since such natural deformation also exists in the dataset digits, humans can barely notice such manipulation.

stAdv on CIFAR-10 For CIFAR-10, we use ResNet-32 and wide ResNet-34 as the target classifiers BID41 BID10 BID28.
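As a brief aside from the experiments: the two loss terms above can be computed in a few lines. The following hedged numpy sketch uses np.diff, which counts each adjacent pixel pair once, whereas the double sum over p and q ∈ N(p) in the definition counts both directions; the two therefore agree up to a factor of two.

```python
import numpy as np

def adv_loss(logits, target, kappa=0.0):
    """C&W-style margin loss: non-positive once the target logit dominates."""
    other = np.max(np.delete(logits, target))
    return max(other - logits[target], -kappa)

def flow_loss(du, dv):
    """Total-variation smoothness of the flow over 4-neighborhoods."""
    loss = 0.0
    for axis in (0, 1):  # vertical and horizontal neighbor differences
        ddu = np.diff(du, axis=axis)
        ddv = np.diff(dv, axis=axis)
        loss += np.sum(np.sqrt(ddu ** 2 + ddv ** 2))
    return loss

logits = np.array([2.0, 0.5, -1.0])
print(adv_loss(logits, target=1))                     # positive: not yet adversarial
print(flow_loss(np.ones((4, 4)), np.zeros((4, 4))))   # constant flow -> 0.0
```

A constant flow (a pure translation) incurs zero L_flow, which is exactly why stAdv is not penalized for transformations that would have a large L_p pixel distance. Returning to the evaluation: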
We show the classification accuracy on pristine CIFAR-10 test data (p) and the attack success rates of adversarial examples generated by stAdv on different models in TAB1. Figure 3 shows the generated examples on CIFAR-10 against different models. The original images are shown in the diagonal. The other images are targeted adversarial examples, with the index of the target class shown at the top of the column. Here we use "0-9" to denote the ground truth labels of the images lying in the diagonal for each corresponding column. These adversarial examples based on stAdv are randomly selected from the instances that can successfully attack the corresponding classifier. Humans can hardly distinguish these adversarial examples from the original instances.

Comparison of different adversarial examples In Figure 4, we show adversarial examples that are target-attacked to the same class ("0" for MNIST and "airplane" for CIFAR-10), which is different from their ground truth. We compare adversarial examples generated from different methods on an MNIST instance, where the digit "0" is misclassified as "2." We can see that the adjacent flows move in a similar direction in order to generate smooth results. The flows are more focused on the edge of the digit, and sometimes these flows move in different directions along the edge, which implies that the object boundary plays an important role in our stAdv optimization. Figure 6 illustrates a similar visualization on CIFAR-10. It shows that the optimized flows often focus on the area of the main object, such as the airplane. We also observe that the magnitudes of flows near the edge are usually larger, which similarly indicates the importance of edges for misleading the classifiers. This observation is consistent with the fact that DNNs extract edge information in the earlier layers for visual recognition tasks BID39. In addition, we visualize the similar flow for the ImageNet dataset BID4 in Figure 7. The top-1 label of the original image in Figure 7(a) is "mountain bike". Figure 7(b)-(d) show targeted adversarial examples generated by stAdv, which have target classes "goldfish," "Maltese dog," and "tabby cat," respectively, and which are predicted as such as the top-1 class. An interesting observation is that, although there are other objects within the image, nearly 90% of the spatial transformation flows tend to focus on the target object, the bike. Different target classes correspond to different directions for these flows, which still fall into a similar area.

To quantify the perceptual realism of stAdv's adversarial examples, we perform a user study with human participants on Amazon Mechanical Turk (AMT). We follow the same perceptual study protocol used in prior image synthesis work BID14. We generate 600 images from an ImageNet-compatible dataset, described in Appendix C. In our study, the participants are asked to choose the more visually realistic image between an adversarial example generated by stAdv and its original image. During each trial, these two images appear side-by-side for 2 seconds. After the images disappear, our participants are given unlimited time to make their decision. To avoid labeling bias, we allow each user to conduct at most 50 trials. For each pair of an original image and its adversarial example, we collect about 5 annotations from different users. In total, we collected 2,740 annotations from 93 AMT users.
Examples generated by our method were chosen as the more realistic in 47.01% ± 1.96% of the trials (perfectly realistic would achieve 50%). This indicates that our adversarial examples are almost indistinguishable from natural images.

Attack success rates of FGSM, C&W, and stAdv examples (columns) under the three adversarial training defenses (rows):
          FGSM     C&W      stAdv
  Adv.    5.04%    7.61%    31.66%
  Ens.    4.65%    8.43%    29.56%
  PGD     14.9%    13.90%   31.6%

Here we generate adversarial examples in the white-box setting and test different defense methods against these samples to evaluate the strength of these attacks under defenses. We mainly focus on the adversarial training defenses due to their state-of-the-art performance. We apply three defense strategies in our evaluation: the FGSM adversarial training (Adv.) BID7, ensemble adversarial training (Ens.) BID38, and projected gradient descent (PGD) adversarial training BID28 methods. For adversarial training purposes, we generate adversarial examples with an L_∞ bound BID1 of 0.3 on MNIST and 8 on CIFAR-10. We test adversarial examples generated against models A, B, and C on MNIST as shown in TAB4, and similarly adversarial examples generated against ResNet32 and wide ResNet34 on CIFAR-10. The results on the MNIST and CIFAR-10 datasets are shown in TAB2. We observe that the three defense strategies can achieve high performance (less than 10% attack success rate) against FGSM and C&W attacks. These defense methods only achieve low defense performance on stAdv, whose attack success rate remains above 30% under all defense strategies. These results indicate that new types of adversarial strategies, such as our spatial transformation based attack, may open new directions for developing better defense systems. However, for stAdv, we cannot use the L_p norm to bound the distance, as translating an image by one pixel may introduce a large L_p penalty. We instead constrain the spatial transformation flow, and we show that our adversarial examples have high perceptual quality in FIG0 and Figure 4 as well as in Section 4.3. We also test our adversarial examples against the 3×3 average pooling restoration mechanism. TAB5 in Appendix B shows the classification accuracy of recovered images after performing a 3 × 3 average filter on different models. We find that the simple 3 × 3 average pooling restoration mechanism can recover the original class from fast gradient sign examples and improve the classification accuracy up to around 70%. Carlini & Wagner have also shown that such a mean blur defense strategy can defend against adversarial examples generated by their attack and improve the model accuracy to around 80% (2017b). From TAB5, we can see that the mean blur defense method can only improve the model accuracy to around 50% on stAdv examples, which means adversarial examples generated by stAdv are more robust compared to other attacks. We also perform a perfect-knowledge adaptive attack against the mean blur defense following the same attack strategy suggested in BID2, where we add the 3 × 3 average pooling layer into the original network and apply stAdv to attack the new network again. We observe that the success rate of the adaptive attack is nearly 100%, which is consistent with Carlini & Wagner's findings (2017b).

In addition to analyzing the adversarial examples themselves, in this section we further characterize these spatially transformed adversarial examples from the perspective of deep neural networks. Here we apply Class Activation Mapping (CAM) BID43, an implicit attention visualization technique for localizing the discriminative regions implicitly detected by a DNN.
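As a reminder of how CAM produces such maps, here is a minimal numpy sketch (shapes and names are illustrative): the final convolutional feature maps are weighted by the classifier weights of the class of interest and summed over channels.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls):
    """CAM sketch: weight the last conv feature maps by the FC weights
    of class `cls` (after global average pooling) and sum over channels.

    feature_maps: (K, H, W) activations of the final conv layer.
    fc_weights:   (num_classes, K) weights of the final FC layer.
    Returns an (H, W) attention map for class `cls`.
    """
    cam = np.tensordot(fc_weights[cls], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for display

fmap = np.random.rand(8, 7, 7)
w = np.random.randn(10, 8)
heat = class_activation_map(fmap, w, cls=3)
```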
We use it to show the attention of the target ImageNet inception_v3 model BID36 for both original images and generated adversarial examples. In addition, we also compare and visualize the attention regions of both the naturally trained and the adversarially trained inception_v3 models on adversarial images generated by different attack algorithms (FIG5). The ground truth top-1 label is "cinema," so the attention region for the original image (FIG5) includes both the tower and building regions. However, when the adversarial examples are target-attacked into the adversarial label "missile," the attention region focuses on only the tower for all the attack algorithms, as shown in FIG5(b)-(d), with slightly different attention region sizes. More interestingly, we also test these adversarial examples on the public adversarially trained robust inception_v3 model. The results appear in FIG5(f)-(h). This time, the attention regions are drawn to the building again for both the FGSM and C&W methods, which are close to the attention regions of the original image. The top-1 label for FIG5(f) and (g) is again the ground truth "cinema", which means both FGSM and C&W fail to attack the robust model. However, FIG5(h) is still misclassified as "missile" under the robust model, and the CAM visualization shows that the attention region still focuses on the tower. This example again implies that adversarial examples generated by stAdv are challenging for the current "robust" ImageNet models to defend against.

Different from the previous works that generate adversarial examples by directly manipulating pixel values, in this work we propose a new type of perturbation based on spatial transformation, which aims to preserve high perceptual quality for adversarial examples. We have shown that adversarial examples generated by stAdv are more difficult for humans to distinguish from original instances. We also analyze the attack success rate of these examples under existing defense methods and demonstrate they are harder to defend against, which opens new directions for developing more robust defense algorithms. Finally, we visualize the attention regions of DNNs on our adversarial examples to better understand this new attack.

A MODEL ARCHITECTURES
(The network architectures of Models A, B, and C are given in TAB4.)

B Here we evaluated adversarial examples generated by stAdv against the 3 × 3 average pooling restoration mechanism suggested in prior work. TAB5 shows the classification accuracy of recovered images after performing 3 × 3 average pooling on different models.

C ImageNet-compatible. We use benign images from the DEV set of the NIPS 2017 targeted adversarial attack competition. This competition provided a dataset compatible with ImageNet and containing target labels for a targeted attack. We generate targeted adversarial examples for the target inception_v3 model. In Figure 10 below, we show the original images on the left with the correct label, and we show adversarial examples generated by stAdv on the right with the target label. MNIST. We generate adversarial examples for the target Model B. In Figure 11, we show original images with ground truth classes 0-9 in the diagonal, and we show adversarial examples generated by stAdv targeting the class of the original image within that column. CIFAR-10. We generate adversarial examples for the target ResNet-32 model. In FIG0, we show the original images in the diagonal, and we show adversarial examples generated by stAdv targeting the class of the original image within that column.
Table 6 shows the magnitude of the generated flow in terms of total variation (TV) and L_2 distance on the ImageNet-compatible, MNIST, and CIFAR-10 sets, respectively. These metrics are calculated by the following equations, where n is the number of pixels: TV = (1/n) Σ_{all pixels p} Σ_{q ∈ N(p)} √( ‖∆u^(p) − ∆u^(q)‖₂² + ‖∆v^(p) − ∆v^(q)‖₂² ), i.e., the per-pixel average of L_flow, and the L_2 distance is computed analogously as the per-pixel average flow magnitude, (1/n) Σ_p √( (∆u^(p))² + (∆v^(p))² ).
We propose a new approach for generating adversarial examples based on spatial transformation, which produces perceptually realistic examples compared to existing attacks.
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous. Motivated by the project of designing a game AI for King of Glory (KOG), one of the world's most popular mobile games, we consider the scenario with a discrete-continuous hybrid action space. To directly apply existing DRL frameworks, existing approaches either approximate the hybrid space by a discrete set or relax it into a continuous set, which is usually less efficient and robust. In this paper, we propose a parametrized deep Q-network (P-DQN) for the hybrid action space without approximation or relaxation. Our algorithm combines DQN and DDPG and can be viewed as an extension of the DQN to hybrid actions. The empirical study on the game KOG validates the efficiency and effectiveness of our method.

In recent years, the exciting field of deep reinforcement learning (DRL) has witnessed striking empirical achievements in complicated sequential decision making problems that were once believed unsolvable. One active area of the application of DRL methods is to design artificial intelligence (AI) for games. The success of DRL in the game of Go provides a promising methodology for game AI. In addition to the game of Go, DRL has been widely used in other games such as Atari BID19, Robot Soccer BID8 BID17, and Torcs to achieve super-human performances. However, most existing DRL methods only handle environments with actions chosen from a set which is either finite and discrete (e.g., Go and Atari) or continuous (e.g., MuJoCo and Torcs). For example, the algorithms for discrete action spaces include deep Q-network (DQN) BID18, Double DQN, and A3C BID20; the algorithms for continuous action spaces include deterministic policy gradients (DPG) BID29 and its deep version DDPG. Motivated by the applications in Real Time Strategy (RTS) games, we consider the reinforcement learning problem with a discrete-continuous hybrid action space. Different from the completely discrete or continuous actions that are widely studied in the existing literature, in our setting the action is defined by the following hierarchical structure. We first choose a high level action k from a discrete set {1, 2, · · ·, K}; upon choosing k, we further choose a low level parameter x_k ∈ X_k which is associated with the k-th high level action. Here X_k is a continuous set for all k ∈ {1, . . ., K}. Therefore, we focus on a discrete-continuous hybrid action space A = { (k, x_k) : x_k ∈ X_k, 1 ≤ k ≤ K }. To apply existing DRL approaches on this hybrid action space, two straightforward ideas include:
• Approximate A by a finite discrete set. We could approximate each X_k by a discrete subset, which, however, might lose the natural structure of X_k. Moreover, when X_k is a region in the Euclidean space, establishing a good approximation usually requires a huge number of discrete actions.
• Relax A into a continuous set. To apply existing DRL frameworks with continuous action spaces, BID8 define the following approximate space Ã = { (f_1, . . ., f_K, x_1, . . ., x_K) : f_k ∈ F_k, x_k ∈ X_k for all 1 ≤ k ≤ K }, where F_k ⊆ ℝ. Here f_1, f_2, . . ., f_K are used to select the discrete action, either deterministically (by picking argmax_i f_i) or randomly (with probability softmax(f)). Compared with the original action space A, Ã might significantly increase the complexity of the action space. Furthermore, continuous relaxation can also lead to unnecessary confusion by over-parametrization.
For example, (1, 0, · · ·, 0, x_1, x_2, x_3, · · ·, x_K) ∈ Ã and (1, 0, · · ·, 0, x_1, x′_2, x′_3, · · ·, x′_K) ∈ Ã indeed represent the same action (1, x_1) in the original space A. In this paper, we propose a novel DRL framework, namely parametrized deep Q-network learning (P-DQN), which directly works on the discrete-continuous hybrid action space without approximation or relaxation. Our method can be viewed as an extension of the famous DQN algorithm to hybrid action spaces. Similar to deterministic policy gradient methods, to handle the continuous parameters within actions, we first define a deterministic function which maps the state and each discrete action to its corresponding continuous parameter. Then we define an action-value function which maps the state and finite hybrid actions to real values, where the continuous parameters are obtained from the deterministic function in the first step. With the merits of both DQN and DDPG, we expect our algorithm to find the optimal discrete action as well as avoid exhaustive search over continuous action parameters. To evaluate the empirical performances, we apply our algorithm to King of Glory (KOG), which is one of the most popular online games worldwide, with over 200 million active users per month. KOG is a multi-agent online battle arena (MOBA) game on mobile devices, which requires players to take hybrid actions to interact with other players in real-time. Empirical study indicates that P-DQN is more efficient and robust than BID8's method that relaxes A into a continuous set and applies DDPG.

In reinforcement learning, the environment is usually modeled by a Markov decision process (MDP) M = {S, A, p, p_0, γ, r}, where S is the state space, A is the action space, p is the Markov transition probability distribution, p_0 is the probability distribution of the initial state, r(s, a) is the reward function, and γ ∈ (0, 1) is the discount factor. An agent interacts with the MDP sequentially as follows. At the t-th step, suppose the MDP is at state s_t ∈ S and the agent selects an action a_t ∈ A; then the agent observes an immediate reward r(s_t, a_t) and the next state s_{t+1} ∼ p(s_{t+1} | s_t, a_t). A stochastic policy π maps each state to a probability distribution over A; that is, π(a|s) is defined as the probability of selecting action a at state s. Whereas a deterministic policy µ: S → A maps each state to a particular action in A. Let R_t = Σ_{j ≥ t} γ^{j−t} r(s_j, a_j) be the cumulative discounted reward starting from time-step t. We define the state-value function and the action-value function of policy π as V^π(s) = E(R_t | S_t = s; π) and Q^π(s, a) = E(R_t | S_t = s, A_t = a; π), respectively. Moreover, we define the optimal state- and action-value functions as V* = sup_π V^π and Q* = sup_π Q^π, respectively, where the supremum is taken over all possible policies. The goal of the agent is to find a policy that maximizes the expected total discounted reward J(π) = E(R_0 | π), which can be achieved by estimating Q*. Broadly speaking, reinforcement learning algorithms can be categorized into two classes: value-based methods and policy-based methods. Value-based methods first estimate Q* and then output the greedy policy with respect to that estimate, whereas policy-based methods directly optimize J(π) as a functional of π. The Q-learning algorithm BID36 is based on the Bellman equation Q*(s, a) = E_{s′} [ r(s, a) + γ max_{a′ ∈ A} Q*(s′, a′) | s, a ], (2.1) which has Q* as the unique solution.
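To see the fixed-point property concretely, here is a tiny value-iteration sketch on a hypothetical two-state, two-action MDP: repeatedly applying the Bellman optimality operator converges to Q*.

```python
import numpy as np

# Toy MDP with hypothetical dynamics: P[s, a, s'] and r(s, a).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = 0.9

Q = np.zeros((2, 2))
for _ in range(500):
    # (TQ)(s, a) = r(s, a) + gamma * sum_s' P(s'|s, a) * max_a' Q(s', a')
    Q = R + gamma * P @ Q.max(axis=1)
print(Q)  # approximately Q*, the unique fixed point of T
```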
In the tabular setting, the algorithm updates the Q-function by iteratively applying the sample counterpart of the Bellman equation, Q(s, a) ← Q(s, a) + α [ r(s, a) + γ max_{a′ ∈ A} Q(s′, a′) − Q(s, a) ], where α > 0 is the stepsize and s′ is the next state observed given the current state s and action a. However, when the state space S is so large that it is impossible to store all the states in memory, function approximation for Q* is applied. Deep Q-Networks (DQN) BID19 approximate Q* using a neural network Q(s, a; w) ≈ Q*(s, a), where w denotes the network weights. In the t-th iteration, the DQN updates the parameters using the gradient of the least squares loss function L_t(w) = [ r(s_t, a_t) + γ max_{a′ ∈ A} Q(s_{t+1}, a′; w_t) − Q(s_t, a_t; w) ]². (2.2) In practice, DQN is trained with techniques such as experience replay and asynchronous stochastic gradient descent methods BID20, which enjoy great empirical success.

In addition to the value-based methods, the policy-based methods directly model the optimal policy. In specific, let π be any policy. We write p_t(·|s; π) as the distribution of S_t given S_1 = s with actions executed according to policy π. We define the discounted probability distribution ρ^π by ρ^π(s′) = ∫_S Σ_{t ≥ 1} γ^{t−1} p_t(s′ | s; π) p_0(s) ds. Then the objective of policy-based methods is to find a policy that maximizes the expected reward J(π) = E_{s ∼ ρ^π, a ∼ π(·|s)} [ r(s, a) ]. Let π_θ be a stochastic policy parametrized by θ ∈ Θ. For example, π_θ could be a neural network in which the last layer is a softmax layer with |A| neurons. The stochastic gradient methods aim at finding a parameter θ that maximizes J(π_θ) via gradient descent. The stochastic policy gradient theorem BID33 states that ∇_θ J(π_θ) = E_{s ∼ ρ^{π_θ}, a ∼ π_θ(·|s)} [ ∇_θ log π_θ(a|s) · Q^{π_θ}(s, a) ]. (2.3) The policy gradient algorithm iteratively updates θ using estimates of (2.3). For example, the REINFORCE algorithm BID37 updates θ using ∇_θ log π_θ(a_t|s_t) · r_t. Moreover, the actor-critic methods use another neural network Q(s, a; w) to estimate the value function Q^{π_θ}(s, a) associated with policy π_θ. This algorithm combines the value-based and policy-based perspectives together, and was recently used to achieve superhuman performance in the game of Go BID31.

When the action space is continuous, value-based methods will no longer be computationally tractable because of taking the maximum over the action space A in (2.2), which in general cannot be computed efficiently. The reason is that the neural network Q(s, a; w) is nonconvex when viewed as a function of a; max_{a ∈ A} Q(s, a; w) is the global maximum of a nonconvex function, which is NP-hard to obtain in the worst case. To resolve this issue, the continuous Q-learning BID6 rewrites the action value function as Q(s, a) = V(s) + A(s, a), where V(s) is the state value function and A(s, a) is the advantage function that encodes the relative advantage of each action. These functions are approximated by neural networks V(s; θ_V) and A(s, a; θ_A), respectively, where θ_V and θ_A are network weights. The action value function is given by Q(s, a; θ_V, θ_A) = V(s; θ_V) + A(s, a; θ_A). Then in the t-th iteration, the continuous Q-learning updates θ_V and θ_A by taking a gradient step on the least squares loss function ℓ_t(θ_V, θ_A) = [ r(s_t, a_t) + γ V(s_{t+1}; θ_V) − Q(s_t, a_t; θ_V, θ_A) ]². Moreover, it is also possible to adapt policy-based methods to continuous action spaces by considering deterministic policies. Let µ_θ: S → A be a deterministic policy. Similar to (2.3), the deterministic policy gradient (DPG) theorem BID29 states that ∇_θ J(µ_θ) = E_{s ∼ ρ^{µ_θ}} [ ∇_θ µ_θ(s) ∇_a Q^{µ_θ}(s, a) |_{a = µ_θ(s)} ]. (2.4) Furthermore, this deterministic version of the policy gradient theorem can be viewed as the limit of (2.3) with the variance of π_θ going to zero.
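A minimal illustration of the deterministic gradient in (2.4) on a hypothetical one-dimensional task, where Q(s, a) = −(a − s)² is known in closed form, so the linear policy µ_θ(s) = θs should learn θ → 1:

```python
import numpy as np

def grad_a_Q(s, a):
    return -2.0 * (a - s)      # gradient of Q(s, a) = -(a - s)^2 w.r.t. a

theta, lr = 0.0, 0.05
rng = np.random.default_rng(0)
for _ in range(200):
    s = rng.uniform(-1, 1)
    a = theta * s              # a = mu_theta(s)
    # chain rule of (2.4): dJ/dtheta = (dmu/dtheta) * (dQ/da) at a = mu_theta(s)
    theta += lr * s * grad_a_Q(s, a)
print(round(theta, 2))         # close to 1.0
```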
Based on (2.4), the DPG algorithm BID29 and the deep deterministic policy gradient (DDPG) algorithm are proposed.

General reinforcement learning There is a huge body of literature in reinforcement learning; we refer readers to standard textbooks, e.g., Szepesvári, for a detailed introduction. Combined with the recent advancement of deep learning BID5, deep reinforcement learning has become a blossoming field of research with a plethora of new algorithms which achieve surprising empirical success in a variety of applications that were previously considered extremely difficult and challenging.

Finite discrete action space methods For reinforcement learning problems with finite action spaces, BID18 propose the DQN algorithm, which first combines deep neural networks with the classical Q-learning algorithm BID36. A variety of extensions have been proposed to improve DQN, including Double DQN, dueling DQN BID35, bootstrapped DQN BID23, asynchronous DQN BID20, and averaged-DQN. In terms of policy-based methods, BID33 propose the REINFORCE algorithm, which is the basic form of policy gradient. An important extension is the actor-critic method BID13, whose asynchronous deep version A3C BID20 produces the state-of-the-art performances on the Arcade Learning Environment (ALE) benchmark BID1.

Continuous action space methods Moreover, for DRL on continuous action spaces, BID29 propose the deterministic policy gradient algorithm and deterministic actor-critic algorithms. This work is further extended by the DDPG algorithm, which is a model-free actor-critic algorithm using deep neural networks to parametrize the policies. A related line of work is policy optimization methods, which improve the policy gradient method using novel optimization techniques. These methods include natural gradient descent BID12, trust region optimization BID27, proximal gradient descent BID28, mirror descent BID21, and entropy regularization BID22.

Hybrid actions A related body of literature is the recent work on reinforcement learning with a structured action space, which contains finitely many actions, each parametrized by a continuous parameter. To handle such parametrized actions, BID8 apply the DDPG algorithm on the relaxed action space directly, and BID17 propose a learning framework updating the parameters for discrete actions and continuous parameters alternately.

Game AI Recently, remarkable advances have been made in building AI bots for computer games using deep reinforcement learning. These games include Atari Games, a collection of video games, Texas Hold'em, a multi-player poker game, and Doom, a first-person shooter game. See BID18; BID9; BID15; BID2 for details, and see BID11 for a comprehensive survey. More notably, the computer Go agent AlphaGo achieves super-human performances by defeating the human world champion Lee Sedol. Two more complicated classes of games are the real-time strategy (RTS) games and MOBA games. These are multi-agent games which involve searching within huge state and action spaces that are possibly continuous. Due to the difficulty of these problems, current research for these games is rather inadequate, with most existing work considering specific scenarios instead of the full-fledged RTS or MOBA games. See, e.g., BID4; BID24 for a recent attempt at applying DRL methods to RTS games.

This section introduces the proposed framework to handle applications with a hybrid discrete-continuous action space.
We consider an MDP with a parametrized action space A, which consists of K discrete actions, each associated with a continuous parameter. In specific, we assume that any action a ∈ A can be written as a = (k, x_k), where k ∈ {1, . . ., K} is the discrete action, and x_k ∈ X_k is a continuous parameter associated with the k-th discrete action. Thus an action a is a hybrid of discrete and continuous components, with the value of the continuous action determined after the discrete action is chosen. Then the parametrized action space A can be written as A = { (k, x_k) : x_k ∈ X_k, k ∈ {1, . . ., K} }. (4.1) In the sequel, we denote {1, . . ., K} by [K] for short. For the action space A in (4.1), we denote the action value function by Q(s, a) = Q(s, k, x_k), where s ∈ S, 1 ≤ k ≤ K, and x_k ∈ X_k. Let k_t be the discrete action selected at time t and let x_{k_t} be the associated continuous parameter. Then the Bellman equation becomes Q(s_t, k_t, x_{k_t}) = E_{r_t, s_{t+1}} [ r_t + γ max_{k ∈ [K]} sup_{x_k ∈ X_k} Q(s_{t+1}, k, x_k) | s_t, k_t, x_{k_t} ]. (4.2) Here, inside the conditional expectation on the right-hand side of (4.2), we first solve x*_k = argsup_{x_k ∈ X_k} Q(s_{t+1}, k, x_k) for each k ∈ [K], and then take the largest Q(s_{t+1}, k, x*_k). Note that taking the supremum over the continuous space X_k is computationally intractable. However, the right-hand side of (4.2) can be evaluated efficiently provided x*_k is given. To elaborate this idea, first note that when the function Q is fixed, for any s ∈ S and k ∈ [K], we can view x_k^Q(s) = argsup_{x_k ∈ X_k} Q(s, k, x_k) (4.3) as a function of state s. That is, we identify (4.3) as a function x_k^Q: S → X_k. Then we can rewrite the Bellman equation in (4.2) as Q(s_t, k_t, x_{k_t}) = E_{r_t, s_{t+1}} [ r_t + γ max_{k ∈ [K]} Q(s_{t+1}, k, x_k^Q(s_{t+1})) | s_t, k_t, x_{k_t} ]. (4.4) Note that this new Bellman equation resembles the classical Bellman equation in (2.1) with A = [K]. Similar to the deep Q-networks, we use a deep neural network Q(s, k, x_k; ω) to approximate Q(s, k, x_k), where ω denotes the network weights. Moreover, for such a Q(s, k, x_k; ω), we approximate x_k^Q(s) in (4.3) with a deterministic policy network x_k(·; θ): S → X_k, where θ denotes the network weights of the policy network. That is, when ω is fixed, we want to find θ such that Q(s, k, x_k(s; θ); ω) ≈ sup_{x_k ∈ X_k} Q(s, k, x_k; ω) for each k ∈ [K].

Remark 4.1. Readers who are familiar with the work by BID8, which also claims to handle discrete-continuous hybrid action spaces, may be curious about its difference from the proposed P-DQN. The key differences are as follows.
• In BID8, the discrete action types are parametrized as some continuous values, say f. And the discrete action that is actually executed is chosen via k = argmax_i f(i). Such a trick actually turns the hybrid action space into a continuous action space, upon which the classical DDPG algorithm can be applied. However, in our framework, the discrete action type is chosen directly by maximizing the action's Q value explicitly.
• The Q network in BID8 uses the artificial parameters f as input, which makes it an action-value function estimator of the current policy (Q^π). While in our framework, the Q network is actually an approximate estimator of the optimal policy's action-value function (Q*).
• We note that P-DQN is an off-policy method that can use historical data, while it is hard to use historical data in BID8 because there is only the discrete action k without the parameters f.

Figure 1: Illustration of the networks of P-DQN and DDPG BID8. (a) Network of P-DQN. (b) Network of DDPG. P-DQN selects the discrete action type by maximizing Q values explicitly; while in DDPG, the discrete action with the largest f, which can be seen as a continuous parameterization of K discrete action types, is chosen.
Also, in P-DQN the state and action parameters are fed into the Q-network, which outputs K action values, one for each action type; while in DDPG, the continuous parameterization f, instead of the actual action k taken, is fed into the Q-network.

Suppose that θ satisfies (4.4); then, similar to DQN, we could estimate ω by minimizing the mean-squared Bellman error via gradient descent. In specific, in the t-th step, let ω_t and θ_t be the weights of the value network and the deterministic policy network, respectively. To incorporate multi-step algorithms, for a fixed n ≥ 1, we define the n-step target y_t by y_t = Σ_{i=0}^{n−1} γ^i r_{t+i} + γ^n max_{k ∈ [K]} Q(s_{t+n}, k, x_k(s_{t+n}; θ_t); ω_t). (5.1) We define the least squares loss function for ω by Q_t(ω) = ½ [ Q(s_t, k_t, x_{k_t}; ω) − y_t ]². (5.2) Moreover, since we aim to find θ that maximizes Q(s, k, x_k(s; θ); ω) with ω fixed, we define the loss function for θ by Θ_t(θ) = − Σ_{k=1}^K Q(s_t, k, x_k(s_t; θ); ω_t). (5.3) Then we update ω_t and θ_t by gradient-based optimization methods. Moreover, the gradients are given by ∇_θ Θ_t(θ) = − Σ_{k=1}^K ∇_θ x_k(s_t; θ) ∇_x Q(s_t, k, x_k; ω_t) |_{x_k = x_k(s_t; θ)}, (5.4) and ∇_ω Q_t(ω) = [ Q(s_t, k_t, x_{k_t}; ω) − y_t ] ∇_ω Q(s_t, k_t, x_{k_t}; ω). (5.5) Here ∇_x Q(s, k, x_k; ω) and ∇_ω Q(s, k, x_k; ω) are the gradients of the Q-network with respect to its third argument and fourth argument, respectively. By (5.5) and (5.4) we update the parameters using stochastic gradient methods. In addition, note that in the ideal case, we would minimize the loss function Θ_t(θ) in (5.3) when ω_t is fixed. From the results in stochastic approximation methods BID14, we could approximately achieve such a goal in an online fashion via a two-timescale update rule BID3. In specific, we update ω with a stepsize α_t that is asymptotically negligible compared with the stepsize β_t for θ. In addition, for the validity of stochastic approximation, we require {α_t, β_t} to satisfy the Robbins-Monro condition BID25. We present the P-DQN algorithm with experience replay in Algorithm 1.

Algorithm 1: P-DQN with experience replay
  Input: stepsizes {α_t, β_t}_{t ≥ 0}, exploration parameter ε, minibatch size B, the replay memory D, and a probability distribution µ over the action space A for exploration.
  Initialize network weights ω_1 and θ_1.
  for t = 1, 2, . . ., T do
    Compute action parameters x_k ← x_k(s_t, θ_t).
    Select action a_t = (k_t, x_{k_t}) according to the ε-greedy policy: a_t = a sample from distribution µ with probability ε; a_t = (k_t, x_{k_t}) such that k_t = argmax_{k ∈ [K]} Q(s_t, k, x_k; ω_t) with probability 1 − ε.
    Take action a_t, observe reward r_t and the next state s_{t+1}.
    Store the transition into D, sample B transitions from D, and define the targets {y_b}_{b ∈ [B]} as in (5.1).
    Use the data {y_b, s_b, a_b}_{b ∈ [B]} to compute the stochastic gradients ∇_ω Q_t(ω) and ∇_θ Θ_t(θ) defined in (5.5) and (5.4).
    Update the parameters by ω_{t+1} ← ω_t − α_t · ∇_ω Q_t(ω_t) and θ_{t+1} ← θ_t − β_t · ∇_θ Θ_t(θ_t).
  end for

Note that this algorithm requires a distribution µ defined on the action space A for exploration. In each step, with probability ε, the agent samples a random action from µ; otherwise, it takes the greedy action with respect to the current value function. In practice, if each X_k is a compact set in the Euclidean space (as in our case), µ could be defined as the uniform distribution over A. In addition, as in the DDPG algorithm, we can also add additive noise to the continuous part of the actions for exploration. Moreover, we use experience replay BID18 to reduce the dependencies among the samples, which can be replaced by more sample-efficient methods such as prioritized replay. Moreover, we note that our P-DQN algorithm can easily incorporate asynchronous gradient descent to speed up the training process. Similar to the asynchronous n-step DQN in BID20, we consider a centralized distributed training framework where each process can compute its local gradient and synchronize with a global parameter server.
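Concretely, the quantities each worker computes — whether a single process with replay or a local process in the distributed setting — are the two losses above. Here is a hedged, runnable sketch with toy stand-in networks (the real Q(s, k, x_k; ω) and x_k(s; θ) are deep networks; all names here are illustrative).

```python
import numpy as np

def pdqn_losses(batch, q_net, param_net, omega, theta, gamma=0.99):
    """Sketch of the two P-DQN losses (5.2) and (5.3) on a minibatch.

    q_net(s, k, x, w) -> scalar Q(s, k, x; w); param_net(s, th) -> [x_1..x_K].
    Gradients of these two quantities drive the two-timescale updates of
    omega (stepsize alpha_t) and theta (stepsize beta_t).
    """
    q_loss, theta_loss = 0.0, 0.0
    for (s, k, x, r, s_next, done) in batch:
        params_next = param_net(s_next, theta)
        target = r if done else r + gamma * max(
            q_net(s_next, j, params_next[j], omega)
            for j in range(len(params_next)))
        q_loss += 0.5 * (q_net(s, k, x, omega) - target) ** 2   # as in (5.2)
        params = param_net(s, theta)
        theta_loss -= sum(q_net(s, j, params[j], omega)         # as in (5.3)
                          for j in range(len(params)))
    return q_loss / len(batch), theta_loss / len(batch)

# Toy stand-ins: Q linear in the state plus the action parameter; x_k linear in s.
rng = np.random.default_rng(0)
omega = rng.normal(size=(3, 4))           # one weight row per action type (K=3)
theta = rng.normal(size=(3, 4))
q_net = lambda s, k, x, w: float(w[k] @ s + x)
param_net = lambda s, th: list(th @ s)
batch = [(np.ones(4), 0, 0.5, 1.0, np.zeros(4), False)]
print(pdqn_losses(batch, q_net, param_net, omega, theta))
```

Note how the discrete maximization over k in the target is an explicit max over the K Q-values, which is exactly what distinguishes P-DQN from the continuous relaxation of BID8.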
In specific, each local process runs an independent game environment to generate transition trajectories and uses its own transitions to compute gradients with respect to ω and θ. These local gradients are then aggregated across multiple processes to update the global parameters. Note that these local stochastic gradients are independent. Thus tricks such as experience replay can be avoided in the distributed setting. Moreover, aggregating independent stochastic gradients decreases the variance of gradient estimation, which yields better algorithmic stability. We present the asynchronous P-DQN algorithm in Algorithm 2. For simplicity, here we only lay out the algorithm for each local process, which fetches ω and θ from the parameter server and computes the gradients. The parameter server stores the global parameters ω and θ. It updates the global parameters using the gradients sent from the local processes. In addition, we use RMSProp BID10 to update the network parameters, which is shown to be more stable in practice.

Algorithm 2: Asynchronous P-DQN (each local process)
  Input: exploration parameter ε, a probability distribution µ over the action space A for exploration, the maximum length of multi-step returns t_max, and the maximum number of iterations N_max.
  Initialize the global shared parameters ω and θ.
  Set the global shared counter N_step = 0.
  Initialize the local step counter t ← 1.
  repeat
    Clear local gradients dω ← 0, dθ ← 0.
    t_start ← t.
    Synchronize the local parameters ω and θ from the parameter server.
    repeat
      Observe state s_t and let x_k ← x_k(s_t, θ).
      Select action a_t = (k_t, x_{k_t}) according to the ε-greedy policy: a_t = a sample from distribution µ with probability ε; a_t = (k_t, x_{k_t}) such that k_t = argmax_{k ∈ [K]} Q(s_t, k, x_k; ω) with probability 1 − ε.
      Take action a_t, observe reward r_t and the next state s_{t+1}.
      t ← t + 1; N_step ← N_step + 1.
    until s_t is the terminal state or t − t_start = t_max
    Define the target y = 0 for terminal s_t (and y = max_{k ∈ [K]} Q(s_t, k, x_k(s_t; θ); ω) otherwise), and accumulate the multi-step gradients dω and dθ backward over the segment using (5.5) and (5.4).
    Update the global θ and ω using dθ and dω with RMSProp BID10.
  until N_step > N_max

The game King of Glory is a MOBA game, which is a special form of the RTS game where the players are divided into two opposing teams fighting against each other. Each team has a team base located in either the bottom-left or the top-right corner, which is guarded by three towers on each of the three lanes. The towers can attack the enemies when they are within their attack range. Each player controls one hero, which is a powerful unit that is able to move, kill, perform skills, and purchase equipment. The goal of the heroes is to destroy the base of the opposing team. In addition, for both teams, there are computer-controlled units spawned periodically that march towards the opposing base in all the three lanes. These units can attack the enemies but cannot perform skills or purchase equipment. An illustration of the map is in FIG2, where the blue or red circles on each lane are the towers. During game play, the heroes advance their levels and obtain gold by killing units and destroying the towers. With gold, the heroes are able to purchase equipment such as weapons and armor to enhance their power. In addition, by upgrading to a new level, a hero is able to improve its unique skills. Whereas when a hero is killed by the enemy, it has to wait for some time to be reborn. In this game, each team contains one, three, or five players. The five-versus-five mode is the most complicated mode, which requires strategic collaboration among the five players.
In contrast, the one-versus-one mode, which is called solo, only depends on the player's control of a single hero. In a solo game, only the middle lane is active; the two players move along the middle lane to fight against each other. The map and a screenshot of a solo game are given in FIG2(b) and (c), respectively. In our experiments, we focus on the solo mode. We emphasize that a typical solo game lasts about 10 to 20 minutes, in which each player must make instantaneous decisions. Moreover, the players have to make different types of actions including attacking, moving, and purchasing. Thus, as a reinforcement learning problem, it has four main difficulties: first, the state space has huge capacity; second, since there are various kinds of actions, the action space is complicated; third, the reward function is not well defined; and fourth, heuristic search algorithms are not feasible since the game is in real-time. Therefore, although we consider the simplest mode of King of Glory, it is still a challenging game for artificial intelligence.

In this section, we apply the P-DQN algorithm to the solo mode of King of Glory. In our experiments, we play against the default AI hero Lu Ban provided by the game, which is a shooter with a long attack range. To evaluate the performance, we compare our algorithm with the DDPG algorithm BID8 under fair conditions. In our experiment, the state of the game is represented by a 179-dimensional feature vector which is manually constructed using the output from the game engine. These features consist of two parts. The first part is the basic attributes of the two heroes, the computer-controlled units, and buildings such as the towers and the bases of the two teams. For example, the attributes of the heroes include Health Point, Magic Point, Attack Damage, Armor, Magic Power, Physical Penetration/Resistance, and Magic Penetration/Resistance, and the attributes of the towers include Health Point and Attack Damage. The second component of the features is the relative positions of other units and buildings with respect to the hero controlled by the P-DQN player as well as the attacking relations between other units. We note that these features are directly extracted from the game engine without sophisticated feature engineering. We conjecture that the overall performance could be improved with a more carefully engineered set of features. We simplify the actions of a hero into K = 6 discrete action types: Move, Attack, UseSkill1, UseSkill2, UseSkill3, and Retreat. Some of the actions may have additional continuous parameters to specify the precise behavior. For example, when the action type is k = Move, the direction of movement is given by the parameter x_k = α, where α ∈ [0, 2π]. Recall that each hero's skills are unique. For Lu Ban, the first skill is to throw a grenade at some specified location, the second skill is to launch a missile in a particular direction, and the last skill is to call an airship to fly in a specified direction. A complete list of actions as well as the associated parameters is given in TAB0. The ultimate goal of a solo game is to destroy the opponent's base. However, the final result is only available when the game terminates. Using such information as the reward for training might not be very effective, as it is very sparse and delayed. In practice, we manually design the rewards using information from each frame. Specifically, we define a variety of statistics as follows.
(In the sequel, we use subscript 0 to represent the attributes of our side and 1 to represent those of the opponent.)
• Gold difference GD = Gold_0 − Gold_1. This statistic measures the difference in gold gained from killing the enemy hero and soldiers and from destroying the towers of the opposing team. Gold can be used to buy weapons and armor, which enhance the offensive and defensive attributes of the hero. Using this value as a reward encourages the hero to gain more gold.
• Health Point difference HPD = HeroRelativeHP_0 − HeroRelativeHP_1. This statistic measures the difference in Health Point between the two competing heroes. A hero with higher Health Point can bear more severe damage, while a hero with lower Health Point is more likely to be killed. Using this value as a reward encourages the hero to avoid attacks and last longer before being killed by the enemy.
• Kill/Death KD = Kills_0 − Kills_1. This statistic measures the historical performance of the two heroes. A hero that is killed multiple times is usually considered more likely to lose the game. Using this value as a reward encourages the hero to kill the opponent and avoid death.
• Tower/Base HP difference THP = TowerRelativeHP_0 − TowerRelativeHP_1 and BHP = BaseRelativeHP_0 − BaseRelativeHP_1. These two statistics measure the health difference of the towers and bases of the two teams. Incorporating them into the reward encourages our hero to attack the towers of the opposing team and to defend its own towers.
• Towers destroyed TD = AliveTower_0 − AliveTower_1. This counts the difference in standing towers and rewards the hero when it successfully destroys the opponent's towers.
• Winning game W = AliveBase_0 − AliveBase_1. This value indicates the winning or losing of the game.
• Moving-forward reward MF = x + y, where (x, y) is the coordinate of Hero 0. This value is used as part of the reward to guide our hero to move forward and compete actively in the battlefield.
The overall reward is calculated as a weighted sum of the time-differenced statistics defined above. Specifically, r_t = 0.5 × 10⁻⁵ (MF_t − MF_{t−1}) + 0.001 (GD_t − GD_{t−1}) + 0.5 (HPD_t − HPD_{t−1}) + …, with analogous weighted difference terms for the remaining statistics (the rest of the formula is elided in this copy). The coefficients are set roughly inversely proportional to the scale of each statistic, and we note that our algorithm is not very sensitive to changes of these coefficients within a reasonable range. In the experiments, we use the default parameters of skills provided by the game environment (usually pointing to the opponent hero's location); we found that this simplification does not affect the overall performance of our agent. In addition, to deal with the periodicity of the direction of movement, we represent the direction by (cos(α), sin(α)) and learn a normalized two-dimensional vector instead of an angle (in practice, we add a normalization layer at the end to ensure this). Moreover, the 6 discrete actions are not always usable: a skill may not have been leveled up yet, the hero may lack Magic Point (MP), or the skill may be in cool-down (CD). To deal with this, we restrict the maximization max_{k∈[K]} to the usable actions k when selecting the action to perform and when calculating the multi-step target as in Equation 5.1. For the network structure, recall that we use a 179-dimensional feature vector as the state. We set both the value network and the policy network to be multi-layer fully-connected deep networks, each with hidden layers of sizes 256, 128, and 64 and the ReLU activation function.
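As a small sketch of the reward shaping just described: only the MF, GD, and HPD coefficients appear explicitly in the truncated formula above, so the sketch carries only those; the remaining terms of the weighted sum are elided.

```python
# Coefficients for MF, GD, HPD are from the text; the other statistics'
# weights are elided in the source and therefore omitted here.
WEIGHTS = {"MF": 0.5e-5, "GD": 1e-3, "HPD": 0.5}

def shaped_reward(prev, cur, weights=WEIGHTS):
    """r_t = sum_s w_s * (s_t - s_{t-1}) over the per-frame statistics."""
    return sum(w * (cur[k] - prev[k]) for k, w in weights.items())

prev = {"MF": 10.0, "GD": 0.0,  "HPD": 0.0}
cur  = {"MF": 12.0, "GD": 50.0, "HPD": -0.1}
print(shaped_reward(prev, cur))   # ~1e-5: the gold gain is offset by the HP loss
```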
During the training and testing processes, we set the frame-skipping parameter to 2. This means that we take an action every 3 frames, or equivalently every 0.2 second, which is on the order of the human reaction time of 0.1 second. We set t_max = 20 (4 seconds) to alleviate the delayed reward. To encourage exploration, we use ε-greedy sampling in training with ε = 0.255. Specifically, the first 5 action types are sampled with probability 0.05 each and the action "Retreat" with probability 0.005. For actions with additional parameters, since the parameters lie in bounded sets, we draw these parameters from a uniform distribution. Moreover, if the sampled action is infeasible, we execute the greedy policy over the feasible ones, so the effective exploration rate is less than ε (see the sketch after this paragraph). We use 48 parallel workers with a constant learning rate of 0.001 in training and 1 worker with deterministic sampling in validation. The training and validation performances are plotted in Figure 3. [Figure 3 caption: We smooth the original noisy curves (plotted in light colors) to their running averages (plotted in dark colors). The 3 rows show the average episode length, the average reward sum per episode in training, and the average reward sum per episode in validation, for the two algorithms respectively. A positive reward sum usually indicates a winning game, and vice versa. The proposed P-DQN learns much faster than its precedent in our setting. (a) Performance of P-DQN. (b) Performance of DDPG.] We implemented the DDPG (BID8) algorithm within our learning environment to have a fair comparison; the exact network structure is plotted in Figure 1. Each algorithm is allowed to run for 15 million steps, which corresponds to roughly 140 minutes of wall-clock time when parallelized over 48 workers. From the experimental results, we can see that our algorithm P-DQN learns the value network and the policy network much faster than the other algorithm. In (a1), we see that the average length of games increases at first, reaches its peak when the two players' strengths are close, and decreases when our player can easily defeat the opponent. In addition, in (a2) and (a3), we see that the total reward per episode increases consistently in training as well as in test settings. The DDPG algorithm may not be suitable for hybrid actions with both a discrete part and a continuous part. The major difference is that the maximization over k when we need to select an action is computed explicitly in P-DQN, instead of being approximated implicitly by the policy network as in DDPG. Moreover, with a deterministic policy network, we extend the DQN algorithm to hybrid action spaces of discrete and continuous types, which makes the P-DQN algorithm more suitable for realistic scenarios. Previous deep reinforcement learning algorithms can mostly work with either a discrete or a continuous action space. In this work, we consider the scenario with a discrete-continuous hybrid action space. In contrast to existing approaches, which approximate the hybrid space by a discrete set or relax it into a continuous set, we propose the parameterized deep Q-network (P-DQN), which extends the classical DQN with a deterministic policy for the continuous part of the actions. Empirical experiments on training an AI for King of Glory, one of the most popular games, demonstrate the efficiency and effectiveness of P-DQN.
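The feasibility masking and the (cos α, sin α) direction encoding mentioned above are easy to state in code. This is a minimal sketch; the function names and the example numbers are illustrative only.

```python
import numpy as np

def masked_greedy(q_values, usable):
    """argmax_k Q(s, k, x^k) over usable actions only; an action may be
    unusable because a skill is unlevelled, MP is lacking, or it is on CD."""
    return int(np.argmax(np.where(usable, q_values, -np.inf)))

def unit_direction(raw):
    """Normalize a raw 2-d output to (cos a, sin a), avoiding the 0/2pi wrap
    that a directly-predicted angle would suffer from."""
    return raw / (np.linalg.norm(raw) + 1e-8)

q = np.array([1.2, 3.4, 0.7, 2.9, -0.5, 0.1])
usable = np.array([True, True, False, False, False, True])
print(masked_greedy(q, usable))              # -> 1 (Attack, in the order above)
print(unit_direction(np.array([3.0, 4.0])))  # -> [0.6, 0.8]
```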
A DQN and DDPG hybrid algorithm is proposed to deal with the discrete-continuous hybrid action space.
1,436
scitldr
Over the past decade, knowledge graphs became popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translating embedding models (e.g., TransE) are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns. However, they are mostly inefficient in capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero. They also lose information when learning relations with reflexive patterns since they become symmetric and transitive. We propose the Multiple Distance Embedding model (MDE) that addresses these limitations and a framework which enables collaborative combinations of latent distance-based terms (MDE). Our solution is based on two principles: 1) using limit-based loss instead of margin ranking loss and 2) by learning independent embedding vectors for each of terms we can collectively train and predict using contradicting distance terms. We further demonstrate that MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns. We propose MDE as a neural network model which allows us to map non-linear relations between the embedding vectors and the expected output of the score function. Our empirical show that MDE outperforms the state-of-the-art embedding models on several benchmark datasets. While machine learning methods conventionally model functions given sample inputs and outputs, a subset of Statistical Relational Learning (SRL) approaches specifically aim to model "things" (entities) and relations between them. These methods usually model human knowledge which is structured in the form of multi-relational Knowledge Graphs (KG). KGs allow semantically rich queries and are used in search engines, natural language processing (NLP) and dialog systems. However, they usually miss many of the true relations , therefore, the prediction of missing links/relations in KGs is a crucial challenge for SRL approaches. A KG usually consists of a set of facts. A fact is a triple (head, relation, tail) where heads and tails are called entities. Among the SRL models, distance-based KG embeddings are popular because of their simplicity, their low number of parameters, and their efficiency on large scale datasets. Specifically, their simplicity allows integrating them into many models. Previous studies have integrated them with logical rule embeddings , have adopted them to encode temporal information and have applied them to find equivalent entities between multi-language datasets . Soon after the introduction of the first multi-relational distance-based method TransE it was acknowledged that it is inefficient in learning of symmetric relations, since the norm of the representation vector for all the symmetric relations in the KG becomes close to zero. This means the model cannot distinguish well between different symmetric relations in a KG. To extend this model many variations are studied afterwards, e.g., TransH (b), TransR (b), TransD , and STransE . Even though they solved the issue of symmetric relations, they introduced a new problem: these models were no longer efficient in learning the inversion and composition relation patterns that originally TransE could handle. 
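The symmetric-relation degeneracy of TransE mentioned above can be seen in a few lines. This is a toy numeric illustration (all values arbitrary), not an experiment from the paper: fitting one translation vector to both directions of a symmetric pair drives its norm to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
h, t = rng.normal(size=8), rng.normal(size=8)
r = rng.normal(size=8)

# A symmetric relation asks TransE for both h + r ~ t and t + r ~ h.
# The gradient of ||h+r-t||^2 + ||t+r-h||^2 w.r.t. r is 4r, so gradient
# descent drives ||r|| to zero: all symmetric relations collapse together.
for _ in range(500):
    grad = 2 * (h + r - t) + 2 * (t + r - h)   # = 4r
    r -= 0.05 * grad
print(np.linalg.norm(r))   # essentially zero
```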
Besides, as noted in , within the family of distancebased embeddings, usually reflexive relations are forced to become symmetric and transitive. In this study, we take advantage of independent vector representations of vectors that enable us to view the same relations from different aspects and put forward a translation-based model that addresses these limitations and allows the learning of all three relation patterns. In addition, we address the issue of the limit-based loss function in finding an optimal limit and suggest an updating limit loss function to be used complementary to the current limit-based loss function which has fixed limits. Moreover, we frame our model into a neural network structure that allows it to learn non-linear patterns between embedding vectors and the expected output which substantially improves the generalization power of the model in link prediction tasks. The model performs well in the empirical evaluations, improving upon the state-of-the-art in link prediction benchmarks. Since our approach involves several elements that model the relations between entities as the geometric distance of vectors from different views, we dubbed it multipledistance embeddings (MDE). Given the set of all entities E and the set of all relations R, we formally define a fact as a triple of the form (h, r, t) in which h is the head and t is the tail, h, t ∈ E and r ∈ R is a relation. A knowledge graph KG is a subset of all true facts KG ⊂ ζ and is represented by a set of triples. An embedding is a mapping from an entity or a relation to their latent representation. A latent representation is usually a (set of) vector(s), a matrix or a tensor of numbers. A relational learning model is made of an embedding function and a prediction function that given a triple (h, r, t) it determines if (h, r, t) ∈ ζ. We represent the embedding representation of an entity h with a lowercase letter h if it is a vector and with an uppercase letter H if it is a matrix. The ability to encode different patterns in the relations can show the generalization power of a model: Definition 1. A relation r is symmetric (antisymmetric) if ∀x, y r(x, y) ⇒ r(y, x) (r(x, y) ⇒ ¬r(y, x) ). A clause with such a structure has a symmetry (antisymmetry) pattern. Definition 2. A relation r 1 is inverse to relation r 2 if ∀x, y r 2 (x, y) ⇒ r 1 (y, x). A clause with such a form has an inversion pattern. Definition 3. A relation r 1 is composed of relation r 2 and relation r 3 if ∀x, y, z A clause with such a form has a composition pattern. Tensor Factorization and Multiplicative Models define the score of triples via pairwise multiplication of embeddings. DistMult simply multiplies the embedding vectors of a triple element by element h, r, t as the score function. Since multiplication of real numbers is symmetric, DistMult can not distinguish displacement of head relation and tail entities and therefore, it can not model anti-symmetric relations. ComplEx solves the issue of DistMult by the idea that the complex conjugate of the tail makes it non-symmetric. By introducing complex-valued embeddings instead of realvalued embeddings to DistMult, the score of a triple in ComplEx is Re(h diag(r)t) witht the conjugate of t and Re is the real part of a complex value. ComplEx is not efficient in encoding composition rules . In RESCAL instead of a vector, a matrix represents the relation r, and performs outer products of h and t vectors to this matrix so that its score function becomes h Rt. 
A simplified version of RESCAL is HolE that defines a vector for r and performs circular correlation of h and t has been found equivalent to ComplEx. Another tensor factorization model is Canonical Polyadic (CP) . In CP decomposition, each entity e is represented by two vectors h e, t e ∈ R d, and each relation r has a single embedding vector v r ∈ R d. MDE is similarly based on the idea of independent vector embeddings. A study suggests that in CP, the independence of vectors causes the poor performance of CP in KG completion, however, we show that the independent vectors can strengthen a model if they are combined complementarily. SimplE analogous to CP, trains on two sets of subject and object entity vectors. SimplE's score function, 1 2 h ei, r, t ej + 1 2 h ej, r −1, t ej, is the average of two terms. The first term is similar to DistMult. However, its combination with the second term and using a second set of entity vectors allows SimplE to avoid the symmetric issue of DistMult. SimplE allows learning of symmetry, anti-symmetry and inversion patterns. However, it is unable to efficiently encode composition rules, since it does not model a bijection mapping from h to t through relation r. In Latent Distance Approaches the score function is the distance between embedding vectors of entities and relations. In the view of social network analysis, originally proposed distance of entities −d(h, t) as the score function for modeling uni-relational graphs where d(., .) means any arbitrary distance, such as Euclidean distance. SE generalizes the distance for multi-relational data by incorporating a pair of relation matrices into it. TransE represents relation and entities of a triple by a vector that has this relation where. p is the p-norm. To better distinguish entities with complex relations, TransH (a) projects the vector of head and tail to a relation-specific hyperplane. Similarly, TransR follows the idea with relation-specific spaces and extends the distance function to M r h + r − M r t p. RotatE combines translation and rotation and defines the distance of a t from tail h which is rotated the amount r as the score function of a triple −d(h • r, t) where • is Hadamard product. Neural Network Methods train a neural network to learn the interaction of the h, r and t. ER-MLP is a two layer feedforward neural network considering h, r and t vectors in the input. NTN is neural tensor network that concatenates head h and tail t vectors and feeds them to the first layer that has r as weight. In another layer, it combines h and t with a tensor R that represents r and finally, for each relation, it defines an output layer r to represent relation embeddings. In SME relation r is once combined with the head h to get g u (h, r), and similarly it is combined with the tail t to get g v (t, r). SME defines a score function by the dot product of this two functions in the hidden layer. In the linear SME, g(e, r) is equal to M The score function of MDE involves multiple terms. We first explain the intuition behind each term and then explicate a framework that we suggest to efficiently utilize them such that we benefit from their strengths and avoid their weaknesses. Inverse Relation Learning: Inverse relations can be a strong indicator in knowledge graphs. For example, if IsP arentOf (m, c) represents that a person m is a parent of another person c, then this could imply IsChildOf (c, m) assuming that this represents the person c being the child of m. 
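To keep the score functions surveyed above side by side, here is a minimal sketch of DistMult, ComplEx, TransE, and RotatE. The exact conventions (norm order, sign) vary across papers; this version just illustrates the structural point that DistMult cannot break head/tail symmetry while ComplEx can.

```python
import numpy as np

def distmult(h, r, t):                      # multiplicative; symmetric in h, t
    return float(np.sum(h * r * t))

def complex_score(h, r, t):                 # ComplEx: Re(sum h * r * conj(t))
    return float(np.real(np.sum(h * r * np.conj(t))))

def transe(h, r, t, p=2):                   # distance-based (lower = better)
    return float(np.linalg.norm(h + r - t, ord=p))

def rotate(h, phase, t):                    # RotatE: rotate by unit-modulus phases
    return float(np.linalg.norm(h * np.exp(1j * phase) - t))

d = 16
h, r, t = (np.random.randn(d) for _ in range(3))
assert distmult(h, r, t) == distmult(t, r, h)   # symmetry it cannot escape
hc, rc, tc = (np.random.randn(d) + 1j * np.random.randn(d) for _ in range(3))
print(complex_score(hc, rc, tc), complex_score(tc, rc, hc))  # generally differ
```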
This indication is also valid in cases when this only holds in one direction, e.g. for the relations IsM otherOf and IsChildOf. In such a case, even though the actual inverse IsP arentOf may not even exist in the KG, we can still benefit from inverse relation learning. To learn the inverse of the relations, we define a score function S 2: Symmetric Relations Learning: It is possible to easily check that the formulation h + r − t allows 1 learning of anti-symmetric pattern but when learning symmetric relations, r tends toward zero which limits the ability of the model in separating entities specially if symmetric relations are frequent in the KG. For learning symmetric relations, we suggest the term S 3 as a score function. It learns such relations more efficiently despite it is limited in the learning of antisymmetric relations. Lemma 1. S 1 allows modeling antisymmetry, inversion and composition patterns and S 2 allows modeling symmetry patterns. (See proof in Appendix A) Relieving Limitations on Learning of Reflexive Relations: A previous study highlighted the common limitations of TransE, FTransE, STransE, TransH and TransR for learning reflexive relations where these translation-based models force the reflexive relations to become symmetric and transitive. To relieve these limitations, we define S 4 as a score function which is similar to the score of RotatE i.e., h • r − t p but with the Hadamard operation on the tail. In contrast to RotatE which represents entities as complex vectors, S 4 only holds in the real space: Lemma 2. The following restrictions of translation based embeddings approaches do not apply to the S 4 score function. R1: if a relation r is reflexive, on ∆ ∈ E, r it will be also symmetric on ∆. R2: if r is reflexive on ∆ ∈ E, r it will be also be transitive on ∆. (See proof in Appendix B) Model Definition: To incorporate different views to the relations between entities, we define these settings for the model: 1. Using limit-based loss instead of margin ranking loss. 2. Each aggregated term in the score represents a different view of entities and relations with an independent set of embedding vectors. 3. In contrast to ensemble approaches that incorporate models by training independently and testing them together, MDE is based on multi-objective optimization that jointly minimizes the objective functions. However, when aggregating different terms in the score function, the summation of opposite vectors can cause the norm of these vectors to diminish during the optimization. For example if S 1 and S 3 are added together, the minimization would lead to relation(r) vectors with zero norm value. To address this issue, we represent the same entities with independent variables in different distance functions. Based on CP, MDE considers four vectors e i, e j, e k, e l, ∈ R d as the embedding vector of each entity e, and four vectors r i, r j, r k, r l ∈ R d for each relation r. The score function of MDE for a triple (h, r, t) is defined as weighted sum of listed score functions: where ψ, w 1, w 2, w 3, w 4 ∈ R are constant values. In the following, we show using ψ and limitbased loss, the combination of the terms in equation 5 is efficient, such that if one of the terms recognises if a sample is true F M DE would also recognize it. 
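The weighted combination defining the MDE score can be sketched as follows. The exact forms of S1-S4 are partly elided in this copy; the forms below are one plausible reading reconstructed from the appendix proofs (translation, inverse view, symmetric view, Hadamard on the tail) and should be treated as an assumption. The weights and ψ use the values reported in the experiments below.

```python
import numpy as np

def mde_score(E, R, triple, w=(0.16, 0.33, 0.16, 0.33), psi=1.2, p=2):
    """f_MDE = w1*S1 + w2*S2 + w3*S3 + w4*S4 - psi. E and R each hold four
    independent embedding tables, matching the four vectors per entity and
    relation described above. Assumed term forms:
      S1 = ||h + r - t||   (translation view)
      S2 = ||t + r - h||   (inverse-relation view)
      S3 = ||h + t - r||   (symmetric-relation view)
      S4 = ||h - r * t||   (Hadamard product on the tail)."""
    h, r, t = triple
    s1 = np.linalg.norm(E[0][h] + R[0][r] - E[0][t], ord=p)
    s2 = np.linalg.norm(E[1][t] + R[1][r] - E[1][h], ord=p)
    s3 = np.linalg.norm(E[2][h] + E[2][t] - R[2][r], ord=p)
    s4 = np.linalg.norm(E[3][h] - R[3][r] * E[3][t], ord=p)
    return float(np.dot(w, (s1, s2, s3, s4)) - psi)

d, n_entities, n_relations = 8, 5, 3
E = [np.random.randn(n_entities, d) for _ in range(4)]
R = [np.random.randn(n_relations, d) for _ in range(4)]
print(mde_score(E, R, (0, 1, 2)))
```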
Limit-based Loss: Because the margin ranking loss minimizes the sum of errors from directly comparing the scores of negative samples to those of positive samples, when it is applied to translation embeddings it is possible that the score of a correct triple is not small enough to satisfy the relation encoded by the score function. To enforce that the scores of positive triples become lower than those of negative ones, the limit-based loss minimizes the objective function such that the score for all the positive samples becomes less than a fixed limit; a later extension of the limit-based loss additionally forces the score of the negative samples to become greater than a second fixed limit. (Figure 1 gives a geometric illustration of the translation terms considered in MDE.) We train our model with the same loss function, in which the two sums range over the sets of positive and negative samples and β_1, β_2 > 0 are constants denoting the importance of the positive and negative samples. This version of the limit-based loss minimizes the aggregated error such that the score for positive samples becomes less than γ_1 and the score for negative samples becomes greater than γ_2 (a sketch follows below). To find optimal limits for the limit-based loss, we suggest updating the limits during training (see the explanation in Appendix D). Lemma 3. There exist ψ and γ_1, γ_2 ≥ 0 (γ_1 ≥ γ_2) such that if one of the terms in f_MDE estimates a fact as true, f_MDE also predicts it as a true fact. Consequently, the same also holds for the capability of MDE to allow learning of different relation patterns. (See proof in Appendix C.) It is notable that without the introduction of ψ and the limits γ_1, γ_2 from the limit-based loss, Lemma 3 does not hold; framing the model with these settings is what makes the efficient combination of the terms in f_MDE possible. In contrast to SimplE, which ties together the relation vectors of its two terms, MDE does not directly relate them, so as to take advantage of independent relation and entity vectors when combining opposing terms. The learning of symmetric relations and the training over the inverse of relations have each been studied previously; providing a way to gather all these benefits in one model is a novelty of MDE. Besides, complementary modeling of different vector-based views of a knowledge graph is a novel contribution. The MDE score already aggregates a set of terms multiplied by weights. We take advantage of this structure to model MDE as a layer of a neural network, which allows the embedding vectors and the multiplying weights to be learned jointly during optimization. To create such a neural network, we multiply ψ by a weight w_5 and feed the MDE score to an activation function. We call this extension of MDE MDE_NN, where σ is the logistic sigmoid function and w_1, w_2, ..., w_5 are elements of a latent vector w estimated during the training of the model. This framing of MDE reduces the number of hyperparameters. The major advantage of MDE_NN in comparison to the current distance-based models is that the logistic sigmoid activation function allows non-linear mappings between the embedding vectors and the expected output for positive and negative samples. Considering the ever-growing size of KGs and the expansion of the web, it is crucial that the time and memory complexity of a relational model be minimal. Despite its limitations in expressivity, TransE is one of the popular models on large datasets due to its scalability.
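The limit-based loss equation itself is elided in this copy, but its described behavior, push positive scores below γ_1 and negative scores above γ_2 with importance weights β_1, β_2, admits a direct sketch:

```python
import torch

def limit_based_loss(f_pos, f_neg, gamma1, gamma2, beta1=1.0, beta2=1.0):
    """Sketch of the described objective: positives below gamma1,
    negatives above gamma2, each violation penalized linearly."""
    return (beta1 * torch.relu(f_pos - gamma1).sum() +
            beta2 * torch.relu(gamma2 - f_neg).sum())

print(limit_based_loss(torch.tensor([1.5, 2.5]), torch.tensor([0.5, 3.0]), 2.0, 2.0))
# -> 0.5 (one positive above the limit) + 1.5 (one negative below it) = 2.0
```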
With O(d) time complexity (of one mini-batch), where d is the size of embedding vectors, it is more efficient than RESCAL, NTN, and the neural network models. Similar to TransE, the time complexity of MDE is O(d). Due to the additive construction of MDE, the inclusion of more distance terms keeps the time complexity linear in the size of vector embeddings. Datasets: We experimented on four standard datasets: WN18 and FB15k are extracted by from Wordnet Freebase . We used the same train/valid/test sets as in ( and the of TransR and NTN from , and ER-MLP from . The on the inverse relation excluded datasets are from , Table 13 for TransE and RotatE and the rest are from 2. Evaluation Settings: We evaluate the link prediction performance by ranking the score of each test triple against its versions with replaced head, and once for tail. Then we compute the hit at N (Hit@N), mean rank (MR) and mean reciprocal rank (MRR) of these rankings. We report the evaluations in the filtered setting. Implementation: We implemented MDE in PyTorch 3. Following , we generated one negative example per positive example for all the datasets. We used Adadelta as the optimizer and fine-tuned the hyperparameters on the validation dataset. The ranges of the hyperparameters are set as follows: embedding dimension 25, 50, 100, 200, batch size 100, 150, and iterations 50, 100, 1000, 1500, 2500, 3600. We set the initial learning rate on all datasets to 10. For MDE, the best embedding size and γ 1 and γ 2 and β 1 and β 2 values on WN18 were 50 and 1.9, 1.9, 2 and 1 respectively and for FB15k were 200, 10, 13, 1, 1. The best found embedding size and γ 1 and γ 2 and β 1 and β 2 values on FB15k-237 were 100, 9, 9, 1 and 1 respectively and for WN18RR were 50, 2, 2, 5 and 1. We selected the coefficient of terms in equation 5, by grid search in the range 0.1 to 1.0 and testing those combinations of the coefficients where they create a convex combination. Found values are w 1 = 0.16, w 2 = 0.33, w 3 = 0.16, w 4 =0.33. We also tested for the best value for ψ between {0.1, 0.2,. . ., 1.5}. We use ψ = 1.2 for all the experiments. For MDE N N, we use the same γ 1, γ 2, β 1 and β 2 values except for WN18 that the γ 1 and γ 2 are 4. We use the embedding size 50 for WN18RR, 200 for WN18, 200 for FB15k-237 and 200 for FB15k. We use ψ = 2 for all the MDE N N experiments. To regulate the loss function and to avoid over-fitting, we estimate the score function for two sets of independent vectors and we take their average in the prediction. Another advantage of this operation is the reduction of required training iterations. As a , MDE reaches to the 99 percent of its ranking performance in 100 iterations, and MDE N N reaches its best performance in the benchmarks in just 50 iterations. Table 2 shows the of our experiment on FB15k-237 and WN18RR, where the improvement is much more significant. Due to the existence of hard limits in the limit-based loss, the mean rank in both MDE and MDE N N is much lower than other methods. The comparison of MDE to other state-of-the-art models, regardless of the MDE N N, shows the competitive performance of MDE. It is observable that while MDE generates only one negative sample per positive sample and is using vector sizes between 50 to 200, it challenges RotatE which employs relatively large embedding dimensions (from 125 up to 1000) and high number of negative samples (up to 1024). We observe that the application of sigmoid in MDE N N improves it significantly in all the benchmarks. 
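The filtered ranking protocol used in the evaluation above (rank each test triple's score against all head/tail corruptions, filtering out corruptions that are themselves known true triples) can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

def filtered_rank(scores, true_idx, known_true):
    """Rank of the true entity, with other known-true candidates filtered out.
    scores: plausibility of every candidate entity (higher = more plausible)."""
    s = scores.copy()
    mask = np.array([i in known_true and i != true_idx for i in range(len(s))])
    s[mask] = -np.inf                       # the "filtered" setting
    return 1 + int(np.sum(s > s[true_idx]))

def summarize(ranks, ns=(1, 10)):
    ranks = np.asarray(ranks, dtype=float)
    out = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    out.update({f"Hit@{n}": float((ranks <= n).mean()) for n in ns})
    return out

ranks = [filtered_rank(np.random.randn(100), i % 100, {i % 100, 3, 7})
         for i in range(20)]
print(summarize(ranks))
```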
Particularly, in the more challenging tests over WN18RR and FB15k-237, the improvement is more significant. For example, we can see that the construction of the neural network from the model increased its Hit@10 on FB15k-237 from 0.484 to 0.999. From analyzing the MRR scores, we can see that RotatE must be totally off in few cases whereas the MDE N N model almost never seems to be far off, but frequently fails to put the correct entity on top. To our knowledge, MDE N N outperforms all the current embedding models in the MR and Hit@10 measures and specially performs better than all the existing models in all the measures on WN18RR and FB15k-237 benchmarks. To better understand the role of each term in the score function of MDE, we embark two ablation experiments. First, we train MDE using one of the terms alone, and observe the link prediction performance of each term in the filtered setting. In the second experiment, we remove one of the terms at a time and test the effect of the removal of that term on the model after 100 iterations. Table 3 summarizes the of the first experiment on WN18RR and FB15k-237. We can see that S 4 outperforms the other terms while S 1 and S 3 performs very similar on these two datasets. Between the four terms, S 2 performs the worst since most of the relations in the test datasets follow an antisymmetric pattern and S 2 is not efficient in modeling them. Table 4 shows the of the second experiment. The evaluations on WN18RR and WN18 show that removal of S 4 has the most negative effect on the performance of MDE. The removal of S 1 that was one of the good performing terms in the last experiment has the least effect. Nevertheless, S 1 improves the MRR in the MDE. Also, when we remove S 2, the MRR and Hit@10 are negatively influenced, indicating that there exist cases that S 2 performs better than the other terms, although, in the individual tests, it performed the worst between all the terms. In this study, we showed how MDE relieves the expressiveness restrictions of the distance-based embedding models and proposed a general method to override these limitations for the older models. Beside MDE and RotatE, most of the existing KG embedding approaches are unable to allow modeling of all the three relation patterns. We framed MDE into a Neural Network structure and validated our contributions via both theoretical proofs and empirical . We demonstrated that with multiple views to translation embeddings and using independent vectors (that previously were suggested to cause poor performance ) a model can outperform the existing state-of-the-art models for link prediction. Our experimental confirm the competitive performance of MDE and particularly MDE N N that achieves state-of-the-art MR and Hit@10 performance on all the benchmark datasets. A PROOF OF LEMMA 1. Let r 1, r 2, r 3 be relation vector representations and e i, e j, e k are entity representations. A relation r 1 between (e i, e k) exists when a triple (e i, r 1, e k) exists and we show it by r 1 (e i, e k). Formally, we have the following : Antisymmetric Pattern. If r 1 (e i, e j) and r 1 (e j, e i) hold, in equation 1 for S 1, then: e i + r 1 = e j ∧ e j + r 1 = e i ⇒ e i + 2r 1 = e i Therefore S 1 allows encoding of relations with antisymmetric patterns. Symmetric Pattern. If r 1 (e i, e j) and r 1 (e j, e i) hold, for S 2 we have: e i + e j − r 1 = 0 ∧ e j + e i − r 1 = 0 ⇒ e j + e i = r 1 Therefore S 2 allows encoding relations with symmetric patterns. For S 1 we have: Inversion Pattern. 
If r 1 (e i, e j) and r 2 (e j, e i) hold, from Equation 1 we have: Therefore S 1 allows encoding relations with inversion patterns. Composition Pattern. If r 1 (e i, e k), r 2 (e i, e j) and, r 3 (e j, e k) hold, from equation 1 we have: e i + r 1 = e k ∧ e i + r 2 = e j ∧ e j + r 3 = e k ⇒ r 2 + r 3 = r 1 Therefore S 1 allows encoding relations with composition patterns. B PROOF OF LEMMA 2. Proof. R1: For such reflexive r 1, if r 1 (e i, e i) then r l (e j, e j). In this equation we have: e i = r 1 e i ∧ e j = r 1 e j ⇒ r 1 = U ⇒ e i = r 1 e j where U is unit tensor. R2: For such reflexive r 1, if r 1 (e i, e j) and r l (e j, e k) then r 1 (e j, e i) and r l (e k, e j). In the above equation we have: e i = r 1 e j ∧ e j = r 1 e k ⇒ e i = r 1 r 1 e j e k ∧ r i = U ⇒ e i = e j e k ⇒ e i + e k = r l C PROOF OF LEMMA 3. We show there is boundries for γ 1, γ 2, w 1, w 2, w 3, w 4, such that learning a fact by one of the terms in f M DE is enough to classify a fact correctly. Proof. We show the boundaries for three aggregated terms in the the distance function, it is easily possible to extend it to four and more terms. It is enough to show that there is at least one set of boundaries for the positive and negative samples that follows the constraints. The case to prove is when three of the distance functions classify a fact negative N and the one distance function e.g. s 2 classify it as positive P, and the case that s 1 and s 3 classify a fact as positive and s 2 classify it as negative. We set w 1 = w 3 = 1/4 and w 2 = 1/2 and assume that Sum is the value estimated by the score function of MDE, we have: There exist a = 2 and γ 1 = γ 2 = 2 and ψ = 1 that satisfy γ 1 > Sum ≥ 0 and the inequality 8. loss = the from equation 6 It can be easily checked that without introduction of ψ, there is no value of Sum that can satisfy both γ 1 > Sum ≥ 0 and the inequality 8 and we calculated the value of ψ based on the values of γ 1, γ 2 and a. In case that future studies discover new interesting distances, this Lemma shows how to basically integrate them into MDE. While the limit-based loss resolves the issue of margin ranking loss with distance based embeddings, it does not provide a way to find the optimal limits. Therefore the mechanism to find limits for each dataset and hyper-parameter is the try and error. To address this issue, we suggest updating the limits in the limit-based loss function during the training iterations. We denote the moving-limit loss by loss guide. loss guide = lim δ,δ →γ1 where the initial value of δ 0, δ 0 is 0. In this formulation, we increase the δ 0, δ 0 toward γ 1 and γ 2 during the training iterations such that the error for positive samples minimizes as much as possible. We test on the validation set after each 50 epoch and take those limits that give the best value during the tests. The details of the search for limits is explained in Algorithm 1. After observing the most promising values for limits in the preset number of iterations, we stop the search and perform the training while having the δ values fixed(fixed limit-base loss) to allow the adaptive learning to reach loss values smaller than the threshold. We based this approach on the idea of adaptive learning rate , where the Adadelta optimizer adapts the learning rate after each iteration, therefore in the loss guided we can update the limits without stopping the training iterations. In our experiments, the variables in the Algorithm 1, are as follows. threshold = 0.05, ξ = 0.1.
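Algorithm 1's moving-limit schedule described just above can be sketched as a skeleton. This is not the paper's code: `train_epoch` and `validate` are assumed placeholders for the actual training and evaluation routines, with ξ = 0.1 and the 50-epoch check taken from the text.

```python
def guided_limits(gamma1, gamma2, train_epoch, validate, xi=0.1,
                  eval_every=50, n_epochs=1000):
    """Grow the working limits toward gamma1/gamma2 during training and
    remember the pair that validates best; afterwards, retrain with the
    best limits held fixed (the fixed limit-based loss)."""
    d1 = d2 = 0.0
    best_limits, best_score = (d1, d2), float("-inf")
    for epoch in range(1, n_epochs + 1):
        train_epoch(pos_limit=d1, neg_limit=d2)   # uses the limit-based loss
        d1, d2 = min(d1 + xi, gamma1), min(d2 + xi, gamma2)
        if epoch % eval_every == 0:
            score = validate()
            if score > best_score:
                best_limits, best_score = (d1, d2), score
    return best_limits
```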
A novel method of modelling Knowledge Graphs based on Distance Embeddings and Neural Networks
1,437
scitldr
Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning. However, it suffers from the problem of unstable training. In this paper, we found that the instability is mostly due to the vanishing gradients on the generator. To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN. The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets. Generative adversarial networks (GANs) BID3 have been recently studied intensively and achieved great success in deep learning domain BID14 BID9 BID15. A typical GAN simulates a two-player minimax game, where one aims to fool the other and the overall system is finally able to achieve equilibrium. Specifically speaking, we have a generator G to generate fake data G(z) from a random variable z whose distribution density is p(z), and also we have a discriminator D(x) to discriminate the real x from the generated data G(z), where x ∼ p r (x) and p r is the distribution density of real data. We optimize the two players G(z) and D(x) by solving the following minimax problem: DISPLAYFORM0 This method is so called as the original GAN BID3. After this, many different types of GANs have been proposed, e.g., least-squared GAN BID9, cat-GAN BID15, W-GAN, Improved GAN BID14, so on and so forth, focusing on improving the performance of GANs and extending the GAN idea to other application scenarios. For instance, the original GAN is trained in a completely unsupervised learning way BID3, along with many variants, such as LS-GAN and cat-GAN. It was later extended to semi-supervised learning. In BID14, Salimans et al. proposed the Improved GAN to enable generation and classification of data simultaneously. In BID7, Li et al. extended this method to consider conditional data generation. Another issue regarding the unsupervised learning of GANs is the lack of training stability in the original GANs, mostly because of dimension mismatch. A lot of efforts have been dedicated to solve this issue. For instance, in, the authors theoretically found that the instability problem and dimension mismatch of the unsupervised learning GAN was due to the maxing out of Jensen-Shannon divergence between the true and fake distribution and therefore proposed using the Wasserstein distance to train GAN. However, to calculate the Wasserstein distance, the network functions are required to be 1-Lipschitz, which was simply implemented by clipping the weights of the networks in. Later, Gulrajani et. al. improved it by using gradient penalty BID4. Besides them, the same issue was also addressed from different perspectives. In BID13, Roth et al. used gradient norm-based regularization to smooth the f-divergence objective function so as to reduce dimension mismatch. However, the method could not directly work on f-divergence, which was intractable to solve, but they instead optimized its variational lower bound. Its converging rate is still an open question and its computational complexity may be high. 
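The minimax objective referenced by the elided Equation 1 is the standard GAN objective of Goodfellow et al.; a minimal PyTorch sketch of the two losses follows, assuming D outputs probabilities via a sigmoid head. The generator loss uses the common non-saturating variant rather than the literal minimax form.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, x_real, z):
    """D ascends E[log D(x)] + E[log(1 - D(G(z)))]."""
    p_real, p_fake = D(x_real), D(G(z).detach())
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
            F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

def generator_loss(D, G, z):
    # Non-saturating form: maximize log D(G(z)) instead of minimizing
    # log(1 - D(G(z))); both correspond to the same two-player game.
    p_fake = D(G(z))
    return F.binary_cross_entropy(p_fake, torch.ones_like(p_fake))
```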
On the other hand, there were also some efforts to solve the issue of mode collapse, so as to try to stabilize the training of GANs from another perspective, including the unrolled method in BID10, mode regularization with VAEGAN , and variance regularization with bi-modal Gaussian distributions BID5. However, all these methods were investigated in the context of unsupervised learning. Instability issue for semi-supervised GAN is still open. In this work, we focus on investigating the training stability issue for semi-supervised GAN. To the authors' best knowledge, it is the first work to investigate the training instability for semi-supervised GANs, though some were done for unsupervised GANs as aforementioned. The instability issue of the semi-supervised GAN BID14 is first identified and analyzed from a theoretical perspective. We prove that this issue is in fact caused by the vanishing gradients theorem on the generator. We thus propose to solve this issue by using collaborative training to improve its training stability. We theoretically show that the proposed method does not have vanishing gradients on the generator, such that its training stability is improved. Besides the theoretical contribution, we also show by experiments that the proposed method can indeed improve the training stability of the Improved GAN, and at the same time achieve comparable classification accuracy. It is also worth to note that BID7 proposed the Triple GAN that also possessed two discriminators. However, its purpose is focused on using conditional probability training (the original GAN uses unconditional probability) based on data labels to improve the training of GAN, but not on solving the instability issue. Therefore, the question of instability for the Triple GAN is still unclear. More importantly, the method, collaborative training, proposed for exploring the data labels with only unconditional probability in this paper, can also be applied to the Triple GAN to improve its training stability, in the case of conditional probability case. The rest of the paper is organized as follows: in Section 2, we present the generator vanishing gradient theorem of the Improved GAN. In Section 3, we propose a new method, collaborative training Wasserstein GAN (CTW-GAN) and prove its nonvanishing gradient theorem. In Section 4, we present our experimental and finally give our in Section 5. The improved GAN BID14 combines supervised and unsupervised learning to solve the semi-supervised classification problem by simulating a two-player minmax game with adversarial training. The adversarial training is split into two steps. In the first step, it minimizes the following objective function for discriminator D for data x and labels y: DISPLAYFORM0 In the second step, it minimizes the distance of feature matching to optimize the generator G: DISPLAYFORM1 where DISPLAYFORM2 and D (−3) (x) are the outputs from the (n − 3)-th layer for a net with n layers. In this subsection, we prove the theorem of vanishing gradients on the generator for Improved GAN. This explains why the Improved GAN lacks training stability, as showed on some datasets, such as MNIST (cf. Section 4).Theorem 2.1 (Vanishing gradients on the generator for Improved GAN) Let g θ: Z → X be a differentiable function that induces a distribution P g. Let P r be the real data distribution. Let D be a differentiable discriminator bounded by T, i.e., D(x) 2 ≤ T. 
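The generator's feature-matching step described above (Equations elided as DISPLAYFORM1/2) matches mean intermediate activations on real versus generated data; per the text, the features come from the (n − 3)-th layer of an n-layer discriminator. A minimal sketch, where `d_features` is an assumed handle to that intermediate layer:

```python
import torch

def feature_matching_loss(d_features, G, x_real, z):
    """|| E[f(x)] - E[f(G(z))] ||^2 over a minibatch, with f = d_features."""
    f_real = d_features(x_real).mean(dim=0)
    f_fake = d_features(G(z)).mean(dim=0)
    return torch.sum((f_real - f_fake) ** 2)
```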
If the discriminator is trained to converge, i.e., ||D − D*||_2 < ε, then the gradient of the generator's feature-matching objective vanishes as ε → 0 (the precise bound is elided in this copy). Proof 2.1: See Appendix A.1. This theorem implies that the generator gradients vanish when the discriminator is trained to converge. In this case, generator training saturates, which explains the training instability phenomenon of the Improved GAN. How to solve this problem is our next question. In this section, we propose a new method to solve the instability issue of the Improved GAN by using collaborative training between two GANs. These two GANs contribute to the adversarial training from two different perspectives, which may help avoid the drawbacks of each one; this is the basic idea behind the proposed method. The detailed procedure of CTW-GAN can be summarized as a minimax game carried out in two steps. In the first step, the discriminators D_c and D_w are optimized simultaneously. In the second step, the generator G is optimized by applying the two optimized discriminators D_c and D_w to G. (The objective equations for both steps are elided in this copy.) The overall architecture of CTW-GAN is described by Figure 1, where x_r and x_u stand for labeled and unlabeled data, respectively. [Figure 1: Architecture for CTW-GAN.] Bearing in mind the vanishing-gradients theorem for the generator of the Improved GAN, we may ask whether a similar problem exists for our proposed CTW-GAN. In the following, we prove that the proposed method does not have the vanishing-gradients issue on the generator, which therefore improves the training stability of the original Improved GAN. Theorem 3.1: Let P_r be any distribution. Let P_θ be the distribution of g_θ(z), with z a random variable with density p and g_θ a continuous function with respect to θ. Then there is a set of solutions D_c, D_w to the two-step optimization problem such that the generator gradient exists and its last term equals the gradient of the Wasserstein distance W(P_r, P_θ), whenever the term L(D_c, D_w) is well defined. Proof 3.1: See Appendix A.2. Remark: the above D_w is required to be 1-Lipschitz, i.e., ||D_w||_L ≤ 1. The constraint can be realized by weight clipping or gradient penalty (BID4). The proposed algorithm is described as follows. Algorithm 1 (CTW-GAN with gradient penalty). Require: gradient-penalty coefficient λ_p = 10, generator weight λ_g, and Adam hyperparameter α = 0.0001. Require: initial parameters θ_w for D_w, θ_c for D_c, and θ_g for G. While θ_g has not converged: for i = 1, ..., m, sample real data x ∼ p_r, noise z ∼ p(z), and a random variable ε ∼ U[0, 1], and update the two discriminators; then sample real data x ∼ p_r and noise z ∼ p(z) and update the generator (the exact update equations are elided in this copy; a sketch of the gradient-penalty step follows below). In this section, we shall present the experiments to evaluate the proposed method. Our evaluation goals are twofold. On one hand, we evaluate the stability of CTW-GAN in comparison to that of the original Improved GAN, to see whether our proposed method improves training stability or not. On the other hand, we evaluate whether the proposed method achieves classification performance comparable to the original Improved GAN. To this end, we run experiments on two datasets: MNIST and CIFAR-10. MNIST includes 50,000 training images, 10,000 validation images, and 10,000 testing images, which contain handwritten digits of size 28 × 28. Following BID14, we randomly select a small set of labeled data from the 60,000 training and validation images to perform semi-supervised learning, with selection sizes of 20, 50, 100, and 200 labeled examples. We run our experiments 9 times by giving the program different seeds; we use the seeds 1-9.
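The gradient-penalty step of Algorithm 1 and the collaborative generator objective admit a short sketch. The penalty follows the WGAN-GP construction cited above with λ_p = 10; the generator loss below, feature matching plus a λ_g-weighted Wasserstein term, is an assumed reading of the two-discriminator combination, as the exact equations are elided.

```python
import torch

def gradient_penalty(D_w, x_real, x_fake, lambda_p=10.0):
    """Penalize (||grad D_w(x_hat)||_2 - 1)^2 at random interpolates x_hat.
    Flattened inputs assumed; for images, reshape eps to broadcast."""
    eps = torch.rand(x_real.size(0), 1, device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(D_w(x_hat).sum(), x_hat, create_graph=True)[0]
    return lambda_p * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def ctw_generator_loss(fm_loss, D_w, x_fake, lambda_g):
    # Collaborative objective (assumed form): Improved-GAN feature matching
    # plus a weighted Wasserstein critic term from D_w.
    return fm_loss - lambda_g * D_w(x_fake).mean()
```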
For each seed, the labeled data is selected so as to have a balanced number of examples from each class. The rest of the training images are used as unlabeled data. In our method, we use three networks whose architectures are described in FIG0. We use batch normalization and add Gaussian noise to the output of each layer of the two discriminators as the original Improved GAN does BID14. We only tune the parameter λ = 0.1, 0.5 from two values on the MNIST dataset. We do not tune any other parameters, such as learning rate, step size, etc.: we keep these as in the original Improved GAN. The shown in TAB1 are reported with λ = 0.1, the threshold for gradient penalty is 10 and n critic = 5: DISPLAYFORM0 From the , we can easily see that the original improved GAN has one or two out of nine runs for training failure (unexpected high error rates and poor generate image quality). However, for our proposed method, no training failure occurs. This shows that our method improves the training stability indeed. On the other hand, besides making the training process more stable, our proposed method does not reduce the classification accuracy at all, which is beyond our original purpose of avoiding training instability of the Improved GAN. Reasoning it, it may imply that the information explored by the two discriminators may be very different, thus reflecting a distinct Method n=50 n=100 n=200 DGN BID6 3.33(±0.14) Virtual Adversarial BID11 2.12 Cat-GAN BID15 1.91 (±0.10) Skip Deep Generative Model BID8 1.32 (±0.07) Ladder network BID12 1.06 (±0.37) Auxiliary Deep Generative Model BID8 0 GAN 20.40 (±0.47) Ladder network BID12 19.58 (±0.46) Improved-GAN 0.1726 (±0.0032) Ours 0.1713 (±0.0014) There is no failure case found in three runs for the original GAN on CIFAR-10. We use 4000 labeled samples.source of information for data representation. Utilizing those different information sources may help to improve classification accuracy, as long as the source of information is meaningful to some extent, or at least not noise. In our method, we use a very simple network for D w with only two layers. It may be possible to further improve classification performance if a network with more layers is used. We leave it for future work. In this section, we test our proposed method on the data set of CIFAR-10. CIFAR-10 consists of colored images belonging to 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. There are 50,000 training and 10,000 testing samples with the size of 32 × 32. We split 5,000 training data of CIFAR-10 for validation if needed. Following BID14, we use a 9 layer deep convolutional network with dropout and weight normalization for the discriminator D c. The generator G is a 4 layer deep CNN with batch normalization. We use a very simple network with three layers for the discriminator D w, due to the limiting GPU resource. The network architectures are given in FIG1. TAB2 summarizes our on the semi-supervised learning task. On CIFAR-10 dataset, it is interesting to see that there is no failure case found for the Improved GAN in three runs at the moment. From the theoretical viewpoint, this may be due to the abundant richness of the image features in color being much harder to be modeled by the neural nets than that of MNIST in grayscale. Thus, the discriminator D c trained on CIFAR-10 does not as easily converge as the one trained on MNIST, such that the gradients on the generator do not vanish. However, it does not mean that this possibility is avoided. 
In certain cases, as long as the discriminator is trained to converge., e.g., running more iterations than the generator, the gradients on the generator will surely vanish, as theoretically guaranteed by Theorem 2.1. On the other hand, our proposed method is still able to achieve comparable to the original Improved GAN, besides providing a theoretical guarantee to the training stability. Due to the limiting GPU resource, we use a very simple network for D w. In this sense, the characteristics captured by this network may not be rich enough. However, the showed that even with the very simple network, the classification performance obtained is roughly comparable to that of the Improved GAN. We expect that it would be possibly improved further if we have more GPU resources and are able to train a deeper network for D w. In the paper, we study the training instability issue of semi-supervised improved GAN. We have found that the training instability is mainly due to the vanishing gradients on the generator of the Improved GAN. In order to make the training of the Improved GAN more stable, we propose a collaborative training method to combine Wasserstein GAN with the semi-supervised improved GAN. Both theoretical analysis and experimental on MNIST and CIFAR-10 have shown the effectiveness of the proposed method to improve training stability of the Improved GAN. In addition, it also achieves the classification accuracy comparable to the original Improved GAN. We would like to thank National Natural Science Foundation of China FORMULA0 for previously supporting the authors to prepare for the knowledge and skills demanded by this work.
Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training
1,438
scitldr
It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function. We further develop a computationally efficient pipeline to compute this covariance function. We then use the ing GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks. Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks. Deep neural networks have emerged in recent years as flexible parametric models which can fit complex patterns in data. As a contrasting approach, Gaussian processes have long served as a traditional nonparametric tool for modeling. An equivalence between these two approaches was derived in BID17, for the case of one layer networks in the limit of infinite width. Neal (1994a) further suggested that a similar correspondence might hold for deeper networks. Consider a deep fully-connected neural network with i.i.d. random parameters. Each scalar output of the network, an affine transformation of the final hidden layer, will be a sum of i.i.d. terms. As we will discuss in detail below, in the limit of infinite width the Central Limit Theorem 1 implies that the function computed by the neural network (NN) is a function drawn from a Gaussian process (GP). In the case of single hidden-layer networks, the form of the kernel of this GP is well known BID17 BID25 ).This correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. prior over weights and biases can be replaced with a corresponding GP prior over functions. As noted by BID25, this substitution enables exact Bayesian inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. In light of the resurgence in popularity of neural networks, it is timely to revisit this line of work. We delineate the correspondence between deep and wide neural networks and GPs and utilize it for Bayesian training of neural networks on regression tasks. Our work touches on aspects of GPs, Bayesian learning, and compositional kernels. The correspondence between infinite neural networks and GPs was first noted by BID17 b). 
BID25 computes analytic GP kernels for single hidden-layer neural networks with error function or Gaussian nonlinearities and noted the use of the GP prior for exact Bayesian inference in regression. BID6 discusses several routes to building deep GPs and observes the degenerate form of kernels that are composed infinitely many times -a point we will return to Section 3.2 -but they do not derive the form of GP kernels as we do. BID10 also discusses constructing kernels equivalent to infinitely wide deep neural networks, but their construction does not go beyond two hidden layers with nonlinearities. Related work has also appeared outside of the GP context but in compositional kernel constructions. BID2 derives compositional kernels for polynomial rectified nonlinearities, which includes the Sign and ReLU nonlinearities, and can be used in GPs; our manner of composing kernels matches theirs, though the context is different. BID4 extends the construction of compositional kernels to neural networks whose underlying directed acyclic graph is of general form. They also prove, utilizing the formalism of dual activations, that compositional kernels originating from fully-connected topologies with the same nonlinearity become degenerate when composed infinitely many times. In a different context than compositional kernels, BID19; BID24 study the same underlying recurrence relation for the specific case of fully-connected networks and bounded nonlinearities. They distinguish regions in hyperparameter space with different fixed points and convergence behavior in the recurrence relations. The focus in these works was to better understand the expressivity and trainability of deep networks. Drawing inspiration from the multi-layer nature of deep neural networks, there is a line of work considering various approaches to stacking GPs, such as deep GPs BID15; BID3; BID11; BID6; BID1 ), which can give rise to a richer class of probabilistic models beyond GPs. This contrasts with our work, where we study GPs that are in direct correspondence with deep, infinitely wide neural networks. BID14 has recently explored the performance of GP models with deep kernels given in BID2, implemented with scalable approximations. However, they do not discuss the equivalence between deep neural networks and GPs with compositional kernels, which constitutes a conceptual contribution of our work. Furthermore, we note that the GP kernels in our work are more general than the compositional kernel construction outlined in BID2 in two respects: (i) we are not limited to rectified polynomials but can deal with general nonlinearities, and (ii) we consider two additional hyperparameters in the kernels, which would correspond to the weight and bias parameter variances in a neural network. Finally, BID8 connects dropout in deep neural networks with approximate Bayesian inference in deep GPs. Another series of recent works BID28 a); BID0 ), termed deep kernel learning, utilize GPs with base kernels which take in features produced by a deep multilayer neural network, and train the ing model end-to-end. Our work differs from these in that our GP corresponds to a multilayer neural network. Additionally, our GP kernels have many fewer parameters, and these parameters correspond to the hyperparameters of the equivalent neural network. 
We begin by specifying the form of a GP which corresponds to a deep, infinitely wide neural network - hereafter referred to as the Neural Network GP (NNGP) - in terms of a recursive, deterministic computation of the kernel function. The prescription is valid for generic pointwise nonlinearities in fully-connected feedforward networks. We develop a computationally efficient method (Section 2.5) to compute the covariance function corresponding to deep neural networks with fixed hyperparameters. In this work, as a first proof of concept of our NNGP construction, we focus on exact Bayesian inference for regression tasks, treating classification as regression on class labels. While less principled, least-squares classification performs well BID23 and allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (MNIST and CIFAR-10 classification). Note that it is possible to extend GPs to softmax classification with cross entropy loss (BID26; BID21), which we aim to investigate in future work. We conduct experiments making Bayesian predictions on MNIST and CIFAR-10 (Section 3) and compare against NNs trained with standard gradient-based approaches. The experiments explore different hyperparameter settings of the Bayesian training including network depth, nonlinearity, training set size (up to and including the full dataset consisting of tens of thousands of images), and weight and bias variance. Our experiments reveal that the best NNGP performance is consistently competitive against that of NNs trained with gradient-based techniques, and the best NNGP setting, chosen across hyperparameters, often surpasses that of conventional training (Section 3, TAB0). We further observe that, with increasing network width, the performance of neural networks with gradient-based training approaches that of the NNGP computation, and that the GP uncertainty is strongly correlated with prediction error. Furthermore, the performance of the NNGP depends on the structure of the kernel, which can be connected to recent work on signal propagation in networks with random parameters BID24. We begin by specifying the correspondence between GPs and deep, infinitely wide neural networks, which hinges crucially on application of the Central Limit Theorem. We review the single-hidden layer case (Section 2.2) before moving to the multi-layer case (Section 2.3). Consider an L-hidden-layer fully-connected neural network with hidden layers of width N_l (for layer l) and pointwise nonlinearities φ. Let x ∈ R^{d_in} denote the input to the network, and let z^L ∈ R^{d_out} denote its output. The ith component of the activations in the lth layer, post-nonlinearity and post-affine transformation, are denoted x^l_i and z^l_i respectively. We will refer to these as the post- and pre-activations. (We let x^0_i ≡ x_i for the input, dropping the Arabic numeral superscript, and instead use a Greek superscript x^α to denote a particular input α.) Weight and bias parameters for the lth layer have components W^l_{ij}, b^l_i, which are independent and randomly drawn, and we take them all to have zero mean and variances σ_w^2/N_l and σ_b^2, respectively. GP(μ, K) denotes a Gaussian process with mean and covariance functions μ(·), K(·, ·), respectively. We briefly review the correspondence between single-hidden layer neural networks and GPs (BID17; BID25). The ith component of the network output, z^1_i, is computed as z^1_i(x) = b^1_i + Σ_{j=1}^{N_1} W^1_{ij} x^1_j(x), with x^1_j(x) = φ(b^0_j + Σ_{k=1}^{d_in} W^0_{jk} x_k), where we have emphasized the dependence on input x.
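As an aside, the single-hidden-layer claim is easy to verify numerically. The following is a minimal sketch (not part of the paper; all function names and hyperparameter values are our own choices) that draws many i.i.d. parameter settings of a one-hidden-layer network and checks that the output statistics stabilize as the width grows, consistent with z^1_i(x) approaching a Gaussian with variance σ_b^2 + σ_w^2 C(x, x):

import numpy as np

def random_net_outputs(x, width, sigma_w=1.5, sigma_b=0.1, n_draws=2000, phi=np.tanh):
    """Draw z^1(x) for many i.i.d. parameter samples of a 1-hidden-layer net.
    x: (d_in,) input. Returns an array of shape (n_draws,)."""
    d_in = x.shape[0]
    # W^0 ~ N(0, sigma_w^2 / d_in), b^0 ~ N(0, sigma_b^2)
    W0 = np.random.randn(n_draws, width, d_in) * sigma_w / np.sqrt(d_in)
    b0 = np.random.randn(n_draws, width) * sigma_b
    h = phi(W0 @ x + b0)                      # post-activations x^1_j(x)
    # W^1 ~ N(0, sigma_w^2 / width), b^1 ~ N(0, sigma_b^2)
    W1 = np.random.randn(n_draws, width) * sigma_w / np.sqrt(width)
    b1 = np.random.randn(n_draws) * sigma_b
    return (W1 * h).sum(axis=1) + b1          # z^1(x), one scalar per draw

x = np.random.randn(16)
for width in [1, 10, 100, 500]:
    z = random_net_outputs(x, width)
    # mean -> 0; variance -> sigma_b^2 + sigma_w^2 * C(x, x) as width grows
    print(width, z.mean(), z.var())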
Because the weight and bias parameters are taken to be i.i.d., the post-activations x^1_j, x^1_{j'} are independent for j ≠ j'. Moreover, since z^1_i(x) is a sum of i.i.d. terms, it follows from the Central Limit Theorem that in the limit of infinite width N_1 → ∞, z^1_i(x) will be Gaussian distributed. Likewise, from the multidimensional Central Limit Theorem, any finite collection of {z^1_i(x^{α=1}), ..., z^1_i(x^{α=k})} will have a joint multivariate Gaussian distribution, which is exactly the definition of a Gaussian process. Therefore we conclude that z^1_i ∼ GP(μ^1, K^1), a GP with mean μ^1 and covariance K^1, which are themselves independent of i. Because the parameters have zero mean, we have that μ^1(x) = E[z^1_i(x)] = 0 and K^1(x, x') ≡ E[z^1_i(x) z^1_i(x')] = σ_b^2 + σ_w^2 C(x, x'), where we have introduced C(x, x') as in BID17; it is obtained by integrating against the distribution of W^0, b^0. Note that, as any two z^1_i, z^1_j for i ≠ j are joint Gaussian and have zero covariance, they are guaranteed to be independent despite utilizing the same features produced by the hidden layer. The arguments of the previous section can be extended to deeper layers by induction. We proceed by taking the hidden layer widths to be infinite in succession (N_1 → ∞, N_2 → ∞, etc.) as we continue with the induction, to guarantee that the input to the layer under consideration is already governed by a GP. In Appendix C we provide an alternative derivation in terms of Bayesian marginalization over intermediate layers, which does not depend on the order of limits, in the case of a Gaussian prior on the weights. A concurrent work BID5 further derives the convergence rate towards a GP if all layers are taken to infinite width simultaneously, but at different rates. Suppose that z^{l−1}_j is a GP, identical and independent for every j (and hence the x^l_j(x) are independent and identically distributed). After l − 1 steps, the network computes z^l_i(x) = b^l_i + Σ_{j=1}^{N_l} W^l_{ij} x^l_j(x), with x^l_j(x) = φ(z^{l−1}_j(x)). As before, z^l_i(x) is a sum of i.i.d. random terms so that, as N_l → ∞, any finite collection {z^l_i(x^{α=1}), ..., z^l_i(x^{α=k})} will have a joint multivariate Gaussian distribution; that is, z^l_i ∼ GP(0, K^l) with K^l(x, x') = σ_b^2 + σ_w^2 E_{z^{l−1}_i ∼ GP(0, K^{l−1})}[φ(z^{l−1}_i(x)) φ(z^{l−1}_i(x'))] (Equation 4). By induction, the expectation in Equation 4 is over the GP governing z^{l−1}_i, but this is equivalent to integrating against the joint distribution of only z^{l−1}_i(x) and z^{l−1}_i(x'). The latter is described by a zero mean, two-dimensional Gaussian whose covariance matrix has distinct entries K^{l−1}(x, x), K^{l−1}(x, x'), and K^{l−1}(x', x'). As such, these are the only three quantities that appear in the result. We introduce the shorthand K^l(x, x') = F_φ(K^{l−1}(x, x'), K^{l−1}(x, x), K^{l−1}(x', x')) (Equation 5) to emphasize the recursive relationship between K^l and K^{l−1} via a deterministic function F whose form depends only on the nonlinearity φ. This gives an iterative series of computations which can be performed to obtain K^L for the GP describing the network's final output. For the base case, we can utilize the recursion relating K^1 and K^0, where K^0(x, x') = E[z^0_j(x) z^0_j(x')] = σ_b^2 + σ_w^2 (x · x'/d_in). In fact, these recurrence relations have appeared in other contexts. They are exactly the relations derived in the mean field theory of signal propagation in fully-connected random neural networks (BID19; BID24) and also appear in the literature on compositional kernels (BID2; BID4). For certain activation functions, Equation 5 can be computed analytically (BID2; BID4). In the case of the ReLU nonlinearity, it yields the well-known arccosine kernel BID2, whose form we reproduce in Appendix B. When no analytic form exists, it can instead be efficiently computed numerically, as described in Section 2.5. Here we provide a short review of how a GP prior over functions can be used to do Bayesian inference; see e.g.
BID21 for a comprehensive review of GPs. Given a dataset D = {(x^1, t^1), ..., (x^n, t^n)}, we wish to make a Bayesian prediction at test point x^* using a distribution over functions z(x). This distribution is constrained to take values z ≡ (z^1, ..., z^n) on the training inputs x ≡ (x^1, ..., x^n) and P(z^*|D, x^*) = ∫ P(z^*|z, x, x^*) P(z|D) dz = (1/P(t)) ∫ P(z^*, z|x^*, x) P(t|z) dz (Equation 7), where t = (t^1, ..., t^n)^T are the targets on the training set, and P(t|z) corresponds to observation noise. We will assume a noise model consisting of a Gaussian with variance σ^2 centered at z. If the conditions of Section 2.2 or 2.3 apply, our choice of prior over functions implies that z^1, ..., z^n, z^* are n + 1 draws from a GP and z^*, z | x^*, x ∼ N(0, K) is a multivariate Gaussian whose covariance matrix has the block form K = [[K_{D,D}, K^T_{x^*,D}], [K_{x^*,D}, K_{x^*,x^*}]], where the block structure corresponds to the division between the training set and the test point. As is standard in GPs, the integral in Equation 7 can be done exactly, resulting in z^*|D, x^* ∼ N(μ̄, K̄) with μ̄ = K_{x^*,D} (K_{D,D} + σ^2 I_n)^{−1} t (Equation 8) and K̄ = K_{x^*,x^*} − K_{x^*,D} (K_{D,D} + σ^2 I_n)^{−1} K^T_{x^*,D} (Equation 9), where I_n is the n × n identity. The predicted distribution for z^*|D, x^* is hence determined from straightforward matrix computations, yet nonetheless corresponds to fully Bayesian training of the deep neural network. The form of the covariance function used is determined by the choice of GP prior, i.e. the neural network model class, which depends on depth, nonlinearity, and weight and bias variances. We henceforth resume placing a superscript L as in K^L to emphasize the choice of depth for the compositional kernel. Given an L-layer deep neural network with fixed hyperparameters, constructing the covariance matrix K^L for the equivalent GP involves computing the Gaussian integral in Equation 4 for all pairs of training-training and training-test points, recursively for all layers. For some nonlinearities, such as ReLU, this integration can be done analytically. However, to compute the kernel corresponding to arbitrary nonlinearities, the integral must be performed numerically. The most direct implementation of a numerical algorithm for K^L would be to compute integrals independently for each pair of datapoints and each layer. This is prohibitively expensive and costs O(n_g^2 L (n_train^2 + n_train n_test)), where n_g^2 is the sampling density for the pair of Gaussian random variables in the 2D integral and n_train, n_test are the training and test set sizes, respectively. However, by careful pipelining, and by preprocessing all inputs to have identical norm, we can improve this cost to O(n_g^2 n_v n_c + L (n_train^2 + n_train n_test)), where n_v and n_c are sampling densities for a variance and correlation grid, as described below. In order to achieve this, we break the process into several steps: 1. Generate: pre-activations u = [−u_max, ..., u_max] consisting of n_g elements linearly spaced between −u_max and u_max; variances s = [0, ..., s_max] with n_v linearly spaced elements, where s_max < u_max^2; and correlations c = (−1, ..., 1) with n_c linearly spaced elements. Note that we are using fixed, rather than adaptive, sampling grids to allow operations to be parallelized and reused across datapoints and layers. 2. Populate a matrix F containing a lookup table for the function F_φ in Equation 5. This involves numerically approximating a Gaussian integral, in terms of the marginal variances s and correlations c. We guarantee that the marginal variance is identical for each datapoint, by preprocessing all datapoints to have identical norm at the input layer, so the number of entries in the lookup table need only be n_v n_c.
These entries are computed as the bivariate Gaussian expectation E[φ(u) φ(u')], with (u, u') zero-mean Gaussian with marginal variance s and correlation c, approximated numerically on the pre-activation grid u. 3. For every pair of datapoints x and x' in layer l, compute K^l(x, x') using Equation 5. Approximate the function F_φ by bilinear interpolation into the lookup table F from Step 2, where we interpolate into s using the value of K^{l−1}(x, x), and interpolate into c using K^{l−1}(x, x')/K^{l−1}(x, x), which is well defined due to data preprocessing to guarantee constant norm. 4. Repeat the previous step recursively for all layers. Bilinear interpolation has constant cost, so this has cost O(L (n_train^2 + n_train n_test)). This computational recipe allows us to compute the covariance matrix for the NNGP corresponding to any well-behaved nonlinearity φ. All computational steps above can be implemented using accelerated tensor operations, and computation of K^L is typically faster than solving the system of linear equations in Equations 8-9. Figure 6 illustrates the close agreement between the kernel function computed numerically (using this approach) and analytically, for the ReLU nonlinearity. It also illustrates the angular dependence of the kernel and its evolution with increasing depth. Finally, note that the full computational pipeline is deterministic and differentiable. The shape and properties of a deep network kernel are purely determined by hyperparameters of the deep neural network. Since GPs give exact marginal likelihood estimates, this kernel construction may allow principled hyperparameter selection, or nonlinearity design, e.g. by gradient ascent on the log likelihood w.r.t. the hyperparameters. Although this is not the focus of current work, we hope to return to this topic in follow-up work. An open source implementation of the algorithm is available at https://github.com/brain-research/nngp. We compare NNGPs with SGD-trained neural networks on the permutation invariant MNIST and CIFAR-10 datasets. The baseline neural network is a fully-connected network with identical width at each hidden layer. Training is on the mean squared error (MSE) loss, chosen so as to allow direct comparison to GP predictions. Formulating classification as regression often leads to good results BID22. Future work may involve evaluating the NNGP on a cross entropy loss using the approach in BID26; BID21. Training used the Adam optimizer BID13 with learning rate and initial weight/bias variances optimized over validation error using the Google Vizier hyperparameter tuner BID9. Dropout was not used. In future work, it would be interesting to incorporate dropout into the NNGP covariance matrix using an approach like that in BID24. For the study, nonlinearities were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels were encoded as a one-hot, zero-mean, regression target (i.e., entries of −0.1 for the incorrect class and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. Performance: We find that the NNGP often outperforms trained finite-width networks. See TAB0 and FIG0. The NNGP often outperforms finite-width networks, and neural network performance more closely resembles NNGP performance with increasing width. Test accuracy and mean squared error on the MNIST and CIFAR-10 datasets are shown for the best performing NNGP and best performing SGD-trained neural networks for a given width. 'NN-best' denotes the best performing (on the validation set) neural network across all widths and trials. Often this is the neural network with the largest width.
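To make the pipeline concrete, here is a compact sketch of the NNGP computation for the special case of the ReLU nonlinearity, where F_φ has the closed arccosine form (Appendix B) and no lookup table is needed. This is our own illustrative code, not the released implementation, and the hyperparameter values are arbitrary:

import numpy as np

def nngp_relu_kernel(X1, X2, depth, sw2=1.6, sb2=0.1):
    """Recursively build K^L between rows of X1 (n1, d) and X2 (n2, d) for a
    depth-L ReLU network, using the arccosine closed form for F_phi."""
    d = X1.shape[1]
    K12 = sb2 + sw2 * (X1 @ X2.T) / d        # K^0(x, x')
    K11 = sb2 + sw2 * (X1 * X1).sum(1) / d   # K^0(x, x): diagonal terms
    K22 = sb2 + sw2 * (X2 * X2).sum(1) / d
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
        K12 = sb2 + sw2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
        K11 = sb2 + (sw2 / 2) * K11          # theta = 0 on the diagonal
        K22 = sb2 + (sw2 / 2) * K22
    return K12

def nngp_predict(Xtr, t, Xte, depth, noise=1e-8):
    """Exact GP posterior mean/variance in the style of Equations 8-9."""
    n = len(Xtr)
    Kdd = nngp_relu_kernel(Xtr, Xtr, depth) + noise * np.eye(n)
    Ksd = nngp_relu_kernel(Xte, Xtr, depth)
    Kss = nngp_relu_kernel(Xte, Xte, depth)
    mean = Ksd @ np.linalg.solve(Kdd, t)               # (K_DD + s^2 I)^-1 t
    cov = Kss - Ksd @ np.linalg.solve(Kdd, Ksd.T)
    return mean, np.diag(cov)

For a general nonlinearity, the arccosine update inside the loop would instead be bilinear interpolation into the precomputed table F, exactly as in steps 2-4 above; classification would use zero-mean one-hot targets t, as in the experiments.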
We additionally find the performance of the best finite-width NNs, trained with a variant of SGD, approaches that of the NNGP with increasing layer width. This is interesting from at least two, potentially related, standpoints. NNs are commonly believed to be powerful because of their ability to do flexible representation learning, while our NNGP uses fixed basis functions; nonetheless, in our experiments we find no salient performance advantage to the former. It hints at a possible relationship between SGD and Bayesian inference in certain regimes - were the neural networks trained in a fully Bayesian fashion, rather than by SGD, the approach to the NNGP in the large width limit would be guaranteed. There is recent work suggesting that SGD can implement approximate Bayesian inference BID16 under certain assumptions. The similarity of the performance of the widest NN in FIG0 with the NNGP suggests that the limit of infinite network width, which is inherent to the GP, is far from being a disadvantage. Indeed, in practice it is found that the best generalizing NNs are in fact the widest. To support this, in FIG1 we show the generalization gap from an experiment in which we train 180 fully-connected networks with five hidden layers on CIFAR-10 with a range of layer widths. For this experiment, we trained the networks using a standard cross entropy loss rather than MSE, leading to a slight difference in performance. Uncertainty: One benefit in using a GP is that, due to its Bayesian nature, all predictions have uncertainty estimates (Equation 9). For conventional neural networks, capturing the uncertainty in a model's predictions is challenging BID7. In the NNGP, every test point has an explicit estimate of prediction variance associated with it (Equation 9). In our experiments, we observe that the NNGP uncertainty estimate is highly correlated with prediction error (Figure 3). The kernels K^l(x, x') commonly approach a functionally uninteresting fixed point with depth l → ∞, in that K^∞(x, x') becomes a constant or piecewise constant map. We now briefly relate our ability to train NNGPs with the convergence of K^l(x, x') to the fixed-point kernel. We will be particularly interested in contextualizing our results in relation to BID19 and BID24. For the Tanh nonlinearity, there are two distinct phases respectively called the "ordered" phase and the "chaotic" phase that can be understood as a competition between the weights and the biases of the network. A diagram showing these phases and the boundary between them is shown in Figure 4a. In the ordered phase, the features obtained by propagating an input through each layer of the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different inputs share common bias vectors and so all inputs end up just approaching the random bias. In this case the covariance K^l(x, x') → q^* for every pair of inputs x, x', where q^* is a constant that depends only on σ_w^2 and σ_b^2. All inputs have unit correlation asymptotically with depth. By contrast, in the chaotic phase the weight variance σ_w^2 dominates and similar inputs become dissimilar with depth as they are randomly projected by the weight matrices. In this case, the covariance K^l(x, x) → q^* for x = x' but K^l(x, x') → q^* c^* for x ≠ x'. Here c^* < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form.
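The variance recursion underlying these phases is easy to iterate numerically. Below is a small sketch (our own code; the grid size, hyperparameters, and phase labels in the example are our choices) that finds the fixed point q* of the Tanh recursion and the slope χ of the correlation map at c = 1, whose value relative to 1 distinguishes the ordered (χ < 1) from the chaotic (χ > 1) phase:

import numpy as np

# Gauss-Hermite nodes/weights for expectations under a standard normal
_u, _w = np.polynomial.hermite_e.hermegauss(201)
_w = _w / np.sqrt(2 * np.pi)

def qmap(q, sw2, sb2):
    """One step of the variance recursion: q <- sw2 * E[tanh(sqrt(q) u)^2] + sb2."""
    return sw2 * np.sum(_w * np.tanh(np.sqrt(q) * _u) ** 2) + sb2

def chi(q, sw2):
    """Slope of the correlation map at c = 1: chi = sw2 * E[tanh'(sqrt(q) u)^2]."""
    return sw2 * np.sum(_w * (1 - np.tanh(np.sqrt(q) * _u) ** 2) ** 2)

for sw2, sb2 in [(1.0, 0.2), (2.5, 0.2)]:
    q = 1.0
    for _ in range(100):          # iterate to the fixed point q*
        q = qmap(q, sw2, sb2)
    phase = "ordered" if chi(q, sw2) < 1 else "chaotic"
    print(sw2, sb2, "q* ≈", round(q, 4), "chi ≈", round(chi(q, sw2), 4), phase)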
Exactly at the boundary between these two regimes is a line in (σ_w^2, σ_b^2) space. It was argued in this line of work that the approach to the fixed-point covariance fundamentally bounds whether or not neural networks can successfully be trained, and it was shown that initializing networks on this line allowed for significantly deeper neural networks to be trained. For ReLU networks a similar picture emerges, however there are some subtleties due to the unbounded nature of the nonlinearity. In this case, for all σ_w^2 and σ_b^2, K^∞(x, x') = q^* for all x, x', and every point becomes asymptotically correlated. Despite this, there are again two phases: a "bounded" phase in which q^* is finite (and nonzero) and an unbounded phase in which q^* is either infinite or zero. As in the Tanh case there are depth scales that control the rate of convergence to these fixed points and therefore limit the maximum trainable depth. The phase diagram for the ReLU nonlinearity is also shown in Figure 4b. In a striking analogy with the trainability of neural networks, we observe that the performance of the NNGP appears to closely track the structure from the phase diagram, clearly illustrated in Figure 4. Indeed, we see that for hyperparameter settings that are far from criticality, the GP is unable to train and we encounter poor test set performance. By contrast, near criticality we observe that our models display high accuracy. Moreover, we find that the accuracy appears to drop more quickly away from the phase boundary with increase in depth L of the GP kernel, K^L. To understand this effect we note that information about the data will be available to our model only through the deviation of K^L(x, x') from its fixed-point value. However, as the depth gets larger, this difference becomes increasingly small and at some point can no longer be represented due to numerical precision. At this point our test accuracy begins to quickly degrade to random chance. By harnessing the limit of infinite width, we have specified a correspondence between priors on deep neural networks and Gaussian processes whose kernel function is constructed in a compositional, but fully deterministic and differentiable, manner. Use of a GP prior on functions enables exact Bayesian inference for regression from matrix computations, and hence we are able to obtain predictions and uncertainty estimates from deep neural networks without stochastic gradient-based training. The performance is competitive with the best neural networks (within the specified class of fully-connected models) trained on the same regression task under similar hyperparameter settings. While we were able to run experiments for somewhat large datasets (sizes of 50k), we intend to look into scalability for larger learning tasks, possibly harnessing recent progress in scalable GPs (BID20; BID12). In our experiments, we observed that the performance of the optimized neural network appears to approach that of the GP computation with increasing width. Whether gradient-based stochastic optimization implements an approximate Bayesian computation is an interesting question BID16. Further investigation is needed to determine if SGD does approximately implement Bayesian inference under the conditions typically employed in practice. Additionally, the NNGP provides explicit estimates of uncertainty. This may be useful in predicting model failure in critical applications of deep learning, or for active learning tasks where it can be used to identify the best datapoints to hand label.
A DRAWS FROM AN NNGP PRIOR. FIG5 illustrates the nature of the GP prior for the ReLU nonlinearity by depicting samples of 1D functions z(x) drawn from a ReLU GP, GP(0, K^L), with fixed depth L = 10 and fixed (σ_w^2, σ_b^2). Figure 6: The angular structure of the kernel and its evolution with depth. Also illustrated is the good agreement between the kernel computed using the methods of Section 2.5 (blue, starred) and the analytic form of the kernel (red). The depth l in K^l runs from l = 0, ..., 9 (flattened curves for increasing l), with fixed (σ_w^2, σ_b^2). In the main text, we noted that the recurrence relation Equation 5 can be computed analytically for certain nonlinearities. In particular, this was computed in BID2 for polynomial rectified nonlinearities. For ReLU, the result, including the weight and bias variances, is K^l(x, x') = σ_b^2 + (σ_w^2/2π) sqrt(K^{l−1}(x, x) K^{l−1}(x', x')) (sin θ^{l−1} + (π − θ^{l−1}) cos θ^{l−1}), with θ^{l−1} = arccos(K^{l−1}(x, x')/sqrt(K^{l−1}(x, x) K^{l−1}(x', x'))) (Equation 11). To illustrate the angular form of K^l(x, x') and its evolution with l, in Figure 6 we plot K^l(θ) for the ReLU nonlinearity, where θ is the angle between x and x' with norms such that ||x||^2 = ||x'||^2 = d_in. We observe a flattening of the angular structure with increase in depth l, as predicted from the understanding in Section 3.2. Simultaneously, the figure also illustrates the good agreement between the kernel computed using the numerical implementation of Section 2.5 (blue, starred) and the analytic arccosine kernel, Equation 11 (red), for a particular choice of hyperparameters (σ_w^2, σ_b^2). In this section, we present an alternate derivation of the equivalence between infinitely wide deep neural networks and Gaussian processes by marginalization over intermediate layers. For this derivation, we take the weight and bias parameters to be drawn from independent Gaussians, with zero mean and appropriately scaled variance. We are interested in finding the distribution p(z^L|x) over network outputs z^L ∈ R^{d_out×B}, conditioned on network inputs x ∈ R^{d_in×B}, for input dimensionality d_in, output dimensionality d_out, and dataset size B. Intervening layers will have width N_l, z^l ∈ R^{N_{l+1}×B} for L > l > 0. We define the second moment matrix (here post-nonlinearity) for each layer l to be K^l_{αβ} = (1/N_l) Σ_i φ(z^{l−1}_{iα}) φ(z^{l−1}_{iβ}) for l > 0 (Equation 12), with K^0_{αβ} built from the raw inputs. Our approach is to think of intermediate random variables corresponding to these second moments defined above. By definition, K^l only depends on z^{l−1}. In turn, the pre-activations z^l are described by a Gaussian process conditioned on the second moment matrix K^l: conditioned on K^l, each row z^l_i is an independent zero-mean Gaussian with covariance matrix σ_w^2 K^l + σ_b^2. This correspondence of each layer to a GP, conditioned on the layer's second moment matrix, is exact even for finite width N_l because the parameters are drawn from a Gaussian. Altogether, this justifies the graphical model depicted in Figure 7. We will write p(z^L|x) as an integral over all the intervening second moment matrices, p(z^L|x) = ∫ p(z^L, K^0, ..., K^L|x) dK^0 ... dK^L. This joint distribution can be decomposed as p(z^L, K^0, ..., K^L|x) = p(z^L|K^L) p(K^L|K^{L−1}) ... p(K^1|K^0) p(K^0|x) (Equation 16). The directed decomposition in Equation 16 holds because each second moment matrix depends on the input only through the one below it. Figure 7: Graphical model for the neural network's computation. The sum in Equation 12 for l > 0 is a sum over i.i.d. terms. As N_l grows large, the Central Limit Theorem applies, and p(K^l|K^{l−1}) converges to a Gaussian with variance that shrinks as 1/N_l. Further, in the infinite width limit it will go to a delta function, p(K^l|K^{l−1}) → δ(K^l − F(K^{l−1})), with F(·) defined as in Equation 5. Similarly, the dependence of K^0 on x can be expressed as a delta function, p(K^0|x) = δ(K^0 − x^T x/d_in). So, in the limit of infinite width, z^L|x is described by a Gaussian process with kernel K^L obtained by applying the deterministic map F of Equation 5 recursively, L times, to K^0. We outline details of the experiments for Section 3.
For MNIST we use a 50k/10k/10k split of the training/validation/test dataset. For CIFAR-10, we used a 45k/5k/10k split. The validation set was used for choosing the best hyperparameters, and evaluation on the test set is reported. For training neural networks, hyperparameters were optimized via random search, with on average 250 trials for each choice of (n_train, depth, width, nonlinearity). Random search range: the learning rate was sampled within (10^{−4}, 0.2) in log-scale, the weight decay constant was sampled from (10^{−8}, 1.0) in log-scale, σ_w ∈ [0.01, 2.5] and σ_b ∈ [0, 1.5] were uniformly sampled, and the mini-batch size was chosen uniformly among a fixed set of candidate values. For the GP with given depth and nonlinearity, a grid of 30 points evenly spaced from 0.1 to 5.0 (for σ_w^2) and 30 points evenly spaced from 0 to 2.0 (for σ_b^2) was evaluated. Computation time: We report computation times for NNGP experiments. The grid generation took 440-460s with 6 CPUs for n_g = 501, n_v = 501, n_c = 500, which was amortized over all the experiments. For full (50k) MNIST, constructing K_{D,D} for each layer took 90-140s (depending on CPU generation) running on 64 CPUs. Solving linear equations via Cholesky decomposition took 180-220s for 1000 test points. For all the experiments we used pre-computed lookup tables F with n_g = 501, n_v = 501, n_c = 500, and s_max = 100. The default value for the target noise σ^2 was set to 10^{−10} and was increased by a factor of 10 when Cholesky decomposition failed while solving Equations 8 and 9. We refer to BID21 for a standard numerically stable implementation of GP regression. Here we include more results from the experiments described in Section 3. Uncertainty: The relationship between the target MSE and the GP's uncertainty estimate for smaller training set sizes is shown in Figure 8.
We show how to make predictions using deep networks, without training deep networks.
1,439
scitldr
Search space is a key consideration for neural architecture search. Recently, Xie et al. (2019a) found that randomly generated networks from the same distribution perform similarly, which suggests we should search for random graph distributions instead of graphs. We propose graphon as a new search space. A graphon is the limit of a Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs with different numbers of vertices can be drawn. This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary. We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet. Neural architecture search (NAS) aims to automate the discovery of neural architectures with high performance and low cost. Of primary concern to NAS is the design of the search space, which needs to balance multiple considerations. For instance, too small a space would exclude many good solutions, whereas a space that is too large would be prohibitively expensive to search through. An ideal space should have a one-to-one mapping to solutions and be sufficiently smooth in order to accelerate the search. A common technique to keep the search space manageable is to search for a small cell structure, typically containing about 10 operations with 1-2 input sources each. When needed, identical cells are stacked to form a large network. This technique allows cells found on, for instance, CIFAR-10 to work on ImageNet. Though this practice is effective, it cannot be used to optimize the overall network structure. In both manual and automatic network design, the overall network structure is commonly divided into several stages, where one stage operates on one spatial resolution and contains several near-identical layers or multi-layer structures (i.e., cells). For example, ResNet-34 contains 4 stages with 6, 8, 12, and 6 convolutional layers, respectively. DenseNet-121 contains 4 stages with 6, 12, 24, and 16 two-layer cells. AmoebaNet-A has 3 stages, within each of which 6 cells are arranged sequentially. Among cells in the same stage, most connections are sequential, with skip connections occasionally used. As an exception, DenseNet introduces connections between every pair of cells within the same stage. Here we emphasize the difference between a stage and a cell. A cell typically contains about 10 operations, each taking input from 1-2 other operations. In comparison, a stage can contain 60 or more operations organized in repeated patterns, and the connections can be arbitrary. A network usually contains only 3-4 stages but many more cells. In this paper, we focus on the network organization at the level of the stage rather than the cell. Xie et al. (2019a) recently showed that the stage structure can be sampled from probabilistic distributions of graphs, including Erdős-Rényi (ER), Watts-Strogatz (WS), and Barabási-Albert (BA), yielding high-performing networks with low in-group variance. This finding suggests the random graph distribution, rather than the exact graph, is the main causal factor behind network performance. Thus, searching for the graph is likely not as efficient as searching for the random graph distribution. Figure 1: Three adjacency matrices of graphs generated by the Barabási-Albert model with m = m_0 = 0.1n; panel (c) shows m_0 = m = 100, n = 1000. A black dot at location (i, j) denotes an edge from node i to node j. The sequence of matrices converges to its limit, the graphon, as n → ∞.
Figure 2: Graphons for common random graph models. Different shades denote different probabilities (e.g., p and 1 − p). The Erdős-Rényi model has two parameters: number of nodes n and probability p. The Watts-Strogatz (WS) model has three parameters: number of nodes n, replacement probability p, and initial neighborhood width k. Technically, the WS model has a constant number of edges, violating exchangeability for random graphs; graphs sampled from (b) converge in probability to the same number of edges as n increases. The parameter space of random graph distributions may appear to be a good search space. We propose a different search space, the space of graphons, and argue for its superiority as an NAS search space. Formally introduced in Section 3, a graphon is a measurable function defined on [0, 1]^2 → [0, 1] and a probabilistic distribution from which graphs can be drawn. Graphons are limit objects of Cauchy sequences of finite graphs under the cut distance metric. Figure 1 visualizes three adjacency matrices randomly generated by the Barabási-Albert (BA) model with increasing numbers of nodes. It is easy to see that, as the number of nodes increases, the sequence of random graphs converges to its limit, a graphon. The BA model starts with an initial seed graph with m_0 nodes and arbitrary interconnections. Here we choose a complete graph as the seed. It sequentially adds new nodes until there are n nodes in the graph. For every new node v_new, m edges are added, with the probability of adding an edge between v_new and the node v_i being proportional to the degree of v_i. In Figure 1, we let m = m_0 = 0.1n. The fact that different parameterizations result in the same adjacency matrix suggests that directly searching in the parameter space will revisit the same configuration and is less efficient than searching in the graphon space. Additionally, graphon provides a unified and more expressive space than common random graph models. Figure 2 illustrates the graphons for the WS and the ER models. We can observe that these random models only capture a small proportion of all possible graphons. The graphon space allows new possibilities such as interpolation or striped combination of different random graph models. Finally, graphon is scale-free, so we should be able to sample an arbitrary-sized stage-wise architecture with identical layers (or cells) from a graphon. This allows us to perform expensive NAS on small datasets (e.g., CIFAR-10) using low-capacity models and obtain large stage-wise graphs to build large models. By relating graphon theory to NAS, we provide theoretically motivated techniques that scale up stage-wise graphs, which are shown to be effective in practice. Our experiments aim to fairly compare the stage-wise graphs found by our method against DenseNet and the WS random graph model by keeping other network structures and other hyperparameters constant. The results indicate that the graphs found outperform the baselines consistently across a range of model capacities. The contribution of this paper revolves around building a solid connection between theory and practice. More specifically, • We propose graphon, a generalization of random graphs, as a search space for stage-wise neural architecture that consists of connections among mostly identical units. • We develop an operationalization of the theory on graphon in the representation, scaling and search of neural stage-wise graphs that perform well in fair comparisons.
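For concreteness, here is a minimal sketch of the BA generation process described above (our own code; the helper name and seed choice are ours), which reproduces adjacency matrices like those in Figure 1:

import numpy as np

def barabasi_albert_adj(n, m, seed=0):
    """Grow a BA graph: start from a complete seed graph on m0 = m nodes, then
    attach each new node to m existing nodes with probability proportional to degree."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=np.int8)
    A[:m, :m] = 1 - np.eye(m, dtype=np.int8)   # complete seed graph, no self-loops
    for v in range(m, n):
        deg = A[:v, :v].sum(axis=1)
        targets = rng.choice(v, size=m, replace=False, p=deg / deg.sum())
        A[v, targets] = A[targets, v] = 1
    return A

# Increasing n with m = 0.1 n shows the adjacency matrices converging to a graphon.
for n in [100, 1000]:
    A = barabasi_albert_adj(n, m=n // 10)
    print(n, "edge density:", A.mean())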
We review the NAS literature with a focus on the distinction between cell and stage structures. In pioneering work, a recurrent neural network served as the controller that outputs all network parameters for all layers without such distinction. Later works attempted to reduce the cost of search by constraining the search to cell structures only. Some searched for a single cell structure and performed downsampling using pooling operations. Others searched for two types of cells: a reduction cell that includes downsampling, and a normal cell that does not. Progressive approaches grew the cell structure from the simplest 1-operation cell to a maximum of 5 operations. DARTS relaxed discrete choices and enabled gradient-based optimization, and a probabilistic formulation was later imposed on DARTS. While various manual designs of stage structures have been proposed (e.g., Huang et al. 12, Larsson et al. 16), NAS for the stage-wise graph has received relatively little attention. Xie et al. (2019a) found that random graphs generated by properly parameterized Watts-Strogatz models outperform manual designs for stage-wise structures. Another line of work redefined the stage-wise graphs so that a node can take multiple inputs but output only a single channel, resulting in large stage-wise graphs with up to 2560 nodes. Others evolved the connections between multiple residual blocks for applications in video processing. As a different type of global structure, the organization of downsampling and upsampling has also been optimized. In summary, we believe that NAS for stage structures is an emerging research direction with many unexplored opportunities. Another approach for accelerating search is to share weights among different architectures. One approach built a lattice where a chain-structured network is a path from the beginning to the end. In ENAS, a controller is trained using policy gradient to select a subgraph, which is subsequently trained using cross-entropy. The process repeats, sharing network weights across training sessions and subgraphs. In the one-shot approach, the hypergraph is only trained once. After that, subgraphs are sampled from the hypergraph and evaluated. Finally, the best performing subgraph is retrained. Our focus in this paper is to validate that the mathematical theory of graphon can be effectively operationalized, and we leave weight sharing as future work. 3 ON GRAPHON. The definition of graphon is straightforward, but its relation with graphs requires some standard concepts from real analysis, which we introduce below. In machine learning, graphon has found applications in hierarchical clustering and graph classification. A metric space is a space with a distance function. More specifically, a metric space is a set M with a metric function d: M × M → R that defines the distance between any two points in M and satisfies the following properties: Identity: d(x, y) = 0 if and only if x = y; Symmetry: d(x, y) = d(y, x); Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z). As an example, the set of real numbers R with the absolute difference metric d_abs(x, y) = |x − y| is a metric space. A Cauchy sequence is defined as a sequence (x_1, x_2, ...) whose elements become infinitely close to each other as we move along the sequence. Formally, for any positive ε ∈ R^+, there exists a positive integer N such that d(x_m, x_n) < ε for all m, n > N. A complete metric space is a metric space (M, d) in which every Cauchy sequence converges to a point in M. It turns out that some familiar spaces, such as rational numbers Q under the metric d_abs, are not complete. As an example, the sequence defined by x_0 = 1, x_{n+1} = x_n/2 + 1/x_n converges to an irrational number √2. The space of real numbers R with d_abs, however, is a complete metric space.
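A quick numerical check of the example above (our own snippet): iterating the recursion inside the rationals produces a Cauchy sequence whose limit, √2, lies outside Q:

x = 1.0
for _ in range(6):
    x = x / 2 + 1 / x   # the recursion x_{n+1} = x_n/2 + 1/x_n
    print(x)            # approaches sqrt(2) ≈ 1.41421356..., which is irrational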
Given any metric space, we can form its completion, in general, by adding limit points. In the case of graphs, the limits allow explicit descriptions. We consider a weighted undirected graph G = (V, E) where every v_i ∈ V is associated with a node weight α_i and every edge e_ij ∈ E is associated with an edge weight β_ij. When necessary, we also write α_i = α_i(G) to highlight the graph the node belongs to. The weighted graph is a generalization of the simple graph, where all nodes have weight 1 and all edges have weight 1. Edges that do not exist are considered to have weight 0. Prior results show the following. Theorem 1. Every Cauchy sequence of weighted graphs in the metric δ converges to a graphon. Theorem 2. A weighted graphon is the limit of some Cauchy sequence in the metric δ. The definition of the cut distance δ relies on its discrete versions d and δ̂. To avoid unnecessary technical details, here we introduce the intuition of d and leave δ̂ and δ to Appendix C. For a partition S, T of V, the cut size is defined as cut(S, T, G) = Σ_{v_i ∈ S, v_j ∈ T} α_i α_j β_ij. When two graphs G and G' = (V', E') have the same set of nodes (i.e., V = V'), the partition S, T applies to both graphs. We can then define d(G, G') = max_{S,T} |cut(S, T, G) − cut(S, T, G')|. The seemingly counter-intuitive d is a sensible metric for random graphs. Consider two random graphs with n nodes from the same Erdős-Rényi model with edge density 1/2. As they are identically distributed, their distance should be small. However, their edit distance, or the number of edges where the two graphs differ, is likely quite large. In contrast, d is only O(1/n) with high probability, which is consistent with our intuition. For directed graphs with the potential of self-loops, define a digraphon as a 5-tuple (W_00, W_01, W_10, W_11, w). For nodes u_i and u_j, W_00(i, j) describes the probability that no edge exists between them; W_01(i, j) the probability that an edge goes from u_i to u_j; W_10(i, j) the probability that an edge goes from u_j to u_i; and W_11(i, j) the probability that two edges go both ways. w_i is the probability for a self-loop at u_i. For all i, j, W_00(i, j) + W_01(i, j) + W_10(i, j) + W_11(i, j) = 1. Since the graphon is a limit object, we must approximate it with finite means. Here we use a step function approximation. We utilize a matrix B ∈ [0, 1]^{n×n} as the adjacency matrix of a weighted graph whose edgeweights β_ij represent the mean value of the graphon over the corresponding block of the unit square. This approximation converges to the graphon when n tends to infinity (Lemma 3.2, Borgs et al. 3). In order to represent directed acyclic graphs in neural networks, we require the adjacency matrices to be upper-triangular and have a zero vector on the diagonal. This is equivalent to imposing a total ordering ≺ on the nodes and requiring i ≺ j for a directed edge to go from v_i to v_j. In the following, we propose theoretically motivated techniques for sampling stage-wise graphs of different sizes from the graphon representation and NAS for graphon. Given a finite graph G_est with edge weights β_ij ∈ [0, 1] and uniform nodeweights, we can sample a simple graph G_sample (i.e., with all edgeweights equal to 0 or 1) with n nodes by drawing every edge independently as a 0-1 variable from the Bernoulli distribution parameterized by β_ij. It has been shown that G_sample converges to G_est in probability as the number of nodes n increases. This procedure requires G_est and G_sample to have the same number of nodes. By utilizing properties of the graphon metric space, we can sample a graph with more than n nodes from the graph G_est with n nodes.
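The sampling procedure just described is one line of code in practice. A minimal sketch (names ours), using an upper-triangular edgeweight matrix B as the step-function graphon:

import numpy as np

def sample_simple_graph(B, seed=0):
    """Sample a 0-1 DAG adjacency matrix from an upper-triangular weighted
    adjacency matrix B with edgeweights in [0, 1], edge by edge (Bernoulli)."""
    rng = np.random.default_rng(seed)
    A = (rng.random(B.shape) < B).astype(np.int8)
    return np.triu(A, k=1)  # keep i < j only: acyclic, no self-loops

B = np.triu(np.full((8, 8), 0.4), k=1)  # a toy constant (ER-like) step function
print(sample_simple_graph(B))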
When the upsampling factor is an integer k, we first create a new weighted graph G_est[k] with kn nodes using the so-called k-fold blow-up procedure, and then use the above procedure to sample G_sample with kn nodes. The k-fold blow-up of a graph G_est is created by splitting every node in G_est into k new nodes, each inheriting the edgeweights of the original node (see the k-way split defined in Appendix D). To handle the case when the upsampling factor is not an integer, we propose a method called fractional blow-up. Suppose we want to create a stage-wise graph with kn + m nodes. We first perform k-fold blow-up to create a new graph with kn nodes and equal nodeweight 1/kn. After that, we shift the nodeweights such that the first m nodes have nodeweights 2/(kn + m) and the rest kn − m nodes have nodeweights 1/(kn + m). We subsequently split each of the first m nodes into 2 nodes, yielding a graph of equal nodeweights. As detailed in Appendix D, the cut distance incurred can be bounded by ((n − m)m/(n(n + m))) β_∆, where β_∆ denotes the maximum difference between any two edge weights. Since β_ij ∈ [0, 1], β_∆ ≤ 1. From the new graph, we can then sample a simple graph with kn + m nodes. It is worth noting that the proposed upscaling methods differ from conventional upsampling techniques like linear or bilinear interpolation. In Appendix D.3, we show that, under moderate conditions, the k-fold blow-up graph is strictly closer to the original graph than a graph created by interpolation. We now introduce the search algorithm. When optimizing discrete objects with gradient descent, the Gumbel softmax has been widely used. Given a multinomial distribution with probabilities π_0, ..., π_K, we independently draw Gumbel noise: for every k, γ_k ∼ Gumbel(0, 1). The Gumbel softmax is defined as y_k = exp((log π_k + γ_k)/τ) / Σ_j exp((log π_j + γ_j)/τ), with temperature τ. The Gumbel softmax can be understood as pitting the choices from 1 to K against each other, while the random perturbation from γ enables exploration. Our search algorithm optimizes the input connections to each node in the stage-wise graph. Empirically, we find it important to let different cells in the same stage learn to collaborate during the search. Figure 3: The intuition on the search for the optimal graphon (legend: a node on the 0-1 lattice; a state on the lattice visited by SGD; the estimated graphon). When searching in the space of adjacency matrices, we can only move on the 0-1 lattice. The optimal graphon, denoted by the filled black dot, is estimated by taking the average of the states visited by SGD. Therefore, we devise an algorithm that pits different input combinations against each other. Specifically, the node v_i may take input from nodes v_1, ..., v_{i−1}, from which we sample K subsets. For example, for the seventh node v_7, we could sample {v_1, v_3, v_5} or {v_5, v_6} and so on. We assign a structural parameter π_k to every input subset. Finding the adjacency matrix amounts to finding the best input subset for every node. We also assign separate model parameters to the same node in different input subsets. This allows all nodes in the same subset to learn to collaborate, which would be difficult if the parameters were shared across input subsets. The outputs of nodes in the same subset are aggregated using concatenation with zero padding for aligning the dimensions. During the search, the input to node v_i is computed as the convex combination of the outputs from all K subsets, in(v_i) = Σ_k y_k out(k).
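The Gumbel softmax above is a few lines of code. A minimal, numerically stabilized sketch (our own code), followed by the convex combination that feeds node v_i:

import numpy as np

def gumbel_softmax(pi, tau, rng=None):
    """y_k proportional to exp((log pi_k + gamma_k) / tau), gamma_k ~ Gumbel(0, 1)."""
    rng = rng or np.random.default_rng()
    gamma = -np.log(-np.log(rng.random(len(pi))))   # standard Gumbel samples
    logits = (np.log(pi) + gamma) / tau
    logits -= logits.max()                          # numerical stability
    y = np.exp(logits)
    return y / y.sum()

# Convex combination of the K aggregated subset outputs feeding node v_i:
outs = [np.random.randn(16) for _ in range(3)]      # placeholder out(k), K = 3
y = gumbel_softmax(np.array([0.2, 0.5, 0.3]), tau=1.0)
in_v = sum(w * o for w, o in zip(y, outs))
print(y, in_v.shape)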
After the search completes, we pick the input subset with the highest π_k to form the final adjacency matrix. However, the goal of the search is to find a probabilistic distribution of random graphs, not a single graph. As indicated by prior work, the last phase of SGD may be considered as a Markov chain that draws samples from a stationary distribution. Figure 3 illustrates this intuition. Thus, we compute the graphon as the average of the adjacency matrices found by the last phase of the search. DenseNet is a classical example where the stage-wise structure plays a major role in improving network performance. Therefore, in the experiments, we focus on improving and comparing with DenseNet. In order to create fair comparisons, we focus the search on the stage-wise structure and strictly follow DenseNet for the global network layout and the cell structure (see details in Appendix A.1). Following prior work, we start the search from the graphon that corresponds to the WS distribution with k/n = 0.4 and p = 0.75, which we denote as WS-G(0.4, 0.75). We limit the search to input subsets that differ by 1 edge from the starting point. We perform the search on CIFAR-10 for 300 epochs with 8 cells in every stage. The search completes within 24 hours on 1 GPU. We create four groups of comparison for the four DenseNet variants: DenseNet-121, DenseNet-169, DenseNet-201, and DenseNet-264, in increasing order of model capacity. We use the proposed scaling method (Section 4.1) to scale up the graphs in the four stages to roughly match the corresponding DenseNet variant. The largest stage-wise graph, containing 64 nodes, is used in the DenseNet-264 group. We also adjust the growth rate c so that the numbers of parameters of all models are as close to DenseNet as possible to enable fair comparisons. However, as the number of parameters depends on the stage-wise connections, which are randomly drawn from the same distribution, we do not have precise control over the number of parameters. We strictly follow the standard hyperparameters and data augmentation techniques used by DenseNet. The networks are trained for 90 epochs on ImageNet; results from 6 independent training sessions are averaged. For a detailed list of hyperparameters and data augmentation, see Appendix A.2. We introduce two other baseline models besides DenseNet. In the first model, the stage-wise graphs are generated by randomly deleting edges from the fully connected graph of DenseNet. In the second model, we use random graphs generated from WS-G(0.4, 0.75), which are similar to the best random architecture in prior work and shown to be competitive with Amoeba, PNAS and DARTS on an equal-parameter basis. For evaluation, we report the average performance on the ILSVRC-2012 validation set and the new ImageNet V2 test set. For ILSVRC-2012 validation, we report the performance directly after epoch 90. For ImageNet V2, we select the best performing model on the ILSVRC-2012 validation set from the 90 epochs and test it on ImageNet V2. In order to mitigate the effects of random variance, we sample 6 graphs from every graphon and report the average accuracy and standard deviation, as well as the number of parameters, in Table 1. Across all groups of comparison and model parameter settings, the graphon found by our algorithm consistently achieves the highest accuracy despite having the fewest parameters. On ImageNet validation, the top-1 performance gap between our method and DenseNet is 0.4%, except for 0.1% for DenseNet-121. On ImageNet V2, the performance gap is up to 0.8%.
The WS-G baseline in most comparisons is stronger than DenseNet but weaker than the proposed technique. We attribute the performance differences to the stage-wise graphs, since we have strictly applied the same settings, including the global network structure, the cell structure, and the hyperparameters. The first conclusion we draw is the effectiveness of the theoretically motivated scaling technique for graphons. We scaled up the 11-node graph found by the search to graphs with up to 64 nodes in the experiments. We also scaled the WS(4, 0.25) network, initially defined for 32 nodes in prior work, to 64 nodes in the DenseNet-264 group. The experiments show that after scaling, the relative rankings of these methods are maintained, suggesting that the proposed scaling technique incurs no performance loss. Second, we observe that the standard deviations for most methods are low, even though they edge a bit higher for ImageNet V2, where model selection has been carried out. This is consistent with earlier findings and reaffirms that searching for random graphs is a valid approach for NAS. Finally, we emphasize that these results are reported for the purpose of fair comparison and not for showcasing the best possible performance. Our goal is to show that the graphon space and the associated cut distance metric provide a feasible approach for NAS, and the empirical evidence supports our argument. The design of the search space is of paramount importance for neural architecture search. Recent work suggests that searching for random graph distributions is an effective strategy for the organization of layers within one stage. Inspired by mathematical theories on graph limits, we propose a new search space based on graphons, which are the limits of Cauchy sequences of graphs under the cut distance metric. The contribution of this paper is the operationalization of the graphon theory as practical NAS solutions. First, we intuitively explain why graphon is a superior search space compared to the parameter space of random graph models such as the Erdős-Rényi model. Furthermore, we propose a technique for scaling up random graphs found by NAS to arbitrary size and present a theoretical analysis under the cut distance metric associated with graphon. Finally, we describe an operational algorithm that finds stage-wise graphs that outperform the manually designed DenseNet as well as randomly wired architectures from prior work. Although we find neural architectures with good performance, we remind the reader that absolute performance is not the goal of this paper. Future work involves expanding the work to different operators in the same stage graph. This can be achieved, for example, in the same manner that the digraphon accommodates different types of connections. We contend that the results achieved in this paper should not be considered an upper bound, but only the beginning, of what can be achieved. We believe this work opens the door toward advanced NAS algorithms in the space of graphon and the cut distance metric. A. The DenseNet network contains a stem network before the first stage, which contains a 3 × 3 convolution, batch normalization, ReLU and max-pooling. This is followed by three stages for CIFAR-10 and four stages for ImageNet. Between every two stages, there is a transition block containing a 1 × 1 convolution for channel reduction and a 2 × 2 average pool with stride 2 for downsampling. The network ends with a 7 × 7 global average pooling and a linear layer before a softmax.
Figure 4 shows the cell structure for DenseNet, which contains two convolutions with different kernel sizes: 1 × 1 and 3 × 3. Each of the two convolutions is immediately preceded by a batch normalization and a ReLU. Every cell in the same stage outputs c channels. The input to the n-th cell is the concatenation of outputs from cell 1 to cell n − 1, for a total of c(n − 1) channels. As every cell increments the number of input channels by c, c is called the growth rate. Some detailed hyperparameters in our experiments are as follows. During the architecture search stage, we train for 300 epochs. The learning rate schedule follows DenseNet, i.e., the initial learning rate is set to 0.1 and is divided by 10 at epochs 150 and 225. Momentum is set at 0.9. Our batch size is 64 and the growth rate in the DenseNet framework is set to 32. For the Gumbel softmax, the initial temperature is set at 1.0 and the minimum temperature is set at 0.1 with an anneal rate of 0.03. For ImageNet training, we train for 90 epochs. We use label smoothing, which assigns probability 0.1 to all labels other than the ground truth. We use a batch size of 256 = 64 per GPU × 4 GPUs or 260 = 52 per GPU × 5 GPUs, depending on the GPU memory size. For the 4 stages, growth rates are set at 26, 26, 26, 32 to match the number of parameters of DenseNet-121. The learning rate is set at 0.1 and divided by 10 at epochs 30, 60, and 80. We use Nesterov momentum of 0.9 and weight decay of 0.00005. We also adopt the following data augmentation for ImageNet training, executed sequentially: Inception-style random cropping and aspect ratio distortion, resizing the image to 224 × 224, color jitter with factors for brightness, contrast and saturation all set to 0.4, horizontal flip with probability 0.5, and AlexNet-style random changes to the brightness of the RGB channels. B THE SEARCH ALGORITHM. Algorithm 1 shows the detailed procedures. Specifically, for every node v on the graph, we sample K subsets of nodes that could provide input to v, while avoiding cycles (lines 5-7). For example, for the seventh node v_7, we could sample {v_1, v_3, v_5} or {v_4, v_6} and so on. We assign one weight parameter π_k to every input subset (line 8). The model parameters for node u are denoted by θ_u. Every sampled input edge subset U(v, k) employs the same neural network operations in u, but with different parameters θ_{u,k} (lines 9-11). In particular, the input to node v is the convex combination of the outputs of the K subsets, computed in the forward procedure. Algorithm 1 (initialization). Input: totally ordered nodes V, number of subsets K. 3: for each node v ∈ V do 5: I(v) ← {u | u ≺ v, u ∈ V} (all nodes preceding v, to avoid cycles) 6: α_v ← a random vector in R^K (weights for the subsets) 9: for each input subset U(v, k) ∈ I_s(v) do 10: for each node u ∈ U(v, k) do 11: INITWEIGHT(θ_{u,k}) (create a set of weights θ_{u,k} for u). Algorithm 1 (forward pass). Input: nodes V, input to the stage x, temperature τ, initialized parameters. Output: a feature map extracted by the current stage. 15: in(v_in) ← x (v_in is a dummy node sending outputs to all nodes) 16: for each node v ∈ V do 17: for each node u ∈ U(v, k) do 19: apply the operation of u to its input 20: out(k) ← AGGREGATE(out(u, k), ∀u) (sum or concatenation) 21: sample γ ∼ Gumbel 22: perform the Gumbel softmax 23: (v_out is a dummy node whose input is the stage's output).
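The subset-sampling step of Algorithm 1 (lines 5-7) is easy to sketch. The following is our own illustrative code (function name, subset-size cap, and seed are assumptions, not the paper's): for every node, it draws K distinct candidate input subsets from the node's predecessors, which keeps the resulting graph acyclic by construction:

import numpy as np

def sample_input_subsets(n_nodes, K, max_size=3, seed=0):
    """Lines 5-7 of Algorithm 1: for every node v, sample K candidate subsets
    of its predecessors {u : u < v} under a total ordering (so no cycles)."""
    rng = np.random.default_rng(seed)
    subsets = {}
    for v in range(1, n_nodes):
        preds = np.arange(v)                      # I(v): nodes preceding v
        cands = set()
        while len(cands) < min(K, 2 ** v - 1):    # stop early if few subsets exist
            size = rng.integers(1, min(max_size, v) + 1)
            cands.add(tuple(sorted(rng.choice(preds, size=size, replace=False))))
        subsets[v] = [list(c) for c in cands]
    return subsets

print(sample_input_subsets(n_nodes=5, K=3))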
During the forward computation, outputs from nodes in the same input subset are aggregated by either summation or concatenation (line 20). Finally, different input subsets are forced to compete via the Gumbel softmax (lines 21-23). We pick the input edge subset with the highest π as the winning adjacency matrix. For a partition S, T of V, the cut size is defined as cut(S, T, G) = Σ_{v_i ∈ S, v_j ∈ T} α_i α_j β_ij. When two graphs G and G' = (V', E') have the same set of nodes (i.e., V = V'), the partition S, T applies to both graphs. We can then define d(G, G') = max_{S,T} |cut(S, T, G) − cut(S, T, G')|. When the graphs have the same number of nodes, but the correspondence between nodes is unknown, the distance δ̂ is defined as the minimum of d over all isomorphisms G̃ ≅ G'. In general, the two graphs do not have the same number of nodes. Thus, in the most general metric δ, we allow the correspondence between nodes to be fractional. We define the "overlay" matrix L ∈ R^{|V|×|V'|}. The entry L_ip denotes the fractional mapping from u_i ∈ V to u_p ∈ V', subject to marginal constraints tying L to the nodeweights of the two graphs; the edgeweight carried by the overlaid pair (u_p, u_q) is β_pq. The cut distance δ takes the minimum over all possible overlay matrices. The fractional overlay is applicable even when the two graphs have the same node count and can lead to a lower distance than integer overlay. In fractional upsampling, we start with a graph G with n nodes and generate a new graph with n + m nodes (0 < m < n). To achieve this, we perform two operations sequentially. First, we shift the node weights such that m nodes have weight 2/(n + m) and the rest n − m nodes have weight 1/(n + m). Next, in what we call a partial blow-up, we split each of the m nodes into two nodes. After the upsampling operation, all nodes have equal weight 1/(n + m). The requirement for equal weight is needed as the nodeweights are the probabilities of every node being sampled. Here, the k-way split of node v_i is defined as replacing v_i with k new nodes v_{i1}, ..., v_{ik}, each having node weight α_i/k. For any other node v_j, the edge (v_{ik}, v_j) has the same weight as (v_i, v_j), and the same applies to (v_j, v_{ik}). In the following, we analyze the cut distance between the new graph and the original graph and show its upper bound is ((n − m)m/(n(n + m))) β_∆, where β_∆ denotes the maximum difference between any two edgeweights. Theorem 3. Let G = (V, E) and G' = (V', E') be two weighted graphs that have the same set of n nodes with different nodeweights and the same set of edges with the same edgeweights. If every node in G has nodeweight 1/n, whereas G' has m nodes (0 < m < n) with nodeweight 2/(n + m) and n − m nodes with weight 1/(n + m), then δ(G, G') ≤ ((n − m)m/(n(n + m))) β_∆, where β_∆ denotes the maximum difference between any two edge weights. Proof. We create an overlay matrix L with the diagonal terms L_ii = min(α_i(G), α_i(G')). For the first m diagonal terms, L_ii = 1/n, and for the rest, the diagonal terms are 1/(n + m). Then, whenever i = p and j = q, β_ij(G) = β_pq(G'), so the cut sizes computed through the diagonal of L agree, and the difference in cut sizes is controlled by the off-diagonal mass of L; this yields the claimed bound. Note that G[k] and G[itpl(k)] have equal numbers of nodes. By definition, the cut distance for a partition S, T of the node set V can be evaluated directly. Now we construct the partition S, T such that the first km nodes belong to S and the rest kn − km nodes belong to T. Since the adjacency matrix of G is upper triangular and non-zero, without loss of generality we pick m such that Σ_{i=1}^{m} β_im(G) ≠ Σ_{i=m+1}^{n} β_im(G). Since all entries below the diagonal are zero, that inequality is satisfied as long as the column sum at index m is non-zero. Hence, we let S = {v_1, ..., v_{km}} and T = {v_{km+1}, ..., v_{kn}}. Evaluating the cut distance under this partition S, T, we have proven the desired proposition.
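For intuition, the cut size and the distance d above can be evaluated by brute force on small graphs that share a node set. This is our own sketch; enumerating all 2^n partitions is only feasible for tiny n, and a practical estimate of d would need a heuristic:

import itertools
import numpy as np

def cut_size(S, alpha, beta):
    """cut(S, T, G) = sum over v_i in S, v_j in T of alpha_i * alpha_j * beta_ij."""
    T = [j for j in range(len(alpha)) if j not in S]
    return sum(alpha[i] * alpha[j] * beta[i][j] for i in S for j in T)

def cut_distance_same_nodes(betaG, betaGp, alpha):
    """d(G, G') = max over partitions (S, T) of |cut(S,T,G) - cut(S,T,G')|."""
    n = len(alpha)
    best = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            best = max(best, abs(cut_size(S, alpha, betaG) - cut_size(S, alpha, betaGp)))
    return best

n = 8
alpha = [1.0 / n] * n                 # uniform nodeweights
G, Gp = np.random.rand(n, n), np.random.rand(n, n)
print(cut_distance_same_nodes(G, Gp, alpha))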
Theorem 5 shows that the k-fold blow-up method is a better approximation of the original graph, in terms of the cut distance δ, than 1D linear interpolation. But the exact k-fold blow-up is only applicable when k is an integer. If a graph of size n + m (0 < m < n) is desired, we need to resort to the fractional blow-up method, which has been analyzed in Theorems 3 and 4. We show that when m is 1 or n − 1, this partial blow-up operation does not cause δ to change by more than O(β_Δ/n). However, when m is n/2, δ between the original graph and the new graph could be up to β_Δ/6. This suggests that fractional upsampling results in a graph that is similar to the original when only a small number of nodes (relative to n) is added.
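Below is a sketch of the fractional blow-up as a matrix operation, under our reading of the two steps above. How to weight the edge between the two copies of a split node is not fixed by the text; copying the parent's self-weight β_ii is an assumption made here.

```python
import numpy as np

def fractional_blow_up(beta, m):
    """Grow an n-node weighted graph to n + m nodes (0 < m < n) by splitting the
    first m nodes in two; edgeweights are copied from the parent nodes, and
    nodeweights end up uniform at 1/(n + m)."""
    n = beta.shape[0]
    assert 0 < m < n
    # index map: node i < m splits into new nodes i and n + i; others keep index i
    src = list(range(n)) + list(range(m))          # length n + m
    new_beta = beta[np.ix_(src, src)]              # copy edgeweights from parents
    new_alpha = np.full(n + m, 1.0 / (n + m))      # equal nodeweights after the split
    return new_beta, new_alpha
```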
Graphon is a good search space for neural architecture search and empirically produces good networks.
1,440
scitldr
Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks. Despite its success as a defense mechanism, adversarial training fails to generalize well to the unperturbed test set. We hypothesize that this poor generalization is a consequence of adversarial training with a uniform perturbation radius around every training sample. Samples close to the decision boundary can be morphed into a different class under a small perturbation budget, and enforcing large margins around these samples produces poor decision boundaries that generalize poorly. Motivated by this hypothesis, we propose instance adaptive adversarial training -- a technique that enforces sample-specific perturbation margins around every training sample. We show that using our approach, test accuracy on unperturbed samples improves with a marginal drop in robustness. Extensive experiments on CIFAR-10, CIFAR-100 and Imagenet datasets demonstrate the effectiveness of our proposed approach. A key challenge when deploying neural networks in safety-critical applications is their poor stability to input perturbations. Extremely tiny perturbations to network inputs may be imperceptible to the human eye, and yet cause major changes to outputs. One of the most effective and widely used methods for hardening networks to small perturbations is "adversarial training", in which a network is trained using adversarially perturbed samples with a fixed perturbation size. By doing so, adversarial training typically tries to enforce that the output of a neural network remains nearly constant within an ℓ_p ball of every training input. Despite its ability to increase robustness, adversarial training suffers from poor accuracy on clean (natural) test inputs. The drop in clean accuracy can be as high as 10% on CIFAR-10, and 15% on Imagenet, making robust models undesirable in some industrial settings. The consistently poor performance of robust models on clean data has led to the line of thought that there may be a fundamental trade-off between robustness and accuracy, and recent theoretical results characterized this tradeoff. In this work, we aim to understand and optimize the tradeoff between robustness and clean accuracy. More concretely, our objective is to improve the clean accuracy of adversarial training for a chosen level of adversarial robustness. Our method is inspired by the observation that the constraints enforced by adversarial training are infeasible; for commonly used values of ε, it is not possible to achieve label consistency within an ε-ball of each input image because the balls around images of different classes overlap. This is illustrated on the left of Figure 1, which shows that the ε-ball around a "bird" (from the CIFAR-10 training set) contains images of class "deer" (that do not appear in the training set). If adversarial training were successful at enforcing label stability in an ε = 8 ball around the "bird" training image, doing so would come at the unavoidable cost of misclassifying the nearby "deer" images that come along at test time. At the same time, when training images lie far from the decision boundary (e.g., the deer image on the right in Fig. 1), it is possible to enforce stability with large ε with no compromise in clean accuracy. When adversarial training on CIFAR-10, we see that ε = 8 is too large for some images, causing accuracy loss, while being unnecessarily small for others, leading to sub-optimal robustness.
Figure 1: Overview of instance adaptive adversarial training. Samples close to the decision boundary (bird on the left) have nearby samples from a different class (deer) within a small ℓ_p ball, making the constraints imposed by PGD-8 / PGD-16 adversarial training infeasible. Samples far from the decision boundary (deer on the right) can withstand large perturbations well beyond ε = 8. Our adaptive adversarial training correctly assigns the perturbation radius (shown as a dotted line) so that samples within each ℓ_p ball maintain the same class. The above observation naturally motivates adversarial training with instance adaptive perturbation radii that are customized to each training image. By choosing larger robustness radii at locations where class manifolds are far apart, and smaller radii at locations where class manifolds are close together, we get high adversarial robustness where possible while minimizing the clean accuracy loss that comes from enforcing overly-stringent constraints on images that lie near class boundaries. As a result, instance adaptive training significantly improves the tradeoff between accuracy and robustness, breaking through the pareto frontier achieved by standard adversarial training. Additionally, we show that the learned instance-specific perturbation radii are interpretable; samples with small radii are often ambiguous and have nearby images of another class, while images with large radii have unambiguous class labels that are difficult to manipulate. Parallel to our work, we found concurrent work that uses adaptive margins in a max-margin framework for adversarial training. That work focuses on improving adversarial robustness, which differs from our goal of understanding and improving the robustness-accuracy tradeoff. Moreover, our algorithm for choosing adaptive margins significantly differs from theirs. Adversarial attacks are data items containing small perturbations that cause misclassification in neural network classifiers. Popular methods for crafting attacks include the fast gradient sign method (FGSM), which is a one-step gradient attack, projected gradient descent (PGD), which is a multi-step extension of FGSM, the C/W attack, DeepFool, and many more. All these methods use the gradient of the loss function with respect to inputs to construct additive perturbations with a norm constraint. Alternative attack metrics include spatial transformer attacks, attacks based on Wasserstein distance in pixel space, etc. Defending against adversarial attacks is a crucial problem in machine learning. Many early defenses were broken by strong attacks. Fortunately, adversarial training is one defense strategy that remains fairly resistant to most existing attacks. Let {(x_i, y_i)}_{i=1}^N denote the set of training samples in the input dataset. In this paper, we focus on classification problems; hence, y_i ∈ {1, 2, ..., N_c}, where N_c denotes the number of classes. Let f_θ(x): R^{c×m×n} → R^{N_c} denote a neural network model parameterized by θ. Classifiers are often trained by minimizing the cross entropy loss given by ℓ(θ) = −(1/N) Σ_{i=1}^N ỹ_i^⊤ log f_θ(x_i), where ỹ_i is the one-hot vector corresponding to the label y_i. In adversarial training, instead of optimizing the neural network over the clean training set, we use the adversarially perturbed training set. Mathematically, this can be written as the following min-max problem: min_θ Σ_i max_{||δ_i||_∞ ≤ ε} ℓ(f_θ(x_i + δ_i), y_i). This problem is solved by an alternating stochastic method that takes minimization steps for θ, followed by maximization steps that approximately solve the inner problem using k steps of PGD.
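A minimal PyTorch sketch of this alternating scheme, assuming an ℓ∞ threat model: pgd_attack approximates the inner maximization with k signed-gradient steps from a random start, and adv_train_step takes one outer minimization step. The step size and the omission of clipping to the valid input range are simplifications, not the cited implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step_size, k):
    """k-step ell_inf PGD from a random start: approximates the inner maximization."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(k):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def adv_train_step(model, optimizer, x, y, eps=8/255, step_size=2/255, k=10):
    """One outer minimization step on the adversarially perturbed batch."""
    delta = pgd_attack(model, x, y, eps, step_size, k)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```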
For more details, refer to . Algorithm 2 (excerpt): 8: S+ = {i | f(x_i) is correctly classified as y_i}; S− = {i | f(x_i) is incorrectly classified as y_i}. To remedy the shortcomings of a uniform perturbation radius in adversarial training (Section 1), we propose Instance Adaptive Adversarial Training (IAAT), which solves the following optimization: min_θ Σ_i max_{||δ_i|| ≤ ε_i} ℓ(f_θ(x_i + δ_i), y_i). Like vanilla adversarial training, we solve this by sampling mini-batches of images {x_i}, crafting adversarial perturbations {δ_i} of size at most {ε_i}, and then updating the network model using the perturbed images. The proposed algorithm is distinctive in that it uses a different ε_i for each image x_i. Ideally, we would choose each ε_i to be as large as possible without finding images of a different class within the ε_i-ball around x_i. Since we have no a-priori knowledge of what this radius is, we use a simple heuristic to update ε_i after each epoch. After crafting a perturbation for x_i, we check if the perturbed image was a successful adversarial example. If PGD succeeded in finding an image with a different class label, then ε_i is too big, so we replace ε_i ← ε_i − γ. If PGD failed, then we set ε_i ← ε_i + γ. Since the network is randomly initialized at the start of training, random predictions are made, and this causes {ε_i} to shrink rapidly. For this reason, we begin with a warmup period of a few epochs (usually 10 for CIFAR-10/100) where adversarial training is performed using a uniform ε (e.g., setting ε_i = 2) for every sample. After the warmup period ends, we perform instance adaptive adversarial training. A detailed training algorithm is provided in Alg. 1. To evaluate the robustness and generalization of our models, we report the following metrics: test accuracy of unperturbed (natural) test samples, adversarial accuracy of white-box PGD attacks, adversarial accuracy of transfer attacks, and accuracy of test samples under common image corruptions. Following the protocol introduced in , we do not train our models on any image corruptions. On CIFAR-10 and CIFAR-100 datasets, we perform experiments on Resnet-18 and WideResnet-32-10 models following . All models are trained on PGD-10 attacks, i.e., 10 steps of PGD iterations are used for crafting adversarial attacks during training. In the white-box setting, models are evaluated on: PGD-10 attacks with 5 random restarts, PGD-100 attacks with 5 random restarts, and PGD-1000 attacks with 2 random restarts. For transfer attacks, an independent copy of the model is trained using the same training algorithm and hyper-parameter settings, and PGD-1000 adversarial attacks with 2 random restarts are crafted on the surrogate model. For image corruptions, following , we report average classification accuracy on 19 image corruptions. Beating the robustness-accuracy tradeoff: In adversarial training, the perturbation radius ε is a hyper-parameter. Training models with varying ε produces a robustness-accuracy tradeoff curve: models with small training ε achieve better natural accuracy and poor adversarial robustness, while models trained on large ε have improved robustness and poor natural accuracy. To generate this tradeoff, we perform adversarial training with ε in the range {1, 2, ..., 8}. Instance adaptive adversarial training is then compared with respect to this tradeoff curve in Fig. 3a, 3b. Two versions of IAAT are reported: with and without a warmup phase. In both versions, we clearly achieve an improvement over the accuracy-robustness tradeoff.
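For concreteness, the per-epoch radius update described above can be sketched as follows. This is not Alg. 1 verbatim: the attack callable, the tensor eps indexed by sample id, and a loader that yields sample indices are assumptions made for illustration.

```python
import torch

def update_epsilons(model, attack, eps, loader, gamma, eps_min=0.0):
    """Post-epoch update of per-sample radii (a sketch of the heuristic, not Alg. 1).
    eps is a 1-D tensor indexed by sample id; attack(model, x, y, eps_batch) returns delta."""
    for x, y, idx in loader:                      # loader must yield sample indices
        delta = attack(model, x, y, eps[idx])     # craft per-sample perturbations
        with torch.no_grad():
            pred = model(x + delta).argmax(dim=1)
        succeeded = pred != y                     # attack flipped the label
        eps[idx[succeeded]] -= gamma              # shrink where the ball is too big
        eps[idx[~succeeded]] += gamma             # grow where there is room
    eps.clamp_(min=eps_min)                       # optional lower bound
    return eps
```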
Use of the warmup phase helps retain robustness, with a drop in natural accuracy compared to its no-warmup counterpart. Clean accuracy improves for a fixed level of robustness: On CIFAR-10, as shown in Table. 1, we observe that our instance adaptive adversarial training algorithm achieves similar adversarial robustness as the adversarial training baseline. However, the accuracy on clean test samples increases by 4.06% for Resnet-18 and 4.49% for WideResnet-32-10. We also observe that the adaptive training algorithm improves robustness to unseen image corruptions. This points to an improvement in the overall generalization ability of the network. On CIFAR-100 (Table. 2), the performance gain in natural test accuracy further increases: 8.79% for Resnet-18, and 9.22% for Wideresnet-32-10. The drop in adversarial robustness is marginal. Maintaining performance over a range of test ε: Next, we plot adversarial robustness over a sweep of ε values used to craft attacks at test time. Fig. 4a, 4b show that an adversarial training baseline with ε = 8 performs well at high-ε regimes and poorly at low-ε regimes. On the other hand, adversarial training with ε = 2 has the reverse effect, performing well at low ε and poorly at high-ε regimes. Our instance adaptive training algorithm maintains good performance over all regimes, achieving slightly lower performance than the ε = 2 model for small test ε, and dominating all models for larger test ε. Interpretability of ε: We find that the values of ε_i chosen by our adaptive algorithm correlate well with our own human concept of class ambiguity. Figure 2 (and Figure 6 in Appendix B) shows that a sampling of images that receive small ε_i contains many ambiguous images, and these images are perturbed into a (visually) different class using ε = 16. In contrast, images that receive a large ε_i have a visually definite class, and are not substantially altered by an ε = 16 perturbation. Robustness to other attacks: While our instance adaptive algorithm is trained on PGD attacks, we are interested to see if the trained model improves robustness to other adversarial attacks. As shown in Table. 3, IAAT achieves a similar level of robustness as adversarial training on other gradient-based attacks, while improving the natural accuracy. Following prior work, we attack Imagenet models using random targeted attacks instead of untargeted attacks as done in previous experiments. During training, adversarial attacks are generated using 30 steps of PGD. As a baseline, we use adversarial training with a fixed ε of 16/255. This is the setting used in prior work. Adversarial training on Imagenet is computationally intensive. To make training practical, we use distributed training with synchronized SGD on 64/128 GPUs. More implementation details can be found in Appendix E. At test time, we evaluate the models on clean test samples and on white-box adversarial attacks with ε ∈ {4, 8, 12, 16}. PGD-1000 attacks are used. Additionally, we also report the normalized mean corruption error (mCE), an evaluation metric introduced to test the robustness of neural networks to image corruptions. This metric reports the mean classification error of different image corruptions averaged over varying levels of degradation. Note that while accuracies are reported for natural and adversarial robustness, mCE reports classification errors, so lower numbers are better. Our experimental results are reported in Table. 4. We observe a huge drop in natural accuracy for adversarial training (25%, 22% and 20% drop for Resnet-50, 101 and 152 respectively).
Adaptive adversarial training significantly improves the natural accuracy: we obtain a consistent performance gain of 10+% on all three models over the adversarial training baseline. On whitebox attacks, IAAT outperforms the adversarial training baseline in low-ε regimes; however, a drop of 13% is observed at high ε (ε = 16). On the corruption dataset, our model consistently outperforms adversarial training. Recall from Section 3 that during warmup, adversarial training is performed with uniform norm-bound constraints. Once the warmup phase ends, we switch to instance adaptive training. From Tables 5 and 6, we observe that when warmup is used, adversarial robustness improves with a small drop in natural accuracy, with more improvements observed on CIFAR-100. However, as shown in Fig. 3a and 3b, both these settings improve the accuracy-robustness tradeoff. We are interested in estimating an instance-specific perturbation radius ε_i such that predictions are consistent within the chosen ε_i-ball. To obtain an exact estimate of such an ε_i, we can perform a line search as follows: Given a discretization η and a maximum perturbation radius ε_max, generate PGD attacks with radii {iη}. Choose the desired ε_i as the maximum iη for which the prediction remains consistent with the ground-truth label. We compare the performance of exact line search with that of IAAT in Table 7. We observe that exact line search marginally improves over IAAT. However, exact line search is computationally expensive as it requires performing ε_max/η additional PGD computations, whereas IAAT requires only 2. In this work, we focus on improving the robustness-accuracy tradeoff in adversarial training. We first show that realizable robustness is a sample-specific attribute: samples close to the decision boundary can only achieve robustness within a small ball, as they contain samples from a different class beyond this radius. On the other hand, samples far from the decision boundary can be robust within a relatively large perturbation radius. Motivated by this observation, we develop instance adaptive adversarial training, in which label consistency constraints are imposed within sample-specific perturbation radii, which are in turn estimated. Our proposed algorithm has empirically been shown to improve the robustness-accuracy tradeoff on the CIFAR-10, CIFAR-100 and Imagenet datasets. A recent paper that addresses the problem of improving natural accuracy in adversarial training is mixup adversarial training, where adversarially trained models are optimized using a mixup loss instead of the standard cross-entropy loss. In that paper, natural accuracy was shown to improve with no drop in adversarial robustness. However, the robustness experiments were not evaluated on strong attacks (experiments were reported only on PGD-20). We compare our implementation of mixup adversarial training with IAAT on stronger attacks in Table. 8. We observe that while natural accuracy improves for mixup, the drop in adversarial accuracy is much higher than for IAAT. A visualization of samples from the CIFAR-10 dataset with the corresponding ε value assigned by IAAT is shown in Figure. 5. We observe that samples to which low ε's are assigned are visually confusing (e.g., top row of Figure. 5), while samples with high ε distinctively belong to one class. In addition, we also show more visualizations of samples near the decision boundary which contain samples from a different class within a fixed ℓ_∞ ball in Figure. 6.
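Returning to the exact line search described above, here is a single-sample sketch. It assumes attack(model, x, y, eps) runs a PGD attack with radius eps and that x holds one example; stopping at the first radius that flips the prediction is an implementation choice, since larger radii only make the attacker stronger.

```python
def line_search_eps(model, attack, x, y, eps_max, eta):
    """Exact line search for the per-sample radius: the largest i*eta for which
    a PGD attack of that radius still fails to change the prediction."""
    best = 0.0
    steps = int(eps_max / eta)
    for i in range(1, steps + 1):
        eps = i * eta
        delta = attack(model, x, y, eps)                  # PGD with radius eps
        consistent = (model(x + delta).argmax(dim=1) == y).all()
        if consistent:
            best = eps                                    # prediction still correct
        else:
            break                                         # first radius that flips it
    return best
```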
The infeasibility of label consistency constraints within the commonly used ℓ_∞ perturbation radius of ε = 8 is apparent in this visualization. Our algorithm effectively chooses an appropriate ε that retains label information within the chosen radius. Next, we visualize the evolution of ε over epochs in adaptive adversarial training. A plot showing the average ε growth, along with the progress of 3 randomly picked samples, is shown in Fig. 7a and 7b. We observe that the average ε converges to around 11, which is higher than the default setting of ε = 8 used in adversarial training. Also, each sample has a different ε profile: for some, ε increases well beyond the commonly used radius of ε = 8, while for others, it converges below it. In addition, a plot showing the histogram of ε's at different snapshots of training is shown in Fig. 8. We observe an increase in the spread of the histogram as the training progresses. Testing against a strong adversary is crucial to assess the true robustness of a model. A popular practice in the adversarial robustness community is to attack models using PGD with many attack iterations. So, we test our instance adaptive adversarially trained models on a sweep of PGD iterations for a fixed ε. Following , we perform the sweep up to 2000 attack steps, fixing ε = 16. The resulting plot is shown in Figure. 9. For all three Resnet models, we observe a saturation in adversarial robustness beyond 500 attack iterations. As shown in Alg. 2, the IAAT algorithm has two hyper-parameters: a smoothing constant β and a discretization γ. In this section, we perform a sensitivity analysis of natural and robust accuracies by varying these hyper-parameters. Results are reported in Table. 9. We observe that the algorithm is not too sensitive to the choice of hyper-parameters, but the best performance is obtained for γ = 1.9 and β = 0.1. On CIFAR-10 and CIFAR-100 datasets, our implementation follows the standard adversarial training setting used in . During training, adversarial examples are generated using PGD-10 attacks, which are then used to update the model. All hyperparameters we used are tabulated in Table. 10. For our Imagenet implementation, we mimic the setting used in . During training, adversaries are generated with PGD-30 attacks. This is computationally expensive, as every training update is followed by 30 backprop iterations to generate the adversarial attack. To make training feasible, we perform distributed training using synchronized SGD updates on 64 / 128 GPUs. We follow the training recipe introduced in for large-batch training. Also, during training, adversarial attacks are generated with FP-16 precision. However, in the test phase, we use FP-32. We further use two more tricks to speed up instance adaptive adversarial training: A weaker attack (PGD-10) is used in the algorithm for selecting ε (Alg. 2). After ε_i is selected per Alg. 2, we clip it with a lower bound, i.e., ε_i ← max(ε_i, ε_lb). ε_lb = 4 was used in our experiments. Hyperparameters used in our experiments are reported in Table 11. All our models were trained in PyTorch. The Resnet-50 model was trained on 64 Nvidia V100 GPUs, while the Resnet-101 and Resnet-152 models were trained on 128 GPUs. The time taken for instance adaptive adversarial training for all models is reported in Table. 12.
Instance adaptive adversarial training for improving robustness-accuracy tradeoff
1,441
scitldr
Machine learning models, including traditional models and neural networks, can be easily fooled by adversarial examples which are generated from natural examples with small perturbations. This poses a critical challenge to machine learning security, and impedes the wide application of machine learning in many important domains such as computer vision and malware detection. Unfortunately, even state-of-the-art defense approaches such as adversarial training and defensive distillation still suffer from major limitations and can be circumvented. From a unique angle, we propose to investigate two important research questions in this paper: Are adversarial examples distinguishable from natural examples? Are adversarial examples generated by different methods distinguishable from each other? These two questions concern the distinguishability of adversarial examples. Answering them will potentially lead to a simple yet effective approach, termed defensive distinction in this paper under the formulation of multi-label classification, for protecting against adversarial examples. We design and perform experiments using the MNIST dataset to investigate these two questions, and obtain highly positive results demonstrating the strong distinguishability of adversarial examples. We recommend that this unique defensive distinction approach should be seriously considered to complement other defense approaches. Machine learning models including SVMs BID0 and especially deep neural networks BID17 can be easily fooled by adversarial examples which are generated from natural examples with small perturbations. Quite often, both machine learning models and humans can classify the natural examples, such as images of pandas, with high accuracy, and humans can still classify the adversarial examples as pandas with high accuracy because the small perturbations are imperceptible; however, machine learning models are fooled into misclassifying adversarial examples as some targets, such as gibbons, desired by attackers BID4. This intriguing property or vulnerability of machine learning models poses a critical challenge to machine learning security, and it impedes the wide application of machine learning in many important domains such as computer vision (e.g., for self-driving cars) and even in malware detection BID0 BID19. Furthermore, new and powerful adversarial example generation methods such as BID17 BID4 BID1 BID2 BID7 BID9 BID11 BID10 BID16 continue to be discovered, indicating to a certain extent the unlimited capability of attackers to continuously and easily fool machine learning models. On the other hand, even state-of-the-art defense approaches such as adversarial training BID17 BID4 and defensive distillation BID12 still suffer from major limitations and can be circumvented (Section 2). Therefore, the unfortunate status quo is that attackers prevail over defenders. In this paper, from a unique angle, we propose to investigate two important research questions that concern the distinguishability of adversarial examples. Question 1: are adversarial examples distinguishable from natural examples? Question 2: are adversarial examples generated by different methods distinguishable from each other?
If the answer to Question 1 is positive, i.e., given a certain classification task such as image classification, generated adversarial examples (regardless of the objects they represent) largely belong to one class while natural examples belong to the other class, then defenders can simply discard those adversarial examples to protect the machine learning models. If the answer to Question 2 is positive, i.e., adversarial examples generated by different methods clearly belong to different classes, defenders can better protect the machine learning models, for example, by incorporating the corresponding examples into the adversarial training process to enhance the robustness of the models. Besides such practical benefits, answering these two questions may also help researchers further identify the nature of adversarial examples. Formally, we consider a classification problem in adversarial environments as a multi-label classification problem. That is, upon seeing a new input such as an image, while the original task such as classifying the image as a certain object is important, it is also important to classify the image as a generated adversarial vs. a natural example in the first place. We formulate this multi-label classification problem in Section 3 to guide us in answering the two questions, and term the corresponding defense approach defensive distinction, which distinguishes adversarial vs. natural examples and distinguishes adversarial examples generated by different methods to protect against the attacks. We design and perform experiments using the MNIST dataset to investigate the two research questions and evaluate the effectiveness of our defensive distinction approach. In our experiments, we consider multiple scenario-case combinations in which defenders either know or do not know the neural network, source images, and methods as well as parameters used by attackers for generating adversarial examples. We obtain highly positive answers to both research questions. For example, in some typical cases, adversarial vs. natural examples can be distinguished perfectly with 100% accuracy, while adversarial examples generated by different methods can be distinguished with over 90% accuracy. Our experimental results demonstrate the strong distinguishability of adversarial examples, and demonstrate the value of the defensive distinction approach. We recommend that this unique defense approach should be seriously considered to complement other defense approaches. We make four main contributions in this paper: we propose to investigate two important research questions that concern the distinguishability of adversarial examples; we formulate a classification problem in adversarial environments as a multi-label classification problem to answer the two questions; we propose and explore a unique defense approach termed defensive distinction; we design and perform experiments to empirically demonstrate the strong distinguishability of adversarial examples and the value of our defensive distinction approach. Adversarial examples can easily fool machine learning models, and they can be generated by a number of powerful methods. One representative method is L-BFGS BID17, which formulates the adversarial example generation problem as a box-constrained optimization problem and uses line search to identify approximately optimal solutions.
Another representative method is FGSM (Fast Gradient Sign Method) BID4, which efficiently obtains an optimal max-norm constrained perturbation for generating adversarial examples. FGSM inspired researchers to pursue a few fine-grained refinements over it, resulting in powerful methods such as BIM (Basic Iterative Method) by BID7, PGD (Projected Gradient Descent) by BID9, and MIM (Momentum Iterative Method). L-BFGS also inspired researchers to design more powerful or general methods such as C&W by BID1 and EAD (Elastic Net Method) by BID2. JSMA (Jacobian-based Saliency Map Approach) BID11 is another representative method, which constructs adversarial saliency maps to guide the selection of input features for perturbation in multiple iterations. On the defense side, one representative approach is adversarial training BID17 BID4, which basically uses both the generated adversarial examples and the natural (a.k.a. clean) examples to train the machine learning models with better regularization. Another representative approach is defensive distillation BID12, which basically uses the same neural network architecture to train a distilled network based on the probability vectors produced by the original network, thus reducing the models' sensitivity to small input perturbations. Both approaches in essence focus on improving the generalization and thus robustness of the machine learning models that perform the original classification task. While representing the state-of-the-art, both approaches still suffer from major limitations and can be circumvented. For example, defensive distillation was found to be ineffective against C&W attacks BID1 and black-box attacks BID13, while adversarial training was found to be ineffective against adversarial examples generated by many iterative methods such as BIM BID6 and MIM. Overall, powerful adversarial example generation methods exist and will continue to be proposed, while existing defense approaches are far from perfect. The unfortunate status quo is that attackers prevail over defenders. Our work is likely one step toward making a change. We now present the problem formulation and the corresponding defensive distinction protection approach; we also define a threat model that can be used for evaluating our approach. To date, a classification problem in adversarial environments is viewed as a single-label classification problem, where a function or model f: X → Y is learned to assign each input example x_i ∈ X a single label y_i ∈ Y, where the range Y is a set of m singletons representing m possible classes such as 10 possible digits. In other words, a single concept or semantic meaning is associated with each example. State-of-the-art defense approaches such as adversarial training and defensive distillation reviewed in Section 2 do not change this basic problem formulation. However, considering the two important distinguishability questions that we propose to investigate (Question 1: are adversarial examples distinguishable from natural examples? Question 2: are adversarial examples generated by different methods distinguishable from each other?), it is straightforward for us to formulate a classification problem in adversarial environments as a multi-label classification problem.
That is, a function or model f: X → Y′ is learned to assign each input example x_i a pair of labels y_i = (a_i, b_i) ∈ Y′. Here, the new range Y′ = Y × Z is the Cartesian product of Y (i.e., the range in the original classification task) and Z, where Z is a set of n singletons representing n possible classes regarding whether x_i is an adversarial example. In the simplest situation, n = 2 and b_i indicates either that an example is natural (i.e., clean) or adversarial. In a more complex situation, n > 2 and b_i can indicate either that an example is natural or that it was generated by one of n − 1 different adversarial example generation methods. Under this new problem formulation, two concepts or semantic meanings are associated with each example. The solution to the problem will help us answer the two research questions, and help defenders simply discard or leverage the identified adversarial examples to better protect their machine learning models as described in Section 1. Moreover, one major advantage of a multi-label classification formulation and the corresponding multi-label learning is in exploring the intrinsic correlations between multiple labels. In real-world adversarial environments, this advantage is significant because it will allow researchers and defenders to perform in-depth analysis of the behavior of attackers. For example, do attackers create most adversarial examples targeting some specific classes such as gibbons? Do attackers use different generation methods for different targeted classes? And do attackers use examples of some specific source classes to create adversarial examples targeting some specific classes (Appendix A)? These and perhaps other types of label correlation analyses will be valuable for inferring the real intentions of attackers, and even for revolutionizing the design of the original machine learning systems. We simply define the defensive distinction protection approach as training a model for a multi-label classification problem formulated in adversarial environments to identify adversarial examples and protect against them. So a training set will include both natural examples and generated adversarial examples with their ground truth b_i values correspondingly labeled in y_i = (a_i, b_i) ∈ Y′. The ground truth a_i values for natural examples are obviously known based on some existing dataset such as MNIST. The ground truth a_i values for adversarial examples can be missing BID20 or not needed (as in our solution described below) depending on specific learning algorithms. Many multi-label learning algorithms have been proposed with different strategies or considerations on the order of correlations among labels BID21. In this paper, we experiment with the simplest first-order strategy that basically transforms a multi-label classification problem into multiple single-label problems. More specifically, three models will be independently trained: • DDP-Model (defensive distinction primary model): a single-label binary model for distinguishing an input example x_i as either natural or adversarial. • Original-Model: a single-label multi-class model for the original task of classifying the input example x_i as one of multiple possible objects such as one of 10 digits. • DDS-Model (defensive distinction secondary model): a single-label multi-class model for distinguishing known adversarial examples as generated by different specific methods. This simple strategy provides a baseline solution to the defensive distinction approach.
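A sketch of this first-order strategy with three independently trained models follows. The data layout (a loader yielding an image x, original class a, and method label b, with b = 0 for natural examples and b = 1..n−1 identifying the generation method) is an assumption, as is fitting the Original-Model on natural examples only, since the ground truth a_i values for adversarial examples may be missing.

```python
import torch
import torch.nn.functional as F

def train_first_order(ddp_model, orig_model, dds_model, loader,
                      opt_ddp, opt_orig, opt_dds):
    """First-order strategy: three independent single-label models (names are ours)."""
    for x, a, b in loader:
        # DDP-Model: natural (b == 0) vs. adversarial (b > 0)
        loss_ddp = F.cross_entropy(ddp_model(x), (b > 0).long())
        opt_ddp.zero_grad(); loss_ddp.backward(); opt_ddp.step()
        # Original-Model: fit on natural examples only
        nat = b == 0
        if nat.any():
            loss_orig = F.cross_entropy(orig_model(x[nat]), a[nat])
            opt_orig.zero_grad(); loss_orig.backward(); opt_orig.step()
        # DDS-Model: among adversarial examples, which method generated them
        adv = b > 0
        if adv.any():
            loss_dds = F.cross_entropy(dds_model(x[adv]), b[adv] - 1)
            opt_dds.zero_grad(); loss_dds.backward(); opt_dds.step()
```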
It is simple in model formulation, computation, and evaluation, although high-order strategies would be more powerful in exploring label correlations BID15 BID21. In existing defense approaches such as adversarial training and defensive distillation, a threat model only needs to define the capabilities of attackers, because those approaches focus on improving the generalization and thus robustness of the machine learning models that perform the original classification task. Typically, two types of attacks are defined: white-box attacks that have full knowledge about the original machine learning model, including its architecture, parameter values, and training data, and thus can replicate the exact model to generate adversarial examples, and black-box attacks that only have limited knowledge about the original machine learning model, such as the labels for a given set of input examples, and thus often need to create another "substitute model" to generate adversarial examples BID11. In our defensive distinction approach, a threat model needs to further consider the capabilities of defenders, because the effectiveness of our approach is related to the defenders' knowledge about the machine learning model (including its architecture, parameter values, and training data), adversarial example generation methods, parameters (e.g., maximum distortion) of the generation methods, and source examples used by attackers. We refer to these factors as AdvGen-Model, AdvGen-Methods, AdvGen-Parameters, and Adv-Examples. These considerations together with the capabilities of attackers appear to complicate the overall threat model; however, starting directly from the perspective of defenders, we can still clearly define some representative scenarios and cases as shown in TAB0. Briefly, Scenarios 1 and 2 represent that an attacker's AdvGen-Model is known or unknown to a defender, respectively; Cases 1 to 4 represent the four combinations of the defender's knowledge of AdvGen-Parameters and Adv-Examples, respectively. In total, eight scenario-case combinations (from S1C1 to S2C4), or simply cases, are considered in this paper. Note that we do not include AdvGen-Methods in the table because it is reasonable to assume that popular adversarial example generation methods are known to both defenders and attackers (similar to the basic assumption in cryptographic systems that algorithms should not be considered secrets and could be known by both defenders and attackers); however, we do consider the AdvGen-Methods factor in some of our experiments in the next section by leaving out a certain generation method in model training. We now evaluate the effectiveness of our defensive distinction approach by performing three sets of experiments: two for the DDP-Model and one for the DDS-Model. We design our experiments using the MNIST dataset of handwritten digits and two different convolutional neural networks (CNNs). The first CNN is a slight variation of the original LeNet-5 BID8, adding rectified linear units to the convolutional layers and using max pooling in the sub-sampling layers. We refer to this network as LeNet-5. The second CNN has a convolutional layer with 64 filters followed by two convolutional layers with 128 filters, one 2 by 2 max pooling layer, one fully connected layer with 64 rectified linear units, and one softmax layer. We refer to this network as Basic-CNN.
When we train these two CNNs for different classification tasks in our experiments, we use learning rate η = 0.001, number of epochs = 6, and batch size = 128. For the original 10-digit classification task, LeNet-5 and Basic-CNN achieve 99.05% and 98.99% accuracy, respectively, on the 10,000 images from the MNIST test set. In the generation of adversarial examples, one LeNet-5 network trained with the 60,000 images from the MNIST training set is considered the attacker's model (AdvGen-Model), and six generation methods including FGSM, JSMA, BIM, L-BFGS, MIM, and PGD (Section 2) are used by leveraging version 2.1.0 of the CleverHans library. Based on each source example (representing one digit) chosen from the MNIST test set, nine targeted adversarial example generation attempts (for the remaining nine digit classes) are made. In other words, we consider targeted attacks, which are more damaging than untargeted attacks that simply misclassify source examples. We generate five adversarial example datasets using the six methods based on different method parameters and source examples. As shown in TAB1, datasets adv_tr and adv_C1 are generated by using the same source examples with the index range 0-299 for each digit class such as '0', i.e., the first 300 examples from each digit class in the MNIST test set are chosen as source examples. These two datasets also share the same method parameters, but they are different due to the difference in the initial randomization of their generation. Dataset adv_C2 has the same method parameters but different source examples compared with adv_tr. Dataset adv_C3 has the same source examples but different method parameters compared with adv_tr. Dataset adv_C4 has different source examples and different parameters compared with adv_tr. The total number of adversarial examples in each dataset is approximately 300×9×6 = 16,200. Adversarial examples in adv_tr will be used for training our DDP-Model and DDS-Model defensive distinction models based on LeNet-5 and Basic-CNN. Models trained based on LeNet-5 represent Scenario 1 in TAB0 because the AdvGen-Model also uses LeNet-5, while models trained based on Basic-CNN represent Scenario 2 in TAB0. Adversarial examples in adv_C1 to adv_C4 will be used for testing our defensive distinction models, corresponding to Cases C1 to C4 in TAB0. So now, our design and setup of experiments allow us to evaluate our defensive distinction approach for all eight scenario-case combinations from S1C1 to S2C4 (TAB0). The simplest DDP-Model is a binary classifier for distinguishing natural examples from adversarial examples generated by a single attack method. For each method, a classifier is trained on a mixture of 1,000 natural examples from the MNIST training set and 1,000 adversarial examples from the dataset adv_tr in TAB1 corresponding to the selected method. The classifier is then evaluated on a mixture of 1,000 natural examples from the MNIST test set and 1,000 adversarial examples from one test dataset adv_C1 to adv_C4 in TAB1 corresponding to the selected method. Note that in this paper, examples are all randomly selected from the corresponding sets; each experiment, including the training and testing, is independently performed for 10 runs to present the average results. Model accuracy for the four test cases with known AdvGen-Parameters (S1C1, S1C2, S2C1, S2C2) is given in FIG1. Classification accuracy is very high for all six methods, suggesting that regardless of the defender's knowledge of AdvGen-Model (S1 vs.
S2) and Adv-Examples (C1 vs. C2), a simple DDP-Model can be highly effective as long as AdvGen-Parameters are known. Across the four test cases, the models for distinguishing JSMA examples from natural examples struggled the most but still achieved high average accuracy at 92.35%, while the models for other methods such as MIM, PGD, FGSM, and BIM classified examples perfectly. One reason for the imperfect results on the JSMA examples could be that JSMA only selects a small number, such as a pair, of input features (i.e., pixels) for perturbation in each iteration BID11. Model accuracy for the remaining four test cases with unknown AdvGen-Parameters (S1C3, S1C4, S2C3, S2C4) is given in FIG1. Across the four test cases, classification accuracy is still very high (> 95%) for L-BFGS, FGSM, and BIM, suggesting that a simple DDP-Model can still be highly effective for these three methods even when some AdvGen-Parameters are unknown to a defender. Models for the other three methods did not perform well. Notably, the models only achieved near 50% accuracy (i.e., like random guessing) for MIM, PGD, and JSMA examples under the test cases S1C4 and S2C4, and only performed slightly better than random guessing for MIM and PGD examples under the test cases S1C3 and S2C3; these results suggest that knowing the parameters of some adversarial example generation methods is important to defenders. Overall, the results in FIG1 are highly informative in practice for defenders. Most importantly, it is always beneficial for defenders to train a DDP-Model by using adversarial examples generated based on a variety of parameters. Meanwhile, the exact model architecture and the exact natural examples used by attackers are not influential in the accuracy of the defenders' models. So far, we analyzed the accuracy of the DDP-Model without considering whether an adversarial example successfully fooled the Original-Model for the 10-digit classification task into misclassifying it as a targeted digit class. Returning to our multi-label classification formulation of the problem in Section 3.1, correlating the labels of these two models on an adversarial example is interesting and indeed further demonstrates the value of our approach. In particular, we define a Danger Rate metric, which is the percentage of the successful adversarial examples (i.e., those that achieved their targeted attacks) that are not classified by our DDP-Model as adversarial and thus will continue to incur danger. Without using the DDP-Model, the danger rate of successful adversarial examples is considered to be 100%. FIG1 demonstrates that our DDP-Model can significantly reduce the danger rate of successful adversarial examples by over 50% for all eight test cases and all six methods. A slightly more advanced DDP-Model is a binary classifier for distinguishing natural examples from a collection of adversarial examples generated by multiple attack methods. Since it is unreasonable to assume that a defender will always know in advance all the methods used by an attacker, we introduce the concept of an excluded or "left-out" method in the experiments to consider the AdvGen-Methods factor. Basically, training is on adversarial examples selected from five attack methods with one method left out, while testing is on adversarial examples selected from all six methods. We rotate the left-out attack method to explore the differences between methods, and compare the accuracy of the left-out classifiers with that of baseline classifiers which do not leave out a method.
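Before turning to the left-out experiments below, note that the Danger Rate defined above reduces to a simple ratio; a sketch (the function name is ours):

```python
def danger_rate(successful, flagged):
    """Fraction of successful adversarial examples NOT flagged by the DDP-Model.
    `successful` and `flagged` are parallel boolean sequences over adversarial examples."""
    hits = [(s, f) for s, f in zip(successful, flagged) if s]
    if not hits:
        return 0.0
    missed = sum(1 for _, f in hits if not f)
    return missed / len(hits)
```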
A left-out classifier is trained on a mixture of 5,000 natural examples from the MNIST training set and 1,000 adversarial examples by each of the five non-excluded attack methods from the dataset adv_tr in TAB1. A baseline classifier is trained on a mixture of 6,000 natural examples from the MNIST training set and 1,000 adversarial examples by each of the six attack methods from the dataset adv_tr. They are all evaluated on 6,000 natural examples from the MNIST test set and 1,000 adversarial examples by each of the six attack methods from one test dataset adv_C1 to adv_C4. Accuracies for the left-out and baseline classifiers are shown in FIG3, where the x-axis identifies the left-out method (with NONE for the baseline) in the training of a classifier, and the y-axis displays the overall accuracy of a classifier on a collection of 6,000 adversarial examples and 6,000 natural examples. In several of the test cases (S1C1, S1C2, S1C3, S2C1, S2C2, S2C3), classification accuracy is above 90% (and often even well above 95%) across all methods. In the remaining test cases S1C4 and S2C4, classification accuracy is lower but still above 80% across all methods. FIG3 further details the recall (or sensitivity) of a given classifier on a set of 1,000 adversarial examples generated by the left-out method. We can see that clear disparities exist between two groups of methods: L-BFGS and JSMA examples largely evade the classification when the corresponding method was not explicitly considered in the training, while the examples of the other four methods are accurately classified as adversarial even when the corresponding method was not considered in the training. These disparities can be explained to a certain extent by the fact that BIM, MIM, and PGD are all fine-grained refinements over FGSM, as reviewed in Section 2. FIG3 further demonstrates that our models can significantly reduce the danger rate of successful adversarial examples by over 90% (and often over 95%) for all eight test cases and all six methods. Overall, FIG3 clearly demonstrates that a DDP-Model trained on adversarial examples generated by multiple methods can be highly effective. While it is always beneficial for defenders to train a DDP-Model using adversarial examples generated by as many methods as possible, our models do have the strong capability to correctly classify the adversarial examples of an unknown method whose siblings are known, as shown in the results for the four methods FGSM, BIM, MIM, and PGD that belong to the same family. In addition, similar to what we have observed in Section 4.2, knowing the parameters of some adversarial example generation methods is still important to defenders. The DDS-Model aims to distinguish known adversarial examples as generated by different specific methods. It can inform a defender about the specific techniques used by an attacker, and further help the defender exploit the weaknesses of the attack techniques. We trained a classifier using 1,000 adversarial examples generated by each of the six methods, for a total of 6,000 examples, from the dataset adv_tr in TAB1. We then evaluated the classifier using 1,000 adversarial examples generated by each of the six methods, for a total of 6,000 examples, from one test dataset adv_C1 to adv_C4. Model accuracy for the four test cases with known AdvGen-Parameters is shown in FIG6. With the exception of the MIM and PGD methods, adversarial examples from other methods are all accurately (> 90%) classified across the four test cases.
Model accuracy for the remaining four test cases with unknown AdvGen-Parameters is provided in FIG6. Across these four test cases, PGD and MIM completely evaded our classifiers and FGSM was classified with low accuracy, but the remaining three methods (except for JSMA in case S1C4) were classified accurately. FIG6 shows the full confusion matrix summed over the 10 runs of each experiment for S1C1, while the confusion matrices for all eight cases are provided in Appendix B. The corresponding confusion matrices for the latter four cases can help explain why PGD and MIM are the most difficult methods to classify correctly: their examples are largely classified as BIM examples. This is presumably because, while PGD starts with a random perturbation BID9 and MIM integrates momentum to escape local maxima, thus both resulting in diverse examples, they still resemble their close sibling BIM BID7. [Figure: model accuracy for (a) S1C1, S1C2, S2C1, S2C2 and (b) S1C3, S1C4, S2C3, S2C4.] Overall, these results demonstrate the strong distinguishability of competing adversarial example generation methods. Similar to what is observed for a DDP-Model in Sections 4.2 and 4.3, it is also beneficial for defenders to train a DDS-Model by using adversarial examples generated based on a variety of parameters; meanwhile, the exact model architecture and the exact natural examples used by attackers are not influential in the accuracy of the defenders' models. We proposed two important research questions that concern the distinguishability of adversarial examples, and formulated a classification problem in adversarial environments as a multi-label classification problem. We proposed a defensive distinction protection approach to answer the two questions and address the problem. We designed and performed experiments using the MNIST dataset and eight representative cases. Our experimental results demonstrate the strong distinguishability of adversarial examples, and the practical as well as research value of our approach. Our work also suggests many possibilities for future work, such as adopting high-order multi-label learning strategies to further explore the intrinsic correlations of labels as discussed in Section 3.2, investigating the distinguishability of adversarial examples for large tasks such as on ImageNet, and investigating appropriate ways of integrating defensive distinction with other defense approaches. A APPENDIX: A POTENTIAL EXTENSION TO THE PROBLEM FORMULATION More labels could be added to include more concepts or semantic meanings in our multi-label classification formulation of the problem. For example, y_i can be extended to a triplet (a_i, b_i, c_i) ∈ Y″, where Y″ = Y × Z × Y is a 3-ary Cartesian product, and c_i ∈ Y can indicate the source example class from which the input example x_i was created. In the training set, c_i can simply be a_i for a natural example, and is assumed to be known for an adversarial example. This more complex formulation has value in further correlating with the labels of source examples, but we do not explore it in this paper.
We propose a defensive distinction protection approach and demonstrate the strong distinguishability of adversarial examples.
1,442
scitldr
For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning. One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients. However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed before training begins and can robustly predict the performance of the network after training is complete. We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient (NLC). Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks. The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to. Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins. Designing neural architectures that perform well can be a difficult process. In particular, the exploding / vanishing gradient problem has been a major challenge for building very deep neural networks at least since the advent of gradient-based parameter learning BID4. However, there is still no consensus about which metric should be used for determining the presence of pathological exploding or vanishing gradients. Should we care about the length of the gradient vector, about the size of individual components of the gradient vector, or about the eigenvalues of the Jacobian? Depending on the metric used, different strategies arise for combating exploding and vanishing gradients. For example, manipulating the width of layers, as has been suggested in prior work, can greatly impact the size of gradient vector components but tends to leave the length of the entire gradient vector relatively unchanged. The popular He initialization for ReLU networks is designed to stabilize gradient vector length, whereas the popular Xavier initialization for tanh networks is designed to stabilize the size of gradient vector components. While the papers cited above provide much evidence that gradient explosion / vanishing, when defined according to some metrics, is associated with poor performance when certain architectures are paired with certain optimization algorithms, it is often unclear how general those results are. We make the following core contributions. 1. We introduce the nonlinearity coefficient (NLC), a gradient-based measurement of the degree of nonlinearity of a neural network (section 3). 2. We show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for achieving an optimal test error, at least in fully-connected feedforward networks (section 4). The NLC (defined at the top of page 3) combines the Frobenius norm of the Jacobian of a neural network with the global variability of the input data and the global variability of the network's outputs into a single metric. Despite its simplicity, it is tied to many important properties of the network.
It is a remarkably accurate predictor of the network's nonlinearity as measured by the relative diameter of the regions in input space that can be well-approximated by a linear function (section 3 and figure 1). It is closely related to the nonlinearity of the individual activation functions used in the network and the degree to which they can be approximated by a linear function (section 5). It is tied to the susceptibility of the network's output to small random input perturbations. We define a neural network f as a function of the input x. Both x and the output f(x) are vectors of fixed dimensionality d_in and d_out respectively. We assume a prediction framework, where the output is considered to be the prediction and the goal is to minimize the value of the 'error' e over this prediction and the label y, in expectation over some data distribution D. I.e., we wish to minimize E_{(x,y)∼D}[e(f(x), y)]. In this paper, e is always classification error. During training, we replace D with the training set and e with the surrogate loss function, which in this paper is always softmax plus cross-entropy. Let J(x) := df(x)/dx be the Jacobian of the output with respect to the input. x(i) and f(x, j) denote the i'th component of the input vector and the j'th component of the output vector respectively, where 1 ≤ i ≤ d_in and 1 ≤ j ≤ d_out. Denote the component-wise standard deviation of a (possibly multivariate) random variable X as S[X]. Finally, let the 'quadratic expectation' Q of a random variable X be defined as Q[X] = √(E[X²]), i.e., the generalization of the quadratic mean to random variables. Consider an arbitrary neural network f and assume it has a well-defined bounded domain D and bounded co-domain F. Then the Jacobian J taken at some x ∈ D defines a local linear approximation of the function f around x, i.e., f(x + δ) ≈ f(x) + J(x)δ for sufficiently small δ. The key insight behind the NLC and ultimately behind this paper is that there is a simple criterion for determining whether the approximation can be accurate for a given δ: If f(x) + J(x)δ is far away from F, then because f(x + δ) ∈ F, f(x) + J(x)δ is far away from f(x + δ) and thus it is inaccurate. We can use this insight to establish an approximate bound for the size of δ by referencing a well-known property of the Frobenius norm. Proposition 1. Let A be an m × n matrix and u a random n-dimensional vector of fixed length and uniformly random orientation. Then (||A||_F / √n) · ||u||_2 = Q_u[||Au||_2]. (See section F for the short proof.) As we also have f(x) ∈ F, we deduce that we must have (||J(x)||_F / √d_in) · ||δ||_2 ⪅ diameter(F) for the local linear approximation at x to be accurate for a randomly oriented δ. In fact, an approximate bound for the diameter of the linearly approximable region at x in a random direction is (√d_in · diameter(F)) / ||J(x)||_F. However, the size of this diameter by itself is not sufficient to judge the nonlinearity of f, because it does not take into account the size of the domain D. If D fits entirely within the linearly approximable region, then f is close to a linear function. If D is large compared to the linearly approximable region, we consider f highly nonlinear at x. Hence, we consider instead the approximate bound for the relative diameter of the linearly approximable region in a random direction, (√d_in · diameter(F)) / (||J(x)||_F · diameter(D)). We posit the inverse of this value, (||J(x)||_F · diameter(D)) / (√d_in · diameter(F)), as our nonlinearity metric for f at x. Of course, in practice, a network might not have a well-defined, bounded domain and co-domain.
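Proposition 1 is easy to verify numerically; for a vector u of fixed length ℓ both sides scale linearly in ℓ, so unit length suffices. A small numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 20, 50, 200_000

A = rng.standard_normal((m, n))
u = rng.standard_normal((trials, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # fixed length 1, random orientation

lhs = np.linalg.norm(A, 'fro') / np.sqrt(n)      # (||A||_F / sqrt(n)) * ||u||_2
rhs = np.sqrt(np.mean(np.linalg.norm(u @ A.T, axis=1) ** 2))   # Q_u ||Au||_2
print(lhs, rhs)                                  # the two values nearly agree
```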
Of course, in practice, a network might not have a well-defined, bounded domain and co-domain. Here we make a final modeling assumption: we proxy the diameter with the quadratic expectation of the distance of two random points from the data distribution. Thus our nonlinearity estimate at x becomes

||J(x)||_F · Q_{x1,x2∼D}||x1 − x2||_2 / (√d_in · Q_{x1,x2∼D}||f(x1) − f(x2)||_2),

and, as we further show in section F, the quadratic expectation of this equals the NLC as defined below.

Definition 1. The nonlinearity coefficient (NLC) of a network f and inputs x ∼ D is

NLC(f, D) = Q_x ||J(x)||_F · Q_i(S_x x(i)) / (√d_out · Q_j(S_x f(x, j))).

The terms S_x x(i) and S_x f(x, j) denote the standard deviation of the activation of the i'th input neuron and the j'th output neuron under the data distribution respectively. Q_i(S_x x(i)) and Q_j(S_x f(x, j)) denote the quadratic expectation of these values across all input / output neurons respectively. We have Q_i(S_x x(i)) = √(Tr(C_x)/d_in) and Q_j(S_x f(x, j)) = √(Tr(C_f)/d_out), where C_x and C_f are the covariance matrices of x and f(x) under the data distribution. Hence, the NLC can be re-written as

NLC(f, D) = Q_x ||J(x)||_F · √(Tr(C_x) / (d_in · Tr(C_f))).

As a sanity check, let's look at the NLC of a linear network. In that case, f(x) = Ax + b for some A and b, and C_f = A·C_x·A^T. Then the NLC becomes

||A||_F · √(Tr(C_x) / (d_in · Tr(A·C_x·A^T))).

This value equals 1, for example, if C_x is a multiple of the identity or A is orthogonal. We further conjecture that this value is close to 1 for large, random matrices A as they occur in practice in randomly initialized neural networks, though this analysis goes beyond the scope of this paper. Note that an alternative interpretation of the NLC is that it represents the expected sensitivity of the network output with respect to small, randomly oriented changes to the input, normalized by the global variability of the input, the output and √d_out. Finally, we refer readers interested in a pictorial illustration of the NLC to section A in the appendix.

Computing the NLC It is worth noting that the NLC is cheap to (approximately) compute. Throughout this study, we compute Q_i(S_x x(i)) and Q_j(S_x f(x, j)) exactly via a single pass over the training set. Q_x ||J(x)||_F can be computed stochastically by backpropagating Gaussian random vectors from the output layer. See section G for details.

Now we investigate the empirical properties of the NLC through a large-scale study.

Architectures used We sampled architectures at random by varying the depth of the network, the scale of initial weights, the scale of initial biases, the activation function, the normalization method, the presence of skip connections, the location of skip connections and the strength of skip connections. We chose from a set of 8 activation functions (table 1), which were further modified by random dilation, lateral shift and debiasing. For now, we only considered fully-connected feedforward networks, as is (perhaps unfortunately) common in analytical studies of deep networks (e.g. BID3). We have no reason to suspect our results will not generalize to CNNs, and we plan to investigate this point in future work. See section C for the full details of our architecture sampling scheme.

Datasets used We studied three datasets: MNIST, CIFAR10 and waveform-noise. All our results were highly consistent across these datasets. waveform-noise is from the UCI repository of datasets popular for evaluating fully-connected networks. See section D for further details on dataset selection and preprocessing. We sampled 250 architectures per dataset, a total of 750.

(Figure 2 caption: Points shown in blue correspond to architectures that have skip connections. Inset graphs in the bottom right are magnifications of the region 0.8 < NLC < 100. See section E.2 for details.)
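To make definition 1 concrete, here is a minimal numpy sketch (ours, not the authors' code) that evaluates the NLC for a linear network, where the Jacobian is constant; as derived above, the value is close to 1 when C_x is a multiple of the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, N = 20, 10, 100000

# Inputs with covariance sigma^2 * I (a multiple of the identity).
X = 1.7 * rng.standard_normal((N, d_in))

# Linear network f(x) = Ax + b; its Jacobian J(x) = A everywhere.
A = rng.standard_normal((d_out, d_in))
b = rng.standard_normal(d_out)
F = X @ A.T + b

def Q(v):                       # quadratic expectation of an array
    return np.sqrt(np.mean(np.asarray(v) ** 2))

J_frob = np.linalg.norm(A, "fro")          # Q_x ||J(x)||_F, constant here
Qi_Sx = Q(X.std(axis=0))                   # Q_i(S_x x(i))
Qj_Sf = Q(F.std(axis=0))                   # Q_j(S_x f(x,j))
nlc = J_frob * Qi_Sx / (np.sqrt(d_out) * Qj_Sf)
print(nlc)                                 # ~1.0, as derived in the text
```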
Training protocol We trained each architecture with SGD with 40 different starting learning rates and selected the optimal one via a held-out validation set, independently for each architecture. During each run, we reduced the learning rate 10 times by a factor of 3. All training runs were conducted in 64 bit floating point. See section E for further experimental details and section B.1 for an analysis of how the learning rate search and numerical precision contributed to the outcome of our study.

The results of this study are shown in figures 1, 2, 3, 4, 7, 8, 9 and 10. All figures except figure 7 are scatter plots where each point corresponds to a single neural architecture. In most graphs, we show the correlation and its statistical significance between the quantities on the x and y axis at the top. Note that if any quantity is plotted in log-scale, the correlation is also computed on the log of that quantity. For each architecture, we only studied a single random initialization. Given a limited computational budget, we preferred studying a larger number of architectures instead. Note that all depicted values that were computed after training, such as test error or 'NLC after training' in figure 3, are based on the training run with the lowest validation error, as described above.

The NLC measures nonlinearity First, we verify that the NLC is indeed an accurate measure of nonlinearity. In figure 1, we plot the relative diameter of the linearly approximable regions of the network, as discussed in section 3 and defined in section E.1, against the NLC in the randomly initialized state. We find a remarkably close match between both quantities. This shows empirically that our informal derivation of the NLC in section 3 leads to remarkably accurate nonlinearity estimates.

The NLC predicts test error In figure 2, we plot the NLC computed in the randomly initialized state, before training, against the test error after training. We find that for all three datasets, the test error is highly related to the NLC. Further, figure 2 indicates that one must start with an NLC in a narrow range, approximately between 1 and 3, to achieve optimal test error, and the further one deviates from that range, the worse the achievable test error becomes. It is worth noting that some architectures, despite having an NLC in or close to this range, performed badly. One cause of this, high output bias, is explored later in this section. To verify that our results were not dependent on using the SGD optimizer, we re-trained all 250 waveform-noise architectures with Adam using the same training protocol. In figure 3, we find that the results closely match those of SGD from figure 2. A caveat is that we do not currently have a way to detect the ideal NLC range for a given dataset, except through observation, though we find this range to be consistent across our three datasets.

The NLC is somewhat stable during training In figure 3, we plot the value of the NLC before training versus after training. Both values were computed on the training set. We find that for the vast majority of architectures, the value of the NLC decreases. However, if the NLC is very large in the beginning, it remains so. Overall, the before-training NLC significantly predicts the after-training NLC. In figure 3, we also plot the after-training NLC versus test error. We find that unless the NLC lies in a narrow range, test error is close to random. Interestingly, the after-training NLC has a significantly lower correlation with test error than the before-training NLC.
NLC predicts generalization, not trainability In figure 3, we show the training error achieved for 50 randomly selected architectures for waveform-noise. We re-trained those architectures without using early stopping based on the validation error and considered an even larger range of starting learning rates. We depict the lowest training classification error that was achieved across all learning rates. Points are shown in red for visibility. We find that, independently of the NLC, the vast majority of architectures achieve a zero or near-zero training error. This finding is the opposite of prior claims that networks which are highly sensitive to small input perturbations are untrainable. The reason we were able to train these architectures was our extensive learning rate search as well as our decision to train with 64 bit precision. In fact, we found that architectures with a high NLC often require very small learning rates and very small parameter updates to train successfully. One architecture required a learning rate as small as 5e-18! See section B.1 for further analysis on this point. In summary, we find that many architectures for which the NLC correctly predicts high test error are nonetheless trainable.

Of course, one might expect that a high sensitivity leads to poor generalization. As a sanity check, we conducted an experiment where we corrupted the test set with small random perturbations and measured how large these perturbations could be before the test error increased significantly. We plot this in figure 3. As expected, for the majority of high-NLC architectures, labels can be corrupted and the error increased with incredibly small perturbations.

Summary We interpret the results observed so far as follows. To generalize, the network must attain a critical NLC after training. This is only possible in practice if the initial NLC is already close. In that case, the network often learns automatically to adopt a more ideal NLC. However, unless the initial NLC is itself in the critical range, we cannot attain optimal performance.

Further predictors: bias and skip connections In figure 2, we mark in red all points corresponding to architectures that have a very biased output, i.e. 1000 · Q_j(S_x f(x, j)) < Q_{j,x} f(x, j). We note that all of these architectures attain a high test error, even if their NLC is small. In figure 3, we plot the output bias before training against test error. We find that indeed, to achieve an optimal test error, a low initial bias is required. In section B.2, we further show that, just like the NLC, the bias also tends to decline throughout training and that attaining a very low bias after training is even more essential.

(Table 1 caption: Activation functions used in this study with important metrics. See main text for explanation and section E.4 for details.)

In figure 2, we plot in blue all points corresponding to architectures that have skip connections. It has been argued that skip connections reduce the gradient growth of general architectures as well as make a further contribution to performance. Correspondingly, we find that skip connections lead to lower NLC values and to lower test errors for a given NLC level. To enable a more convenient visual comparison, we plot the results for architectures with and without skip connections separately in section B.3.

In this section, we expose the connection between the NLC and low-level properties of the activation functions the network uses.
Given a 1d activation function τ, we define

NLC_τ := Q_{s∼N(0,1)}[τ'(s)] / S_{s∼N(0,1)}[τ(s)].

It is easy to check that if the input x is distributed according to a Gaussian with zero mean and unit covariance, and f simply applies τ to each input component, then we have NLC(f, D) = NLC_τ. Consider a randomly initialized network where each layer is made up of a fully-connected linear operation, batch normalization, and an activation function τ that is applied component-wise. It turns out that if the network is sufficiently wide, the pre-activations of τ are approximately unit Gaussian distributed. This follows from the central limit theorem. Hence, we expect each layer to contribute approximately a factor of NLC_τ to the nonlinearity of the entire network. To verify this, we train a 2-layer network with batchnorm, which contains a single copy of τ at the single hidden layer. In table 1, we show NLC_τ for all 8 activation functions we used (line A), as well as the median empirical NLC over 10 random initializations of the 2-layer network (line B). We indeed find a close match between the two values. We then measure the NLC of 49-layer batchnorm networks, which contain 48 activation functions. For 6 out of 8 activation functions, this NLC (line D) closely matches the exponentiated value NLC_τ^48 (line C). Hence, we find that nonlinearity compounds exponentially and that the NLC of a network is closely tied to which activation function is used. Note that the reason the NLC values of the 'square' and 'odd square' activation functions diverge from NLC_τ^{depth−1} at high depth is that those activation functions are unstable, which causes some inputs to grow with depth whereas the vast majority of inputs collapse to zero.

We then verified that NLC_τ is a meaningful measure of nonlinearity for an activation function. We computed the best linear fit for each τ given unit Gaussian input and then measured the ratio of the power of the signal filtered out by this best linear fit over the power of the preserved signal. In table 1 (line E), we find that for the ReLU, SELU, tanh, sigmoid and Gaussian activation functions, there is a close correspondence in that this linear approximation error is around NLC_τ − 1. While this relationship breaks down for the 3 most nonlinear activation functions, their linear approximation error still exceeds those of the other 5. We conclude that NLC_τ is a meaningful measure of nonlinearity and that the NLC of an architecture can be calibrated by changing the linear approximability of the activation functions used.
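As an illustration (ours, using Monte Carlo and a finite-difference derivative in place of exact computation), the following sketch estimates NLC_τ for a few of the activation functions from table 1 and shows how nonlinearity compounds exponentially with depth.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(2_000_000)
eps = 1e-5

def nlc_tau(tau):
    # Q_{s~N(0,1)} tau'(s) via a central finite difference,
    # S_{s~N(0,1)} tau(s) via the sample standard deviation.
    dtau = (tau(s + eps) - tau(s - eps)) / (2 * eps)
    return np.sqrt(np.mean(dtau ** 2)) / np.std(tau(s))

acts = {
    "ReLU":     lambda x: np.maximum(x, 0.0),
    "tanh":     np.tanh,
    "sigmoid":  lambda x: 1.0 / (1.0 + np.exp(-x)),
    "Gaussian": lambda x: np.exp(-x ** 2 / 2),
}
for name, tau in acts.items():
    v = nlc_tau(tau)
    # A 49-layer batchnorm network contains 48 copies of tau, so its NLC
    # is predicted to be roughly v ** 48 (the compounding described above).
    print(f"{name:9s} NLC_tau = {v:.3f}   predicted 49-layer NLC = {v ** 48:.3g}")
```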
In this section, we discuss how the NLC compares against other metrics that are used for predicting test error in the deep learning field. We focus our analysis on five popular and representative examples: the size of gradient vector components, the gradient vector length, the Lipschitz constant, correlation information preservation, and depth. We find that each metric is susceptible to very simple confounders that render it unreliable in practice. The NLC, by design, is invariant to all these confounders.

Gradient vector component size (GVCS) The first drawback is that this metric is confounded by simple multiplicative rescaling. For example, assume we are using a plain network that begins with a linear operation followed by batch normalization or layer normalization BID2. Then we can re-scale the input data with an arbitrary constant c and not only preserve the output of the network in the initialized state, but the entire trajectory of the parameter during training and therefore the final test error. Yet, multiplying the input by c causes the GVCS to shrink by c. Thus we can arbitrarily control the GVCS while preserving test error, and therefore the GVCS cannot be used as a direct predictor of test error. We observe a similar effect when the network output is re-scaled with a constant c. This causes the GVCS to grow by c. As long as the loss function can handle large incoming values, which softmax+cross-entropy can do at least in some circumstances, we can again control the GVCS arbitrarily without compromising performance.

This drawback shows up in even more insidious ways. Consider a plain network with activation functions of the form τ(s) = 1 + (1/k)·sin(ks). As k → ∞, τ eliminates all meaningful structure in the input data. The NLC converges to infinity to reflect this. Yet, the GVCS is stable. Examining the NLC, we find that the increase in nonlinearity is captured by the Q_j(S_x f(x, j)) term, but not by the gradient. The crux is that the variability of the network output is down-scaled in proportion to the increase in nonlinearity, and this confounds the GVCS. The same effect occurs with He-initialized plain ReLU networks as depth increases.

The second drawback is that the GVCS is also confounded by changing the width of the network. For example, consider a network that begins with a linear operation and has input dimension d_in. Then we can increase the input dimension by an integer factor c by duplicating each input dimension c times. We can approximately maintain the learning dynamics by reducing the scale of the initial weights of the first linear operator by √c and the learning rate for that operator by c. Again, this transformation leaves the NLC unchanged but reduces the GVCS by √c, allowing us again to control the GVCS without compromising performance.

Gradient vector length / Lipschitz constant While less popular than the GVCS, these two metrics have also been used as predictors of network performance. Both metrics are susceptible to multiplicative scaling just as the GVCS is, and the same examples apply. However, in contrast to the GVCS, they are not susceptible to a change in input width.

Correlation information preservation It has been claimed that preserving the correlation of two inputs as they pass through the network is essential for trainability, and hence also for a low test error. However, this metric also has an important confounder: additive bias. Assume we are using a network that employs batchnorm. Then biases in the features of the input do not significantly affect learning dynamics, as this bias will be removed by the first batchnorm operation. Yet, adding a constant c to the input can arbitrarily increase the correlation between inputs without affecting the correlation of the outputs. So, again, the degree of correlation change through the network can be increased arbitrarily without altering network performance.

Depth It has frequently been argued that depth confers power upon neural networks. Most of these works focus on finding specific functions which can be represented easily by deep networks, but which require a prohibitively large number of neurons to be represented by a shallow network. In figure 4, we plot the test error achieved by our architectures on CIFAR10 against depth. We find that there is actually a positive correlation between the two quantities. We suspect this is mainly because deeper networks tend to have a larger NLC. Of course, depth cannot be used as a direct predictor of performance, as it does not account for all the confounders discussed throughout this section.
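The multiplicative-rescaling confounder above is easy to reproduce. The following PyTorch sketch (ours; the layer sizes and the constant c are arbitrary choices) shows that rescaling the input of a linear-then-batchnorm network leaves its output, and hence its loss, essentially unchanged, while shrinking the input gradient components, i.e. the GVCS, by the factor c.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(8, 32),
    torch.nn.BatchNorm1d(32),   # per-batch statistics absorb input rescaling
    torch.nn.Tanh(),
    torch.nn.Linear(32, 4),
)
net.train()                     # use per-batch statistics, as during training

x = torch.randn(256, 8)
c = 1000.0

def loss_and_input_grad(inp):
    inp = inp.clone().requires_grad_(True)
    loss = net(inp).square().sum()
    loss.backward()
    return loss.item(), inp.grad.abs().mean().item()

loss1, g1 = loss_and_input_grad(x)
loss2, g2 = loss_and_input_grad(c * x)
print(loss1, loss2)   # near-identical: batchnorm removes the rescaling
print(g1 / g2)        # ~c: the GVCS shrinks by the factor c
```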
We introduced the nonlinearity coefficient, a measure of neural network nonlinearity that is closely tied to the relative diameter of linearly approximable regions in the input space of the network, to the sensitivity of the network output with respect to small input changes, as well as to the linear approximability of the activation functions used in the network. Because of this conceptual grounding, because its value in the randomly initialized state is highly predictive of test error while also remaining somewhat stable throughout training, because it is robust to simple network changes that confound other metrics such as raw gradient size or correlation information, and because it is cheap to compute and conceptually simple, we argue that the NLC is the best standalone metric for predicting test error in fully-connected feedforward networks. It has clear applications to neural architecture search and design, as it allows sub-optimal architectures to be discarded before training. In addition to a right-sized NLC, we also found that avoiding excessive output bias and using skip connections play important independent roles in performance.

This paper makes important contributions to several long-standing debates. We clearly show that neural networks are capable of overfitting when the output is too sensitive to small changes in the input. In fact, our random architecture sampling scheme shows that such architectures are not unlikely to arise. However, overfitting seems to be tied not to depth or the number of parameters, but rather to nonlinearity. In contrast to prior work, we find that a very high output sensitivity does not harm trainability, but only generalization. This difference is likely caused by our very extensive learning rate search and 64 bit precision training.

While the popular guidance for architecture designers is to avoid exploding and vanishing gradients, we argue that achieving an ideal nonlinearity level is the more important criterion. While the raw gradient is susceptible to confounders and cannot be directly linked to meaningful network properties, the NLC captures what appears to be a deep and robust property. It turns out that architectures that were specifically designed to attain a stable gradient, such as He-initialized ReLU networks, in fact display a divergent NLC at great depth.

It has been argued that the strength of deep networks lies in their exponential expressivity. While we show that the NLC indeed exhibits exponential behavior, we find this property to be largely harmful, not helpful, a conclusion also reached in prior work. While very large datasets likely benefit from greater expressivity, in our study such expressivity only leads to a lack of generalization rather than improved trainability. In fact, at least in fully-connected feedforward networks, we conjecture that great depth does not confer significant practical benefit.

In future work, we plan to study whether the ideal range of NLC values we discovered for our three datasets (1 ≲ NLC ≲ 3) holds also for larger datasets and, if not, how we might predict this ideal range a priori. We plan to investigate additional causes for why certain architectures perform badly despite a right-sized NLC, as well as extend our study to convolutional and densely-connected networks. We are interested in studying the connection of the NLC to e.g. adversarial robustness, quantizability, sample complexity, training time and training noise.
Finally, unfortunately, we found the empirical measurement of the NLC to be too noisy to conclusively detect an underfitting regime. We plan to study this regime in future work.

The goal of this section is to provide an intuitive, graphical explanation of the NLC, in addition to the mathematical derivation and analysis in section 3, for readers interested in developing a better intuition of this concept. In figure 5, we illustrate the meaning of the NLC in the case of an example function f with a single input and output dimension, and a bounded domain D and co-domain F. f is a simple sin curve, shown in blue. x_1 and x_2 are two sample inputs. We plot the location of (x_1, f(x_1)) in red and the location of (x_2, f(x_2)) in olive. The thick red and olive lines correspond to the local linear approximations of f at x_1 and x_2 respectively, which are simply the tangent lines of the blue curve. The shaded olive and red regions correspond to the intervals in which the local linear approximations fall inside the co-domain F.

It is easy to check that the proportions of the domain covered by the red interval and the olive interval are

diameter(F) / (|f'(x_1)| · diameter(D)) and diameter(F) / (|f'(x_2)| · diameter(D))

respectively. The insight behind the NLC is that both linear approximations can only be accurate while they remain inside their respective shaded area, or at least close to it. This is evidently true in both cases, as both tangent lines quickly move away from the co-domain outside the shaded region. In the case of x_2, this bound is also tight, as the tangent tracks f closely everywhere in the olive region. However, in the case of x_1, the bound is loose, as the red line is completely decoupled from f throughout a large part of the red region. The inverse value, |f'(x)| · diameter(D) / diameter(F), can be viewed as the number of shaded regions required to cover the entire domain. The NLC is simply the generalization of this concept to multiple dimensions, where the diameter is proxied by the average distance of two points, and the expectation is taken over the data distribution. It is worth noting that the NLC attempts to measure the ratio of the diameters of the domain and the linearly approximable region, not the ratio of volumes. Informally speaking, the number of linearly approximable regions required to cover the domain behaves as NLC^{d_in}.

In this section, we illustrate the function computed by neural networks at varying levels of nonlinearity. Specifically, in table 2, we depict the function computed by plain, fully-connected, He-initialized batchnorm-ReLU networks at seven different depths in their randomly initialized state. We set d_out = 3 and set the width of all other layers to 100. We then generated three 100-dimensional Gaussian random inputs x, x′ and x″. We associated each point (a, b, c) that lies on the unit sphere in R^3, i.e. that has a^2 + b^2 + c^2 = 1, with the value ax + bx′ + cx″. We call the sphere of points (a, b, c) associated with these inputs the "input sphere". We propagate each of those inputs forward through the network. We obtain a 3-dimensional output, which we divide by its length. Now the output lies on the unit sphere in R^3. Each point on that "output sphere" is associated with a color as shown in figure 6.
Finally, we color each point on the input sphere according to its respective color on the output sphere. These colored input spheres are shown in table 2 as azimuthal projections. The RGB values of colors on the output sphere are chosen so that the R component is largest whenever the first output neuron is largest, the G component is largest whenever the second output neuron is largest and the B component is largest whenever the third output neuron is largest. If we imagine that the output is fed into a softmax operation for 3-class classification, then "purer" colors correspond to more confident predictions. For comparison, we show the NLC on CIFAR10 for batchnorm-ReLU networks of the same depth (median of 10 random initializations). We find that as the depth and the NLC of the network increase, the color, and thus the value of the output, changes more and more quickly. Of course, this chaotic behavior of the output correspondingly implies smaller linearly approximable regions.

In this section, we expand upon findings from our large-scale empirical study that were outlined in section 4. One of the hallmarks of our study was the fact that we conducted an exhaustive search over the starting learning rate for training with SGD. We trained our 750 architectures with 40 different starting learning rates each. Those learning rates formed a geometric sequence with spacing factor 3. The sequence was not the same for each architecture. In fact, the smallest of the 40 learning rates was chosen so that the weight update could still be meaningfully applied in 32 bit precision. See section E.2 for details. Of course, this was simply a heuristic, with the aim of providing a range of learning rates that would contain the ideal learning rate with very high probability. To verify that this goal was achieved, in figure 7A, we plot a histogram of the index of the training run that yielded the lowest validation error for CIFAR10. The training run with index 1 used the lowest starting learning rate, whereas the training run with index 40 used the largest starting learning rate. Note that we did not plot architectures that did not attain a test error of under 80%, i.e. a non-random test error, as for those architectures the learning rate was not chosen meaningfully.

(Figure 7 caption: Architectures which did not achieve a better-than-random test error were omitted in (A), and architectures that did not achieve a better-than-random training error were omitted in (B). We set those thresholds at 80% for CIFAR10 (10 different labels) and 50% for waveform-noise (3 different labels).)

We find that while a wide range of training run indices was chosen, there was a wide margin on each side of training runs that were never chosen. This confirms that, with high probability, we found the ideal learning rate for each architecture that has the potential to generalize. We also retrained 50 randomly chosen waveform-noise architectures without applying early stopping based on the validation error. Instead, we continued training to determine the lowest training classification error that could be achieved. The results were plotted in figure 3. For this experiment, we used 60 training runs. Here, the smallest starting learning rate was chosen so that the weight updates could still be meaningfully applied in 64 bit precision. In figure 7B, we find that indeed the range of training run indices used is much wider. For 2 architectures, the chosen training run falls outside the range of the original 40 training runs.
We hypothesized that architectures that have very high NLCs and cannot generalize are nonetheless trainable with very small learning rates in 64 bit precision. This is precisely what we find in practice. In figure 8, we plot the NLC in the randomly initialized state against the starting learning rate corresponding to the chosen training run. Figure 8A depicts learning rates which minimized the validation error on CIFAR10, and figure 8B depicts learning rates which minimized the training error on waveform-noise. In other words, we show the same training runs as in figure 7, and again we removed architectures for which generalization / training failed completely, respectively. While the range of learning rates that lead to good generalization falls in a comparatively smaller range, some architectures can be trained successfully with a learning rate as small as 5e-18! In general, the reason for this trend is that a large NLC is associated with large gradients, and these gradients need to be down-scaled to keep weight updates bounded. Intriguingly, figure 8 suggests that as the NLC grows, the learning rate should decay as the square of the NLC. This observation mirrors prior findings that the magnitude of weight updates should scale inversely as the gradient increases, which would require the learning rate to scale with the inverse square.

In figure 3, we show that a high bias before training, defined as Q_{j,x} f(x, j) / Q_j(S_x f(x, j)), leads to a high test error. In figure 9, we investigate this quantity further. There we find that, just like the NLC, the bias decreases during training in many cases. In fact, it often reaches a near-zero value. We also find that this is in fact necessary for the network to achieve a better-than-random test error at all. This is not entirely surprising for a dataset like CIFAR10, where each label occurs equally frequently. Figure 9 further shows that many architectures (those near the bottom of the chart) attain a high bias but a low NLC. This confirms that a high bias makes an independent contribution to test error prediction. All bias values were computed on the training set. Finally, we note that at the time of writing, we are working on an "improved" version of SGD that can successfully train high-bias architectures and enable them to generalize. Discussing this algorithm, as well as the other signals that exist in figure 9 (many architectures cluster around 1D subspaces in all three graphs ...), unfortunately, goes beyond the scope of this paper.

In figure 2, we show in blue all architectures that have skip connections, whereas we show in black architectures without skip connections. In that figure, we find that architectures with skip connections not only exhibit a lower NLC overall, but also tend to outperform architectures without skip connections that have similar NLCs. As it can be hard to distinguish colors in a scatter plot, in figure 10, we plot the results for both types of architectures separately. Both the first row of graphs (A/B/C) and the second row of graphs (D/E/F) are identical to figure 2, except the top row shows only architectures without skip connections and the bottom row shows only architectures with skip connections. The difference in behavior is clear.
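For concreteness, the output bias metric used above, Q_{j,x} f(x, j) / Q_j(S_x f(x, j)), can be computed as follows (our sketch; the synthetic outputs stand in for a real network).

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in network outputs f(x, j): rows are inputs, columns are output neurons.
F = rng.standard_normal((10000, 10)) + 5.0   # outputs with a large additive bias

def Q(v):
    return np.sqrt(np.mean(np.asarray(v) ** 2))

bias = Q(F) / Q(F.std(axis=0))   # Q_{j,x} f(x,j) / Q_j(S_x f(x,j))
print(bias)                       # ~5.1: outputs vary little relative to their mean
```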
The last layer does not contain an activation function. Some architectures have skip connections, which always bypass two layers. They start after either the linear operation or the normalization operation. Where the widths of the connected layers differ, the incoming signal is multiplied by a fixed random projection matrix; the multiplier is chosen to approximately preserve the scale of the incoming signal in the forward pass. This projection matrix is not trained and remains fixed throughout training.

Each architecture was selected independently at random via the following procedure.

• depth: Depth is chosen uniformly from the set of odd numbers between and including 3 and 49. We used odd numbers to avoid conflicts with our skip connections, each of which bypasses two linear operations, while none bypasses the first linear operation.

• width: Width was chosen automatically as a function of depth so that the number of trainable parameters in the network is approximately 1 million. The width of all layers except the input and output layer, which are determined by the data, is identical.

• initial weights and biases: Weight matrices are initialized as random orthogonal matrices scaled by the multiplier max(1, √(d_outgoing / d_incoming)), so that the scale of the signal is approximately preserved as it passes forward through the weight matrix, which is a well-accepted practice for avoiding exponential growth or decay in the forward pass, used in e.g. He initialization and SELU initialization. With a probability of 50%, we initialize all trainable bias vectors as zero vectors, and with a probability of 50%, we initialize their components as independent zero mean Gaussians with a variance of 0.05, a value taken from the literature. If the biases are initialized as nonzero, we scale the weight matrices with a factor of √0.95 to approximately preserve the scale of the output of the entire linear operation. Finally, with a 25% probability, we then additionally multiply all weight matrices and biases jointly by 0.9 and, with a 25% probability, we multiply them jointly by 1.1.

• normalization: With a 50% probability, no normalization is used. With a 25% probability, batch normalization is used. With a 25% probability, layer normalization BID2 is used. Normalization operations do not use trainable bias and variance parameters.

• activation function: We select one of the 8 activation functions shown in table 1. We select ReLU, SELU and Gaussian with probability 2/11 each and tanh, even tanh, sigmoid, square and odd square with probability 1/11 each. We downweighted the probabilities of tanh, even tanh and sigmoid as we considered them similar. The same holds for square and odd square. After choosing the initial activation function, we added additional modifications. If the initial activation function is τ(s), we replace it by c·τ(ds + t) + b. First, d and t are chosen. d is 1 with a 50% probability, 1.2 with a 25% probability and 0.8 with a 25% probability. t is 0 with a 50% probability, 0.2 with a 25% probability and -0.2 with a 25% probability. Then, with a 50% probability, we set b so that if s follows a unit Gaussian distribution, the activation function is unbiased, i.e. E_{s∼N(0,1)} τ(s) = 0. Debiasing follows the example of BID1. Finally, we always set c so that if s is a unit Gaussian, then Q_s τ(s) = 1. Again, this follows the principle of avoiding exponential growth / decay in the forward pass as mentioned above. d, b, c and t are fixed throughout training.

• skip connections: With a 50% probability, no skip connections are used. With a 25% probability, skip connections of strength 1 are used, as is usually done in practice. With a 25% chance, we choose a single value uniformly at random between 0 and 1 and set the strength of all skip connections to that value.
With a 50% chance, all skip connections start after the linear operation. With a 50% chance, they start after the normalization operation. We introduced these variations to obtain a more diverse range of NLCs amongst networks with skip connections. Note that normalizing the signal between skip connections, rather than only within a skip block, reduces the gradient damping of the skip connections, for reasons related to the k-dilution principle.

After sampling, we apply one step of post-processing. All networks that have square or odd square activation functions, or skip connections, and that do not have normalization, were assigned either batch normalization or layer normalization with 50% probability each. This is, again, to avoid exponential instability in the forward pass. This post-processing led to the following changes in aggregate frequencies: no normalization 20.4%, batchnorm 39.8%, layer norm 39.8%.

We sampled 250 architectures for each of three datasets. Results pertaining to those architectures are shown in figures 1, 2, 3, 4, 7, 8 and 9.

We used softmax+cross-entropy as the loss function, as is done in the overwhelming majority of practical cases. Crucially, after initializing each architecture, we measured the scale c of the activations fed into the loss function, i.e. c = Q_{x,j} f(x, j). We then had the loss function divide the incoming activations by c before applying the softmax. This was done so that the loss function, which yields very different training dynamics when presented with inputs of different sizes, did not confound the outcomes of our study. We believe that the preference of softmax+cross-entropy for outputs of a certain size has confounded the results of studies in the past. c remained fixed throughout training.

When designing our sampling scheme, we attempted to strike a balance between relevance and diversity. On the one hand, we did not want to include architectures that are pathological for known reasons. We initialized all architectures so that the signal could not grow or decay too quickly in the forward pass. Also, we always used orthogonal initialization. The advantages of orthogonal initialization over Gaussian initialization, at least for fully-connected layers, have in our opinion been demonstrated to the point where we believe this should be the default going forward. On the other hand, we introduced many variations, such as activation function dilation and shift, and skip connection strength, that made our architectures more diverse. While those variations are not necessarily common in practice, we made sure that we never deviated from the "default case" by a large amount in any particular area.
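The sampling scheme above can be summarized by the following sketch (ours; the field names and config-dict format are illustrative, not the authors' code).

```python
import random

def sample_architecture(rng: random.Random) -> dict:
    depth = rng.choice(range(3, 50, 2))                     # odd depths in [3, 49]
    # 2/11 weight for ReLU/SELU/Gaussian, 1/11 for the remaining five.
    acts = ["ReLU", "SELU", "Gaussian"] * 2 + \
           ["tanh", "even tanh", "sigmoid", "square", "odd square"]
    cfg = {
        "depth": depth,
        "activation": rng.choice(acts),
        "dilation": rng.choice([1.0, 1.0, 1.2, 0.8]),       # d
        "shift": rng.choice([0.0, 0.0, 0.2, -0.2]),         # t
        "debias": rng.random() < 0.5,                       # whether b is set
        "bias_init_var": rng.choice([0.0, 0.05]),
        "global_scale": rng.choice([1.0, 1.0, 0.9, 1.1]),
        "normalization": rng.choice([None, None, "batch", "layer"]),
        "skip_strength": rng.choice([None, None, 1.0, rng.random()]),
        "skip_start": rng.choice(["after linear", "after normalization"]),
    }
    # Post-processing: unstable configurations always receive normalization.
    if cfg["normalization"] is None and (
            cfg["activation"] in ("square", "odd square")
            or cfg["skip_strength"] is not None):
        cfg["normalization"] = rng.choice(["batch", "layer"])
    return cfg

print(sample_architecture(random.Random(0)))
```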
D DATASETS

We wanted to conduct experiments on three different datasets. First, we chose MNIST and CIFAR10, as they are the two most popular datasets for evaluating deep neural networks and are small enough that we could conduct a very large number of training runs with the computational resources we had available. The MNIST dataset is composed of 28 by 28 black and white images of handwritten digits, each associated with a digit label between 0 and 9 (citation: MNIST-dataset). The CIFAR10 dataset is composed of 32 by 32 color images of objects from 10 categories, each associated with a category label (citation: CIFAR10-dataset).

We decided to choose our third dataset from the UCI repository of machine learning datasets. The SELU nonlinearity, which has since become popular, was recently validated on a large number of datasets from this repository, and we wanted to choose a dataset that was also used in that evaluation. To decide upon the specific dataset, we applied the following filters:

• The most frequent class should not be more than 50% more frequent than the average class.
• The dataset should contain between 1,000 and 100,000 datapoints.
• Datapoints should contain at least 10 features.
• The dataset should not be composed of images, as we already study 2 image datasets.
• The dataset should not contain categorical or very sparse features.
• We only considered datasets that we were actually able to find on the repository website.

After applying all these filters, we were left with two datasets: waveform and waveform-noise. They are very similar. We chose the latter because of its greater number of input features. The inputs of the waveform-noise dataset are composed of wave attributes. Each input is associated with one of three category labels based on the wave type (citation: waveform-noise dataset).

For waveform-noise, we normalized the mean and variance of the features. We processed CIFAR10 via the following procedure.

1. We normalize the mean and variance of each datapoint.
2. We normalize the mean of each feature.
3. Via PCA, we determine the number of dimensions that hold 99% of the variance. That number is 810.
4. We map each datapoint to an 810-dimensional vector via multiplication with a 3072 × 810 submatrix of a 3072 × 3072 uniformly random orthogonal matrix.
5. Finally, we multiply the entire dataset with a single constant so that we obtain Q_{x,i} x(i) = 1.

We used the exact same procedure for MNIST, except that the number of dimensions of the final dataset was 334 instead of 810. During preliminary experiments, we found that this pre-processing scheme led to faster training and lower error values than training on the raw data where only the features are normalized. The reason we designed this scheme in the first place was to reduce input dimensionality, so that we could avoid an excessive amount of computation being allocated to the first layer, which would have strained our computational budget.

The MNIST dataset contains 60,000 training data points and 10,000 test data points. The training data was randomly split into a training set of size 50,000 and a validation set of size 10,000. The CIFAR10 dataset contains 50,000 training data points and 10,000 test data points. The training data was randomly split into a training set of size 40,000 and a validation set of size 10,000. The waveform-noise dataset contains 5,000 data points, which were randomly split into a training set of size 3,000, a validation set of size 1,000 and a test set of size 1,000.

As mentioned, for CIFAR10, our input dimensionality was 810. For MNIST, it was 334. For waveform-noise, it was 40. For CIFAR10 and MNIST, the output dimensionality / number of classes was 10. For waveform-noise, it was 3.
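The preprocessing pipeline can be sketched as follows (our code; the QR-based orthogonal matrix is a convenient stand-in for an exactly uniformly random one).

```python
import numpy as np

def preprocess(X, q=0.99, seed=0):
    """Sketch of the 5-step preprocessing procedure described above."""
    rng = np.random.default_rng(seed)
    # 1. Normalize the mean and variance of each datapoint.
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    # 2. Normalize the mean of each feature.
    X = X - X.mean(axis=0, keepdims=True)
    # 3. Number of PCA dimensions holding a fraction q of the variance.
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), q)) + 1
    # 4. Project onto k columns of a random orthogonal matrix.
    full = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))[0]
    X = X @ full[:, :k]
    # 5. Rescale so that Q_{x,i} x(i) = 1.
    return X / np.sqrt(np.mean(X ** 2))

X = np.random.default_rng(1).standard_normal((1000, 64))
print(preprocess(X).shape)
```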
E EXPERIMENTAL DETAILS

E.1 LINEARLY APPROXIMABLE REGIONS STUDY (FIGURE 1)

We begin by defining the relative diameter of a linearly approximable region for a given network f, input x from distribution D, input direction u ∈ R^{d_in} and output direction v ∈ R^{d_out}. As in the definition of the NLC, we use 2Q_i(S_x x(i)) as a proxy for the diameter of the input space. Starting from x, traversing the input space k times for some fractional value k in the direction of u yields x″ := x + k · (u / ||u||_2) · 2Q_i(S_x x(i)). The radius of the linearly approximable region, relative to the size of the input space, is then the largest value of k such that the linear approximation induced by the Jacobian at x is still close to the true value of f at x′ := x + k · (u / ||u||_2) · Q_i(S_x x(i)). Correspondingly, we take the diameter of the linearly approximable region, relative to the size of the input space, as the largest value of k such that the linear approximation induced by the Jacobian at x is still close to the true value of f at x″. (Note that x″ is simply twice as far from x as x′.) Specifically, we define "close" as

1/2 ≤ (v^T J(x)(x″ − x)) / (v^T (f(x″) − f(x))) ≤ 2.

In plain words, the change in function value in the direction of v predicted by the local linear approximation must be at least half and at most 2 times the actual change in function value.

We now generalize this quantity to minibatches, in order for it to be meaningfully defined for networks using batchnorm. Again, consider some network f. Now also consider a data batch X ∈ R^{d_in×b} containing b randomly drawn datapoints from D. Also consider an input direction matrix U ∈ R^{d_in×b} and an output direction matrix V ∈ R^{d_out×b}. Now we define the relative diameter of the linearly approximable region as the largest k such that, when setting X″ = X + k · (U / ||U||_2) · 2Q_i(S_x x(i)), we have

1/2 ≤ ⟨vec(V), J(X) vec(X″ − X)⟩ / ⟨vec(V), vec(f(X″) − f(X))⟩ ≤ 2.

Here, f can be taken to be applied independently to each column of X if it does not use batchnorm, and is taken to be applied jointly to all inputs in X if f does contain batchnorm. The "largest k" is computed by starting with k = 10^{-9} and then checking the condition for increasing k until it fails. The values of k we checked formed a geometric series with spacing factor 10^{1/10}. We could not reliably check smaller values of k due to numerical underflow, which is why architectures whose linearly approximable regions have a relative diameter below 10^{-9} are not shown in figure 1.

For each architecture, we considered a single random initialization. All values were computed in the randomly initialized state. No training was conducted. We use 10 minibatches of size 250 from the respective dataset and draw 10 random Gaussian U and 10 random Gaussian V. We obtain one relative region size value for each of 10 * 10 * 10 = 1000 configurations. Finally, in figure 1, we report the median across those 1000 values for each architecture. The NLC is computed as described in section G.
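To illustrate the measurement procedure, the following sketch (ours) applies the geometric k-sweep to a toy differentiable function with an analytic Jacobian, standing in for a real network and omitting the minibatch generalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy map f: R^10 -> R^10 with analytic Jacobian, standing in for a network.
W = np.linalg.qr(rng.standard_normal((10, 10)))[0]
f = lambda x: np.tanh(W @ x)
jac = lambda x: (1 - np.tanh(W @ x) ** 2)[:, None] * W   # diag(tanh') @ W

def relative_diameter(x, u, v, input_diam, factor=10 ** 0.1):
    """Largest k (geometric sweep from 1e-9) for which the linear
    approximation at x still predicts the change toward x'' within 2x."""
    k = 1e-9
    while True:
        x2 = x + k * u / np.linalg.norm(u) * input_diam
        ratio = (v @ (jac(x) @ (x2 - x))) / (v @ (f(x2) - f(x)))
        if not (0.5 <= ratio <= 2.0):
            return k
        k *= factor

x, u, v = rng.standard_normal(10), rng.standard_normal(10), rng.standard_normal(10)
print(relative_diameter(x, u, v, input_diam=2.0))
```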
E.2 PREDICTIVENESS STUDY (FIGURES 2, 3, 4, 7, 8, 9 AND 10)

For each architecture, we considered a single random initialization. We trained the architectures with SGD using minibatches of size 250. To ensure that there is no bias with regard to the learning rate, we tuned the starting learning rate independently for each architecture by conducting a large number of training runs with various starting learning rates. A training run is conducted as follows. We train with the starting learning rate until the validation classification error (VCE) has not decreased for 10 epochs. Then we rewind the state of the network by 10 epochs (to when the lowest VCE was achieved), divide the learning rate by 3 and continue training until the VCE has not improved for 5 epochs. We divide the learning rate by 3 again, rewind and continue training until the VCE has again not improved for 5 epochs. This process continues until the step size has been divided by 3 ten times. When the VCE has again not improved for 5 epochs, we rewind one last time and terminate the training run.

For each architecture we completed 40 total training runs with 40 different starting learning rates that form a geometric series with spacing factor 3. For each architecture, the smallest starting learning rate considered was computed as follows. We ran the SGD optimizer for 1 epoch with a learning rate of 1 without actually applying the updates computed. For the weight matrix in each layer, we thus obtained one update per minibatch. Let δW_{lb} denote the update obtained for layer l and minibatch b, and W_l the initial value of the weight matrix in layer l. Finally, we used the value

10^{-8} · min_{l,b} ( ||W_l||_F / ||δW_{lb}||_F )

as our smallest starting learning rate. The rationale behind this choice was that no individual weight matrix update obtained with the smallest starting learning rate would perturb any weight matrix during any iteration by more than approximately a relative factor of 10^{-8}. We chose 10^{-8} specifically so that our smallest starting learning rate would be less than the smallest learning rate that can be meaningfully used under 32 bit precision. Nonetheless, we trained all networks using 64 bit precision. Of course, this choice of smallest starting learning rate is merely a heuristic. We validated this heuristic by checking that no architecture that obtained a non-random test error attained its lowest validation error with either the smallest five or the largest five starting learning rates. This condition was fulfilled for all architectures and datasets.

Henceforth, we refer to the 'trained network' as the network that was obtained during the training run that yielded the lowest validation classification error, and to the 'initial network' as the network in the randomly initialized state. In figure 7, we show which training runs were used and, in figure 8, we show which learning rates were used, plotted against the NLC of the initial network. The NLC was computed as in section G.

In figure 2, we plot the test error of the trained network against the NLC of the initial network, again computed as in section G. We mark in red all points corresponding to architectures for which 1000 · Q_j(S_x f(x, j)) < Q_{j,x} f(x, j) for the initial network. We mark in blue all points corresponding to architectures that have skip connections. In figure 4, we plot depth versus test error of the trained network. In figure 3, we plot the bias value Q_{j,x} f(x, j) / Q_j(S_x f(x, j)) of the initial network against the test error of the trained network. In figure 3, we also plot the NLC of the initial network against the NLC of the trained network, as well as the NLC of the trained network against the test error of the trained network. In both 3B and 3C, the NLC was computed on the training set. However, the value of the NLC computed on the test set was very similar. We further compare the bias of the initial network against the bias of the trained network, against test error and against the NLC of the initial network in figure 9. The bias and NLC were always computed on the training set. In figure 10, we break down the results of figure 2 into architectures with skip connections and architectures without skip connections.

We then selected 50 random architectures from our 250 waveform-noise architectures and trained them again, with two changes to the protocol: we reduced the learning rate by a factor of 3 only once the training classification error had not decreased for 10 / 5 epochs respectively, and we considered 60 different step sizes which formed a geometric series with spacing factor 3 and start value 10^{-16} · min_{l,b}(||W_l||_F / ||δW_{lb}||_F). Therefore, we considered even the smallest step size that was meaningful for 64 bit precision training. This change allowed us to successfully train even architectures with a very high NLC. See section B.1 for an analysis on this point.
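The smallest-starting-learning-rate heuristic can be sketched as follows (our PyTorch code; the model and data are placeholders).

```python
import torch

def smallest_starting_lr(model, batches, loss_fn, rel=1e-8):
    """Sketch of the heuristic above: run one pass at learning rate 1 without
    applying updates, then scale so that no update perturbs a weight matrix by
    more than a relative factor of `rel` (10^-8 for the 32-bit variant)."""
    ratios = []
    weights = [p for p in model.parameters() if p.dim() == 2]  # weight matrices
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for W in weights:            # the SGD update at lr=1 is simply -grad
            ratios.append((W.norm() / W.grad.norm()).item())
    return rel * min(ratios)

model = torch.nn.Sequential(torch.nn.Linear(20, 50), torch.nn.Tanh(),
                            torch.nn.Linear(50, 3))
data = [(torch.randn(250, 20), torch.randint(0, 3, (250,))) for _ in range(4)]
print(smallest_starting_lr(model, data, torch.nn.CrossEntropyLoss()))
```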
The reason we only trained 50 architectures for this scenario is that training can take a very long time without using the validation set for early stopping, leading to considerable computational expense. The results are presented in figure 3.

Finally, for figure 3, we re-trained our 250 waveform-noise architectures with Adam instead of SGD. The protocol was the same (40 training runs), except that before obtaining our measurements for δW_{lb}, we first ran Adam for 4 epochs, again without applying updates, in order to warm-start the running averages. Only then did we run it for another epoch during which we actually gathered values for δW_{lb}. Again, we verified that the first and last 5 training runs were never used.

E.3 ERROR-PRESERVING PERTURBATION STUDY

We computed the maximal error-preserving perturbation shown in figure 3 similarly to the relative diameter of the linearly approximable region in section E.1. The difference is that instead of requiring that the local linear approximation be close to the true function, we required that the test error over the path from X to X″ be at most 5% higher than the test error at X. The test error 'over the path' is defined as the fraction of inputs in the batch that were correctly classified everywhere on the line from X to X″. Again, we started with k = 10^{-9} and increased it by a factor of 10^{1/10} at each step, checking whether each input is correctly or incorrectly classified. We chose the 5% threshold so that architectures with a test error of around 90% on CIFAR10 / MNIST would yield finite outcomes. The values shown in figure 3 are the median over 10 * 10 = 100 values obtained from 10 random minibatches of size 250 and 10 Gaussian random direction matrices U. The random direction matrix V used in section E.1 does not come into play here.

E.4 ACTIVATION FUNCTION METRICS (TABLE 1)

NLC_τ was computed as defined in section 5, and NLC_τ^48 is simply the exponentiated value. The linear approximation error is computed as

( Q_{s∼N(0,1)}(τ(s) − τ̃(s)) / Q_{s∼N(0,1)} τ̃(s) )^2,

where τ̃ is the best linear fit to τ for inputs drawn from N(0,1), i.e. τ̃ = argmin_{τ̃ linear} Q_{s∼N(0,1)}(τ(s) − τ̃(s)). The NLC was computed as in section G. We show the median across 10 random initializations. The values for different initializations show little variation, except for 49-layer networks with square or odd square activation functions.

F PROOFS

Let A be an m × n matrix and u a random n-dimensional vector of fixed length and uniformly random orientation. Then we have

Q_u ||Au||_2 = √(E_u[u^T A^T A u]) = √((||u||_2^2 / n) · Tr(A^T A)) = (||A||_F / √n) · ||u||_2,

which proves proposition 1. Further, we have

Q_{x,x′} ||x − x′||_2 = √(Σ_i E[(x(i) − x′(i))^2]) = √(2 Σ_i Var_x x(i)) = √(2 d_in) · Q_i(S_x x(i)),

and analogously Q_{x,x′} ||f(x) − f(x′)||_2 = √(2 d_out) · Q_j(S_x f(x, j)). Here, both x and x′ are drawn independently from D. Substituting these identities into the nonlinearity estimate of section 3 and taking the quadratic expectation over x yields the NLC of definition 1.

G COMPUTING THE NLC

Both Q_i(S_x x(i)) and Q_j(S_x f(x, j)) can be computed exactly over the dataset in trivial fashion if f does not use batchnorm. If f does use batchnorm, however, Q_j(S_x f(x, j)) depends on the batch selection. In this case, we replace Q_j(S_x f(x, j)) in the definition of the NLC by Q_j(S_{X,β} f(X, j, β)). Here, X is a data batch matrix of dimensionality d_in × b, where b is the minibatch size, each column of X is an independently drawn input from D, and β is uniformly drawn from {1, .., b}. f(X, j, β) is the (j, β)'th entry of the output of f when X is jointly propagated through the network. In plain terms, we generalize the NLC to batchnorm networks by joining each input x with every possible minibatch X.
We compute Q_j(S_{X,β} f(X, j, β)) in practice by simply dividing the dataset once into minibatches of size 250 and then taking the standard deviation of all output activation values observed during this single pass.

Now we turn our attention to Q_x ||J(x)||_F. Before we tackle this quantity, we show a property of the Frobenius norm similar to that shown in proposition 1. Let A be an m × n matrix and let u be an m-dimensional unit Gaussian row vector. Then we have

Q_u ||uA||_2 = √(E_u[u A A^T u^T]) = √(Tr(A A^T)) = ||A||_F.

So, specifically, we have Q_x ||J(x)||_F = Q_{x,u} ||u J(x)||_2 for unit Gaussian u. Therefore, we can estimate Q_x ||J(x)||_F stochastically by replacing the loss gradient at the output layer with Gaussian random vectors during backpropagation and taking the quadratic expectation of the resulting input gradient. In practice, we compute Q_x ||J(x)||_F by sampling 100 minibatches of size 250 and clamping independently drawn unit Gaussian random vectors at the output each time.

Finally, let's look at Q_x ||J(x)||_F for batchnorm networks. Again, this quantity is dependent on batch selection. In fact, in the definition of the NLC, we replace Q_x ||J(x)||_F by (1/√b) · Q_X ||J(X)||_F, where J(X) is a bd_out × bd_in matrix that contains the gradient of every component of f(X) with respect to every component of X. It is easy to check that this value equals Q_x ||J(x)||_F for networks without batchnorm. In that case, J(X) simply takes the form of a block-diagonal matrix with the individual J(x) from the batch forming the blocks. As before, we have Q_X ||J(X)||_F = Q_{u,X} ||u J(X)||_2, where u is a bd_out-dimensional unit Gaussian row vector. Hence, just as before, we can stochastically compute Q_X ||J(X)||_F by sampling random minibatches and backpropagating what is now effectively a d_out × b-dimensional unit Gaussian matrix, and computing the quadratic expectation of the resulting input gradient. As before, we sample 100 random minibatches of size 250.
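The stochastic estimator just described amounts to a vector-Jacobian product with Gaussian probes. A minimal PyTorch sketch (ours, ignoring the batchnorm generalization) follows.

```python
import torch

def estimate_Q_jacobian_frobenius(model, batches, n_samples=1):
    """Estimate Q_x ||J(x)||_F via Q_{x,u} ||u J(x)||_2: backpropagate unit
    Gaussian vectors from the output and take the quadratic expectation of
    the resulting input gradients."""
    sq_norms, count = 0.0, 0
    for x in batches:
        for _ in range(n_samples):
            xr = x.clone().requires_grad_(True)
            out = model(xr)
            u = torch.randn_like(out)                 # unit Gaussian at the output
            (g,) = torch.autograd.grad(out, xr, grad_outputs=u)
            sq_norms += g.pow(2).sum(dim=1).sum().item()  # ||u J(x)||_2^2 per input
            count += x.shape[0]
    return (sq_norms / count) ** 0.5

model = torch.nn.Sequential(torch.nn.Linear(20, 50), torch.nn.Tanh(),
                            torch.nn.Linear(50, 3))
batches = [torch.randn(250, 20) for _ in range(10)]
print(estimate_Q_jacobian_frobenius(model, batches))
```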
We introduce the NLC, a metric that is cheap to compute in the network's randomly initialized state and is highly predictive of generalization, at least in fully-connected networks.
1,443
scitldr
This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHNs) proposed by BID8 and discloses an interesting finding about GHNs, i.e. that stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer. The revealed equivalence, on the theoretical side, can be regarded as a constructive manifestation of the universal approximation theorem BID6; BID15. In practice, it has profound and multi-fold implications. For network visualization, the constructed deep epitomes at each layer provide a visualization of the network's internal representation that does not rely on the input data. Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools. Despite their great success in recent years, neural networks have long been criticized for their black-box natures and the lack of comprehensive understanding of their underlying mechanisms, e.g. in BID3; BID12; BID30; BID29. The earliest effort to interpret neural computing in terms of logic inferencing indeed dated back to the seminal paper of BID24, followed by recent attempts to provide explanations from a multitude of perspectives (reviewed in Section 2).

As an alternative approach to deciphering the mysterious neural networks, various network visualization techniques have been actively developed in recent years (e.g. BID11; BID28 and references therein). Such visualizations not only provide general understanding about the learning process of networks, but also disclose operational instructions on how to adjust network architecture for performance improvements. The majority of visualization approaches probe the relations between input data and neuron activations, by showing either how neurons react to some sample inputs or, reversely, how desired activations are attained or maximized with regularized reconstruction of inputs BID7; BID20; BID36; BID31; BID23; BID33; BID0. Input data are invariably used in visualization to probe how the information flow is transformed through the different layers of neural networks. Although insightful, visualization approaches as such have to face a critical open question: to what extent can the conclusions drawn from the analysis of sample inputs be safely applied to new data?

In order to furnish a confirmatory answer to the above-mentioned question, ideally, one would have to employ a visualization tool that is independent of input data. This ambitious mission appears impossible at first glance -- the final neuron outputs cannot be readily decomposed as the product of inputs and neuron weights because the thresholding in ReLU activations is input data dependent. By following the principle of fuzzy logic, BID8 recently demonstrated that ReLUs are not essential and can be removed from the so-called generalized hamming network (GHN). This simplified network architecture, as reviewed in section 3, facilitates the analysis of neuron interplay based on connection weights only. Consequently, stacked convolution layers can be merged into a single hidden layer without taking into account inputs from previous layers. Equivalent weights of the merged GHN, which is called a deep epitome, are computed analytically without resorting to any learning or optimization processes. Moreover, deep epitomes constructed at different layers can be readily applied to new data to extract hierarchical features in just one step (section 4).
Despite the great success in recent years, neural networks have long been criticized for their black-box nature, e.g. in BID3: "they capture hidden relations between inputs and outputs with a highly accurate approximation, but no definitive answer is offered for the question of how they work". The pioneering work of BID24 attempted to interpret neural computing in terms of logic inferencing, followed by more "recent" interpretations, e.g. in terms of the universal approximation framework BID6; BID15, restricted Boltzmann machines BID14, and information bottleneck theory BID30. Nevertheless the mission is far from complete, and the training of neural networks (especially deep ones) is still a trial-and-error based practice. The early 1990s witnessed the birth of fuzzy neural networks (FNN) BID21; BID13, which attempted to furnish neural networks with the interpretability of fuzzy logic BID34; BID37; BID2. On the other hand, neural networks have been used as a computational tool to come up with both membership functions and fuzzy inference rules BID9; BID32. This joint-force endeavour remains active in the new millennium, e.g. BID27; BID22; Nauck & Nürnberger; BID19; BID16. Nevertheless, FNNs have been largely overlooked nowadays by scholars and engineers in the machine learning (ML) community, partially due to the lack of convincing demonstrations on ML problems with large datasets. The exception is the recent BID8, which re-interpreted the celebrated ReLU and batch normalization with a novel Generalized Hamming Network (GHN) and demonstrated state-of-the-art performances on a variety of machine learning tasks. While GHNs adopted deep networks with multiple convolution layers, in this paper we will show how to merge multiple stacked convolution layers into a single yet wide convolution layer. There is abundant empirical evidence backing the belief that deep network structures are preferred to shallow ones BID10; on the other hand, it was theoretically proved by the universal approximation theorem that a single hidden layer network with non-linear activation can well approximate any arbitrary decision function BID6; BID15. Also, empirically, it was shown that one may reduce the depth and increase the width of a network architecture while still attaining or outperforming the accuracies of deep and residual networks BID35. Nevertheless, it was unclear how to convert a trained deep network into a shallow equivalent network. To this end, the equivalence revealed in Section 3 can be treated as a constructive manifestation of the universal approximation theorem. Various network visualization techniques have been actively developed in recent years, with BID7 interpreting high-level features via maximizing activation and sampling; BID20 and BID36 learning hierarchical convolutional features via energy or cost minimization; BID31 computing class saliency maps for given images; BID23 reconstructing images from CNN features with a natural image prior applied; BID33 visualizing live activations as well as deep features via regularized optimization; and BID0 monitoring prediction errors of individual linear classifiers at multiple iterations. Since all these visualization methods are based on the analysis of examples, the applicability of visualization methods to new data is questionable and no confirmatory answers are provided in a principled manner.
The name "deep epitome" is reminiscent of BID17; BID4; BID18; BID5, in which miniature, condensed "epitomes" consisting of the most essential elements were extracted to model and reconstruct a set of given images. During the learning process, the self-similarity of image(s), either in terms of pixel-to-pixel comparison or spatial configuration, was exploited and a "smooth" mapping between epitome and input image pixels was estimated. We briefly review generalized hamming networks (GHN) introduced in BID8 and present in great detail a method to derive the deep epitome of a trained GHN. Note that we follow notations in BID8 with minor modifications for the sake of clarity and brevity. According to BID8, the cornerstone notion of generalized hamming distance (GHD) is defined as g(a, b):= a ⊕ b = a + b − 2 · a · b for any a, b ∈ R. Then the negative GHD is used to quantify the similarity between neuron inputs x and weights w: DISPLAYFORM0 in which L denotes the length of neuron weights e.g. in convolution kernels, and g(w, x) is the arithmetic mean of generalized hamming distance between elements of w and x. By dividing the constant 2 L, becomes the common representation of neuron computing (w · x + b) provided that: DISPLAYFORM1 It was proposed by BID8 that neuron bias terms should follow the condition analytically without resorting to an optimization approach. Any networks that fulfil this requirement are thus called generalized hamming networks (GHN). In the light of fuzzy logic, the negative of GHD quantifies the degree of equivalence between inputs x and weights w, i.e. the fuzzy truth value of the statement x ↔ w where ↔ denotes a fuzzy equivalence relation. Moreover, g(x, x) leads to a measurement of fuzziness in x, which reaches the maximal fuzziness when x = 0.5 and monotonically decreases when x deviates from 0.5. Also it can be shown that GHD followed by a non-linear activation induces a fuzzy XOR connective BID8.When viewed in this GHN framework, the ReLU activation function max(0, 0.5−g(x, w)) actually sets a minimal hamming distance threshold of 0.5 on neuron outputs. BID8 then argued that the use of ReLU activation is not essential because bias terms are analytically set in GHNs. BID8 reported only negligible influences when ReLU was completely skipped for the easy MNIST classification problem. For more challenging CIFAR10/100 classifications, removing ReLUs merely prolonged the learning process but the final classification accuracies remained almost the same. To this end, we restrict our investigation in this paper to those GHNs which have no ReLUs. As illustrated below, this simplification allows for strict derivation of deep epitome from individual convolution layers in GHNs.3.2 GENERALIZED HAMMING DISTANCE AND EPITOME BID8 postulated that one may analyse the entire GHN in terms of fuzzy logic inference rules, yet no elaboration on the analysis was given. Inspired by the universal approximation framework, we show below how to unravel a deep GHN by merging multiple convolution layers into a single hidden layer. We first reformulate the convolution operation in terms of generalized hamming distance (GHD) for each layer, then illustrate how to combine multiple convolution operations across different layers. As said, this combination is only made possible with GHNs in which bias terms strictly follow condition. Without loss of generality, we illustrate derivations and proofs for 1D neuron inputs and weights (with complete proofs elaborated in appendix A). 
Nevertheless, it is straightforward to extend the derivation to 2D or higher dimensions, and Appendices B to D illustrate deep epitomes of GHNs trained for 2D MNIST and CIFAR10/100 image classifications. Definition 1. For two given tuples DISPLAYFORM2 1... L, where ⊕ denotes the generalized hamming distance operator. Then the product has the following properties: DISPLAYFORM3 K but they are permutation equivalent, in the sense that there exist permutation matrices P and Q such that x DISPLAYFORM4 2. non-linear: in contrast to the standard outer product, which is bilinear in each of its entries, the hamming outer product is non-linear since in general x DISPLAYFORM5 where µ ∈ R is a scalar. Therefore, the hamming outer product defined as such is a pseudo outer product. DISPLAYFORM6 M because of the associativity of GHD. This property holds for an arbitrary number of tuples. iterated operation: the definition can be trivially extended to multiple tuples DISPLAYFORM0 Definition 2. The convolution of the hamming outer product, or hamming convolution, denoted *, of two tuples is a binary operation that sums up corresponding hamming outer product entries: DISPLAYFORM1 where the subsets DISPLAYFORM2 DISPLAYFORM3 The hamming convolution has the following properties: DISPLAYFORM4 K since the partition subsets S(n) remain the same. 2. non-linear: this property is inherited from the non-linearity of the hamming outer product. DISPLAYFORM5 M since the summation of GHDs is non-associative. Note this is in contrast to the associativity of the hamming outer product. iterated operation: likewise, the definition can be extended to multiple tuples x DISPLAYFORM0 Figure 1 illustrates an example in which GHDs are accumulated through two consecutive convolutions. Note that the conversion from the hamming outer product to its convolution is non-invertible, in the sense that it is impossible to recover individual summands x k ⊕ y l from the summation DISPLAYFORM2.

Figure 2: The hamming convolution of two banks of epitomes. Remarks: a) for the inputs A, B, the number of epitomes M a must be the same as the number of channels C b; and for the output bank DISPLAYFORM1. b) the notation * refers to the hamming convolution between two banks of epitomes (see Definition 5 for details). The convolution of two single-layered epitomes is treated as a special case with all M a, C a, M b, C b = 1. c) the notation refers to the summation of multiple epitomes of the same length, which is defined in Definition 7. d) multiple (coloured) epitomes in D correspond to different (coloured) epitomes in B; and different (shaded) channels in D correspond to different (shaded) channels of inputs in A.

As proved in proposition 4, it is possible to compute the convolution of tuples in two (or more) stacked layers without explicitly recovering individual outer product entries of each layer. Due to the non-linearity of the hamming convolutions, computing the composite of two hamming convolutions is non-trivial, as elaborated in Section 3.3. In order to illustrate how to carry out this operation, let us first introduce the epitome of a hamming convolution as follows. Definition 3. An epitome consists of a set of N pairs E = (g n, s n), n = 1,..., N, where g n denotes the summation of GHD entries from some hamming convolutions, s n the number of summands or the cardinality of the subset S(n) defined above, and N is called the length of the epitome. A normalized epitome is an epitome with s n = 1 for all n = 1,..., N.
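Definitions 1-3 and the mechanism behind proposition 4 admit a short NumPy sketch. The names and the 1-based index bookkeeping below are our own reading of Definition 2; the identity in `merge_entries` follows from expanding g(a, b) = a + b − 2ab over all pairs, which is exactly why the individual summands never need to be recovered.

```python
import numpy as np

def ghd(a, b):
    return a + b - 2.0 * a * b

def hamming_conv(x, y):
    """Hamming convolution (Definition 2): sum pairwise GHDs x_k ⊕ y_l over
    the partition S(n) = {(k, l) : k + (L - l) = n}, n = 1..K+L-1. Returns an
    epitome as parallel arrays of (g_n, s_n) pairs (Definition 3)."""
    K, L = len(x), len(y)
    g = np.zeros(K + L - 1)
    s = np.zeros(K + L - 1, dtype=int)
    for k in range(1, K + 1):
        for l in range(1, L + 1):
            n = k + (L - l)                     # 1-based output index
            g[n - 1] += ghd(x[k - 1], y[l - 1])
            s[n - 1] += 1
    return g, s

def merge_entries(ga, sa, gb, sb):
    """Sum of all pairwise GHDs between two groups, from the group sums alone:
    sum_{k,l} u_k ⊕ v_l = sb*ga + sa*gb - 2*ga*gb, where ga is a sum of sa
    values and gb a sum of sb values. This realizes proposition 4's claim that
    stacked convolutions can be composed without unpacking the outer product."""
    return sb * ga + sa * gb - 2.0 * ga * gb, sa * sb

# Sanity check of the merge identity against brute force.
u, v = np.array([0.1, 0.7]), np.array([0.3, 0.2, 0.9])
brute = sum(ghd(a, b) for a in u for b in v)
fast, _ = merge_entries(u.sum(), len(u), v.sum(), len(v))
assert np.isclose(brute, fast)
```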
Any epitome can then be normalized by setting (g n /s n, 1) for all elements. A normalized epitome may also refer to input data x or neuron weights w that are not yet involved in any convolution operations; in that case, g n is simply the input data x or the neuron weights w. Remark: the summation of GHD entries g n is defined abstractly, and depending on different scenarios, the underlying outer product may operate on an arbitrary number of tuples DISPLAYFORM3 Fuzzy logic interpretation: in contrast to the traditional signal processing point of view, in which neuron weights w are treated as parameters of a linear transformation and bias terms b are appropriate thresholds for non-linear activations, the generalized hamming distance approach treats w as fuzzy templates and sets bias terms analytically according to the condition above. In this view, the normalization g n /s n is nothing but the mean GHD of entries in the subset S(n), which indicates a grade of fitness (or a fuzzy set) between templates w and inputs x at location n. This kind of arithmetic mean operator has been used for aggregating evidence in fuzzy sets and has empirically performed quite well in decision making environments (e.g. see BID37). Still in the light of signal processing, the generalized hamming distance naturally induces an information enhancement and suppression mechanism. Since the gradient of g(x, w) with respect to x is 1 − 2w, the information in x is either enhanced or suppressed according to w: a) the output g(x, w) is always x for w = 0 (conversely 1 − x for w = 1), with no information loss in x; b) for w = 0.5, the output g(x, w) is always 0.5 regardless of x, thus input information in x is completely suppressed; c) for w < 0.0 or w > 1.0, information in x is proportionally enhanced. It was indeed observed during the learning process in our experiments that a small fraction of prominent feature pixels in weights w gradually attain large positive or negative values, so that the corresponding input pixels play decisive roles in classification. On the other hand, the large majority of obscure pixels remain in the fuzzy regime near 0.5, and correspondingly, the input pixels have virtually no influence on the final decision (see experimental results in Section 4). This observation is also in accordance with the information compression interpretation advocated by BID30, and the connection indicates an interesting research direction for future work. This subsection only illustrates the main results concerning how to merge multiple hamming convolution operations in stacked layers into a single layer of epitomes, i.e. the deep epitome. Detailed proofs are given in Appendix A. Theorem 10. A generalized hamming network consisting of multiple convolution layers is equivalent to a bank of epitomes, called the deep epitome [D], which can be computed by iteratively applying the composite hamming convolution equation to individual layers of epitomes: DISPLAYFORM0 in which the number of channels of [D] equals C a, the number of channels in the first bank A; the number of epitomes equals M z, the number of epitomes in the last bank Z; and DISPLAYFORM1 is the length of the composite deep epitome. Note that for the hamming convolution to be a valid operation, the number of epitomes in the previous layer and the number of channels in the current layer must be the same, e.g. DISPLAYFORM2 Proof.
For given inputs represented as a bank of normalized epitomes DISPLAYFORM3 Ly Cx ] is obtained by recursively applying the composite convolution equation to outputs from the previous layers, and factoring out the input due to the associativity proved in proposition 9: DISPLAYFORM4 Remark: due to the non-linearity of the underlying hamming outer products, proving the associativity of the convolution of epitomes is by no means trivial (see proposition 9). In essence, we have to use proposition 4 to compute the convolution of two epitomes even though individual entries of the underlying hamming outer product are not directly accessible. Consequently, the updating rules outlined in the equations above play a crucial role in setting the bias terms analytically for generalized hamming networks (GHN), as opposed to the optimization approach often adopted by many non-GHN deep convolution networks. Fuzzy logic inferencing with deep epitomes: Eq. can be treated as a fuzzy logic inferencing rule, with which elements of input x are compared with respect to corresponding elements of deep epitomes d. More specifically, the negative of GHD quantifies the degree of equivalence between inputs x and epitome weights d, i.e. the fuzzy truth value of the assertion x ↔ d, where ↔ denotes a fuzzy logical biconditional. Therefore, output scores in y indicate the grade of fuzzy equivalence truth values between x and the shifted d at different spatial locations. This inferencing rule, in the same vein as BID8, is applicable to either a single layer of neuron weights or the composite deep epitomes, as proved above. Constructive manifestation of the universal approximation theorem: it was proved that a single hidden layer network with non-linear activation can well approximate any arbitrary decision function BID6; BID15, yet it was also argued by BID10 that such a single layer may be infeasibly large and may fail to learn and generalize correctly. Theorem 10 proves that such a simplified single hidden layer network can actually be constructed from a trained GHN. In this sense Theorem 10 illustrates a concrete solution which materializes the universal approximation theorem. We illustrate below deep epitomes extracted from three generalized hamming networks trained on MNIST and CIFAR10/100 classification, respectively. Detailed descriptions of the network architectures (number of layers, channels, etc.) are included in the appendix. Deep epitomes derived in the previous section allow one to build up and visualize hierarchical features in an on-line manner during the learning process. This approach is in contrast to many existing approaches, which often apply additional optimization or learning processes with various types of regularization, e.g. in BID7; BID33. Figures 5, 8 and 11, 12 in the appendices illustrate deep epitomes learnt by three generalized hamming networks for the MNIST and CIFAR10/100 image classification tasks. It was observed that geometrical structures of hierarchical features were formed at different layers rather early during the learning process (e.g. at 1000 out of 10000 iterations). Substantial follow-up efforts were invested in refining features for improved details. Scrutiny of the normalized epitome histograms in FIG2 showed that a majority of pixel values remain relatively small during the learning process, while a small fraction of epitome weights gradually accumulate large values over thousands of iterations to form prominent features.
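The one-step extraction claim can be illustrated with a small sketch: slide a normalized deep epitome over new data and score each location by the negative mean GHD, i.e. the fuzzy truth value of the match. This is a 1D toy version under our own naming, reusing the `ghd` helper defined earlier, not the authors' implementation.

```python
import numpy as np

def extract_features(epitome, x):
    """Score each spatial location of input x against a normalized deep
    epitome by the negative mean GHD; higher scores mean a better fuzzy
    match between the template and the local window."""
    d, x = np.asarray(epitome), np.asarray(x)
    N = len(d)
    return np.array([
        -np.mean(d + x[i:i + N] - 2.0 * d * x[i:i + N])  # -mean GHD
        for i in range(len(x) - N + 1)
    ])
```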
The observation of sparse features has been reported and interpreted in terms of sparse coding, e.g. BID26, or the information compression mechanism advocated by BID30. We adopt the notion of fuzziness (also reviewed in Section 3.1) to provide a fuzzy logic interpretation: prominent features correspond to neuron weights with low fuzziness. It was indeed observed in FIG4 that the fuzziness of deep epitomes in general decreases during the learning process, despite fluctuations at some layers. The inclination towards reduced fuzziness seems in accord with the minimization of classification errors, although the fuzziness is not explicitly minimized. Finally we re-iterate that the internal representation of deep epitomes is input-data independent. For instance in MNIST handwritten images, it is certain constellations of strokes, instead of digits, that are learnt at layer 3 (see Figure 5). The matching of arbitrary input data with such "fuzzy templates" is then quantified by the generalized hamming distance, and can be treated as generic fuzzy logic inference. It must be noted that the extraction of these hierarchical salient features is not entirely new and has been reported e.g. in BID7; BID20. Nevertheless, the equivalence of deep epitomes disclosed in Theorem 10 leads to a unique characteristic of GHNs: deep layer features do not necessarily rely on features extracted from previous layers; instead, they can be extracted in one step using deep epitomes at the desired layers. For extremely deep convolution networks, e.g. those with over 100 layers, this simplification may bring about a substantial reduction of computational and algorithmic complexities. This potential advantage is worth follow-up exploration in future research. We have proposed in this paper a novel network representation, called the deep epitome, which is proved to be equivalent to stacked convolution layers in generalized hamming networks (GHN). Theoretically this representation provides a constructive manifestation of the universal approximation theorem BID6; BID15, which states that a single layered network, in principle, is able to approximate any arbitrary decision function up to any desired accuracy. On the other hand, it is a dominant belief BID10, supported by abundant empirical evidence, that deep structures play an indispensable role in decomposing the combinatorial optimization problem into layer-wise manageable sub-problems. We concur with this view and supplement it with our demonstration that a trained deep GHN can be converted into a simplified network for the sake of high interpretability and reduced algorithmic and computational complexities. The success of our endeavours lies in the rigorous derivation of convolving epitomes across different layers in the equations above, which set the bias terms analytically without resorting to optimization-based approaches. Consequently, deep epitomes at all convolution layers can be computed without using any input data. Moreover, deep epitomes can be used to extract hierarchical features in just one step at any desired layers. In the light of fuzzy logic, the normalized epitome (Definition 3) encodes a grade of fitness between the learnt templates and given inputs at certain spatial locations. This fuzzy logic interpretation furnishes a refreshing perspective that, in our view, will open the black box of deep learning eventually.

APPENDIX A

Definition 1. For two given tuples DISPLAYFORM0.., y L }, the hamming outer product, denoted, is a set of corresponding elements x DISPLAYFORM1..
L, where ⊕ denotes the generalized hamming distance operator. Then the product has the following properties: DISPLAYFORM2 K but they are permutation equivalent, in the sense that there exist permutation matrices P and Q such that x DISPLAYFORM3 2. non-linear: in contrast to the standard outer product, which is bilinear in each of its entries, the hamming outer product is non-linear since in general x DISPLAYFORM4 where µ ∈ R is a scalar. Therefore, the hamming outer product defined as such is a pseudo outer product. DISPLAYFORM5 M because of the associativity of GHD. This property holds for an arbitrary number of tuples. iterated operation: the definition can be trivially extended to multiple tuples DISPLAYFORM0 Proof. associativity: by definition it suffices to prove element-wise that (x k ⊕ y l) ⊕ z m = x k ⊕ (y l ⊕ z m), which follows from the associativity of the generalized hamming distance. DISPLAYFORM1, then it suffices to prove non-linearity for each element, i.e. DISPLAYFORM2 Definition 2. The convolution of the hamming outer product, or hamming convolution, denoted *, of two tuples is a binary operation that sums up corresponding hamming outer product entries: DISPLAYFORM3 where the subsets S(n) := {(k, l) | k + (L − l) = n} for n = 1,..., K + L − 1, and the union of all subsets constitutes a partition of all indices n=1,...,K+L−1 DISPLAYFORM4 The hamming convolution has the following properties: DISPLAYFORM5 K since the partition subsets S(n) remain the same. 2. non-linear: this property is inherited from the non-linearity of the hamming outer product. 3. non-associative: DISPLAYFORM6 M since the summation of GHDs is non-associative. Note this is in contrast to the associativity of the hamming outer product. iterated operation: likewise, the definition can be extended to multiple tuples x DISPLAYFORM0 Proof. non-associativity: by definition it suffices to prove element-wise that in general DISPLAYFORM1 Definition 3. An epitome consists of a set of N pairs E = (g n, s n), n = 1,..., N, where g n denotes the summation of GHD entries from some hamming convolutions, s n the number of summands or the cardinality of the subset S(n) defined above, and N is called the length of the epitome. A normalized epitome is an epitome with s n = 1 for all n = 1,..., N. Any epitome can then be normalized by setting (g n /s n, 1) for all elements. A normalized epitome may also refer to input data x or neuron weights w that are not yet involved in any convolution operations; in that case, g n is simply the input data x or the neuron weights w. Proposition 4. Given two tuples x = {x k |k = 1 . . . K} and y = {y l |l = 1 . . . L}, then DISPLAYFORM2 Proof. DISPLAYFORM3 Remark: Eq. allows one to compute the summation of all hamming outer product elements on the right hand side, even though the individual elements x k and y l cannot be recovered from the given sums Σ k x k and Σ l y l. The definition below immediately follows and illustrates how to merge elements of two epitomes. DISPLAYFORM4 the convolution of two epitomes E c = E a * E b is given by: DISPLAYFORM5. Remark: this operation is applicable to the case when two epitomes are merged via spatial convolution (see Figure 2 for an example). Note that this merging operation is associative due to the following theorem. Theorem 6. The convolution of multiple epitomes, as defined in Definition 5, is associative: DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 Proof.
By Definition 5, elements of E a *
Remark: this associative property is of paramount importance for the derivation of deep epitomes, which factor out the inputs x from subsequent convolutions with neuron weights w. Definition 7. Given two epitomes of the same size E a = {(g n a, s n a)|n = 1,..., N } and E b = {(g n b, s n b)|n = 1,..., N }, the summation of two epitomes E c = E a E b is trivially defined by element-wise summation: E c = {(g n, s n)|n = 1,..., N }, where g n = g n a + g n b and s n = s n a + s n b. Remark: the summation operation is applicable to the case when epitomes are (iteratively) merged across different channels (see Figure 2 for an example). Note that the sizes of the two input epitomes must be the same, and the size of the output epitome remains unchanged. Moreover, the operation is trivially extended to multiple epitomes DISPLAYFORM0 The output of this operation, in turn, is a bank (see Figure 2 for an example). DISPLAYFORM1 Proposition 9. The composite convolution of multiple epitome banks, as given in Definition 8, is associative: DISPLAYFORM2 Proof. The associativity immediately follows from the associativity in Theorem 6 and Definition 7. Remark: this associative property, which is inherited from Theorem 6, can be trivially extended to multiple banks and leads to the main theorem of the paper as follows. Theorem 10. A generalized hamming network consisting of multiple convolution layers is equivalent to a bank of epitomes, called the deep epitome [D], which can be computed by iteratively applying the composite hamming convolution equation to individual layers of epitomes: DISPLAYFORM3 in which the number of channels of [D] equals C a, the number of channels in the first bank A; the number of epitomes equals M z, the number of epitomes in the last bank Z; and the length of the composite deep epitome equals L a + (L b − 1) +... + (L z − 1). Note that for the hamming convolution to be a valid operation, the number of epitomes in the previous layer and the number of channels in the current layer must be the same, e.g. DISPLAYFORM4 Proof.

APPENDIX B: DEEP EPITOMES WITH MNIST HANDWRITTEN RECOGNITION

Figure 5: Deep epitomes at layers 1, 2 and 3 for a GHN trained with MNIST classification, at iterations 100 and 10000 respectively. Pseudo-colour images correspond to three channels of feature outputs for input RGB colour channels.
bridge the gap in soft computing
1,444
scitldr
There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors, or are too time-consuming. We therefore propose MarginAttack, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Our experiments show that MarginAttack is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm. Adversarial attack refers to the task of finding small and imperceptible input transformations that cause a neural network classifier to misclassify. White-box attacks are a subset of attacks that have access to the gradient information of the target network. In this paper, we will focus on white-box attacks. An important class of input transformations is adding small perturbations to the input. There are two major paradigms of adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, tries to find perturbations that are most likely to cause misclassification, with the constraint that the norm of the perturbations cannot exceed a given level. Since the perturbation level is fixed, fix-perturbation attacks may fail to find any adversarial samples for inputs that are far away from the decision boundary. The second paradigm, called the zero-confidence attack, tries to find the smallest perturbations that are guaranteed to cause misclassification, regardless of how large the perturbations are. Since they aim to minimize the perturbation norm, zero-confidence attacks usually find adversarial samples that ride right on the decision boundaries, and hence the name "zero-confidence". The resulting perturbation norm is also known as the margin of an input feature to the decision boundary. Both of these paradigms are essentially constrained optimization problems. The former has a simple convex constraint (perturbation norm), but a non-convex target (classification loss or logit differences). In contrast, the latter has a non-convex constraint (classification loss or logit differences), but a simple convex target (perturbation norm). Despite their similarity as optimization problems, the two paradigms differ significantly in terms of difficulty. The fix-perturbation attack problem is easier. The state-of-the-art algorithms, including projected gradient descent (PGD) BID10 and the distributional adversarial attack, can achieve both high efficiency and high success rate, and often come with theoretical convergence guarantees. On the other hand, the zero-confidence attack problem is much more challenging. Existing methods are either not strong enough or too slow. For example, DeepFool BID11 and the fast gradient sign method (FGSM) BID3; BID7 linearize the constraint, and solve the simplified optimization problem with a simple convex target and a linear constraint.
However, due to the linearization approximation errors, the solution can be far from optimal. At the other extreme, L-BFGS BID18 and Carlini-Wagner (CW) BID1 convert the optimization problem into a Lagrangian, and the Lagrange multiplier is determined through grid search or binary search. These attacks are generally much stronger and theoretically grounded, but can be very slow. The necessity of developing a better zero-confidence attack is evident. The zero-confidence attack paradigm is a more realistic attack setting. More importantly, it aims to measure the margin of each individual token, which lends more insight into the data distribution and adversarial robustness. Motivated by this, we propose MARGINATTACK, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Specifically, MARGINATTACK iterates between two moves. The first move, called the restoration move, linearizes the constraint and solves the simplified optimization problem, just like DeepFool and FGSM; the second move, called the projection move, explores even smaller perturbations without changing the constraint values significantly. By construction, MARGINATTACK inherits the efficiency of DeepFool and FGSM, and improves over them in terms of accuracy with a convergence guarantee. Our experiments show that MARGINATTACK is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than CW, and in some cases comparably to DeepFool and FGSM. In addition to the aforementioned state-of-the-art attacks, there are a couple of other works that attempt to explore the margin. The Jacobian-based saliency map attack BID13 is among the earliest works that apply gradient information to guide the crafting of adversarial examples. It chooses to perturb the input features whose gradient is consistent with the adversarial goal. The one-pixel attack BID17 finds adversarial examples by perturbing only one pixel, which can be regarded as finding the ℓ0 margin of the inputs. BID5 converts PGD into a zero-confidence attack by searching over different perturbation levels, but this again can be time-consuming because it needs to solve multiple optimization subproblems. Weng et al. proposed a metric called CLEVER, which estimates an upper bound on the margins. Unfortunately, recent work BID2 has shown that CLEVER can overestimate the margins due to gradient masking BID14. The above are just a small subset of white-box attack algorithms that are relevant to our work. For an overview of the field, we refer readers to BID0. The MARGINATTACK framework is inspired by Rosen's algorithm BID15 for constrained optimization problems. However, there are several important distinctions. First, Rosen's algorithm rests on some unrealistic assumptions for neural networks, e.g. continuously differentiable constraints, while MARGINATTACK has a convergence guarantee under a more realistic set of assumptions. Second, Rosen's algorithm requires a step size search for each iteration, which can be time-consuming, whereas MARGINATTACK will work with a simple diminishing step size scheme. Most importantly, as will be shown later, MARGINATTACK refers to a large class of attack algorithms depending on how the two parameters, a (k) and b (k), are set, and Rosen's algorithm only fits into one of the settings, which only works well under the ℓ2 norm.
For other norms, there exist other parameter settings that are much more effective. As another highlight, the convergence guarantee of MARGINATTACK holds for all the settings that satisfy some moderate assumptions. In this section, we will formally introduce the algorithm and discuss its convergence properties. In this paper, we will denote scalars with non-bolded letters, e.g. a or A; column vectors with lower-cased, bolded letters, e.g. a; matrices with upper-cased, bolded letters, e.g. A; sets with upper-cased double-stroke letters, e.g. A; and the gradient of a function f (x) evaluated at x = x 0 as ∇f (x 0). Given a classifier whose output logits are denoted as l 0 (x), l 1 (x), · · ·, l C−1 (x), where C is the total number of classes, for any data token (x 0, t), where x 0 is an n-dimensional input feature vector and t ∈ {0, · · ·, C − 1} is its label, MARGINATTACK computes the margin min x d(x − x 0) s.t. c(x) ≤ 0, where d(·) is a norm. In this paper we only consider the ℓ2 and ℓ∞ norms, but the proposed method is generalizable to other norms. For non-targeted adversarial attacks, the constraint is defined as c(x) = l t (x) − max i≠t l i (x) − ε, where ε is the offset parameter. As a common practice, ε is often set to a small negative number to ensure that the adversarial sample lies on the incorrect side of the decision boundary. In this paper, we will only consider non-targeted attacks, but all the discussions are applicable to targeted attacks (i.e. c(x) = max i≠a l i (x) − l a (x) − ε for a target class a). MARGINATTACK alternately performs the restoration move and the projection move. Specifically, denote the solution after the k-th iteration as x (k). Then the two steps are:

Restoration Move: The restoration move tries to hop to the constraint boundary, i.e. c(x) = 0, with the shortest hop. Formally, it solves: DISPLAYFORM0 where α (k) is the step size. Notice that the left hand side of the constraint in Eq. is the first-order Taylor approximation of c(z DISPLAYFORM1, so this constraint tries to move the point closer to c(x) = 0 by a factor of α (k). It can be shown from dual-norm theory that the solution is DISPLAYFORM2. Specifically, noticing that the dual norm of the ℓp norm is the ℓ(1 − p⁻¹)⁻¹ norm, we have DISPLAYFORM3 As mentioned, Eq. is similar to DeepFool under the ℓ2 norm, and to FGSM under the ℓ∞ norm. Therefore, we can expect that the restoration move should effectively hop towards the decision boundary, but the hop direction may not be optimal. That is why we need the next move.

Projection Move: The projection move tries to move closer to x 0 while ensuring that c(x) does not change drastically. Formally, DISPLAYFORM4 where β (k) is the step size; a (k) and b (k) are two scalars, which will be specified later. As an intuitive explanation of Eq., notice that the second term, which we will call the distance reduction term, reduces the distance to x 0, whereas the third term, which we will call the constraint reduction term, reduces the constraint (because s(z (k) ) and ∇c(z (k) ) have a positive inner product). Therefore, the projection move essentially strikes a balance between reduction in distance and reduction in constraint. a (k) and b (k) admit two designs. The first design is to ensure the constraint values are roughly the same after the move, i.e. c(z DISPLAYFORM5 whose solution is DISPLAYFORM6 Another design is to ensure the perturbation norm is reduced roughly by DISPLAYFORM7. By Taylor approximation, we have DISPLAYFORM8 whose solution is DISPLAYFORM9
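Since the closed-form step expressions are elided in this copy, the following NumPy sketch reconstructs the restoration move from the dual-norm argument described above; it reproduces the DeepFool-like ℓ2 step and the FGSM-like ℓ∞ step, but it should be read as our reconstruction under those stated assumptions, not the paper's verbatim formulas.

```python
import numpy as np

def restoration_move(x, c_val, grad_c, alpha, norm="l2"):
    """One restoration move: the shortest hop (in the chosen norm) whose
    linearized effect shrinks the constraint by the factor alpha, so that
    c(z) is approximately (1 - alpha) * c(x)."""
    if norm == "l2":                    # s(x) = grad / ||grad||_2
        s = grad_c / np.linalg.norm(grad_c)
        step = alpha * c_val / np.linalg.norm(grad_c)
    else:                               # "linf": dual norm of l_inf is l_1
        s = np.sign(grad_c)
        step = alpha * c_val / np.abs(grad_c).sum()
    return x - step * s
```

A quick check of the linearization: for ℓ2, the update changes c by roughly −step · ∇cᵀs = −α · c(x), so the constraint value contracts toward zero as intended.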
It should be noted that Eqs. FORMULA8 and FORMULA0 are just two specific choices for a (k) and b (k). It turns out that MARGINATTACK will work with a convergence guarantee for a wide range of bounded a (k)'s and b (k)'s that satisfy some conditions, as will be shown in Section 3.4. Therefore, MARGINATTACK provides a general and flexible framework for zero-confidence adversarial attack designs. In practice, we find that Eq. FORMULA8 works better for the ℓ2 norm, and Eq. FORMULA0 works better for the ℓ∞ norm. FIG1 illustrates a typical convergence path of MARGINATTACK using the ℓ2 norm and Eq. FORMULA8 as an example. The red dots on the right denote the original input x 0 and its closest point on the decision boundary, x *. Suppose after iteration k, MARGINATTACK reaches x (k), denoted by the green dot on the left. The restoration move travels directly towards the decision boundary by finding the normal direction to the current constraint contour. Then, the projection move travels along the tangent plane of the current constraint contour to reduce the distance to x 0 while preventing the constraint value from deviating much. As intuitively expected, the iteration should eventually approach x *. FIG2 plots an empirical convergence curve of the perturbation norm and constraint value of MARGINATTACK-ℓ2 on a randomly chosen CIFAR image. Each move from a triangle to a circle dot is a restoration move, and from a circle to a triangle a projection move. The red line is the smoothed version. As can be seen, a restoration move reduces the constraint value while slightly increasing the perturbation norm, and a projection move reduces the perturbation norm while slightly affecting the constraint value. Both curves eventually converge. The constraint function c(x) in Eq. FORMULA1 is non-convex, thus the convergence analysis for MARGINATTACK is limited to the vicinity of a unique local optimum, as stated in the following theorem. Theorem 1. Denote x * as a local optimum of Eq.. Assume ∇c(x *) exists. Define projection matrices DISPLAYFORM0 Consider the neighborhood B = {x : DISPLAYFORM1 2 ≤ X, |c(x)| ≤ C} that satisfies the following assumptions: 1. (Differentiability) ∀x ∈ B, ∇c(x) exists, but can be discontinuous, i.e. all the discontinuity points of the gradient in B are jump discontinuities; DISPLAYFORM2 6. (Unique Optimality) x * is the only global optimum within B; DISPLAYFORM3 Then we have the convergence guarantee lim k→∞ DISPLAYFORM4 The proof will be presented in the appendix. Here are a few remarks. First, assumption 1 allows jump discontinuities in ∇c(x) almost everywhere, which is a very practical assumption for deep neural networks. Most neural network operations, such as ReLU and max-pooling, as well as the max operation in Eq., introduce nothing beyond jump discontinuities in the gradient. Second, assumption 3 does require the constraint gradient to be lower bounded, which may lead to concerns that MARGINATTACK may fail in the presence of gradient masking BID14. However, notice that the gradient boundedness assumption is only imposed in B, which is in the vicinity of the decision boundary, whereas gradient masking is most likely to appear away from the decision boundary, where the input features are populated. Besides, as will be discussed later, a random initialization as in PGD will be adopted to bypass regions with gradient masking.
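To complement the restoration sketch, here is an ℓ2 projection move matching the tangent-plane picture of FIG1: step along the component of (z − x0) orthogonal to ∇c, so the distance to x0 shrinks while c is roughly preserved to first order. The exact a (k), b (k) solutions (Eqs. FORMULA8/FORMULA0) are elided in this copy, so this is a hedged reconstruction of the first design rather than the authors' code.

```python
import numpy as np

def projection_move_l2(x0, z, grad_c, beta):
    """One l2 projection move (first design): move against the component of
    (z - x0) that is tangent to the constraint contour, shrinking the distance
    to x0 while leaving c(x) roughly unchanged to first order."""
    g = grad_c / np.linalg.norm(grad_c)
    d = z - x0
    tangent = d - np.dot(g, d) * g   # remove the component along grad c
    return z - beta * tangent
```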
Experiments on adversarially trained models also verify the robustness of MARGINATTACK. Finally, assumption 5 essentially stipulates that c(x) is convex or "not too concave" in B (and thus so is the constraint set c(x) ≤ 0), so that the first-order optimality condition readily implies a local minimum instead of a local maximum. In fact, it can be shown that assumption 5 is implied if c(x) is convex in B. There are a few additional implementation details, as outlined below.

Box Constraint: In many applications, each dimension of the input features should be bounded, i.e. x ∈ [x min, x max] n. To impose the box constraint, the restoration move problem as in Eq. FORMULA2 is modified as DISPLAYFORM5 whose solution is DISPLAYFORM6 Proj(·) is an operator that projects the vector in its argument onto the subset in its subscript. I is the set of indices at which the elements ofz (k) satisfy the box constraint, and I C is its complement. I is determined by running Eq. iteratively and updating I after each iteration. Unlike other attack algorithms that simply project the solution onto the constraint box, MARGINATTACK incorporates the box constraint in a principled way, such that any locally optimal solution x * will be an invariant point of the restoration move. Thus the convergence is faster.

Target Scan: According to Eq., each restoration move essentially approaches the adversarial class with the highest logit, but the class with the highest logit may not be the closest. To mitigate this problem, we follow an approach similar to one adopted in DeepFool, which we call target scan. Target scan performs a target-specific restoration move towards each class, and chooses the move with the shortest distance. Formally, target scan introduces a set of target-specific constraints {c i (x) = l t (x) − l i (x) − ε}. A restoration move with target scan solves DISPLAYFORM7 where z (k,i) is the solution to Eqs. FORMULA2 or FORMULA0 with c(x (k) ) replaced by c i (x (k) ), and thus is equal to Eqs. or with c(DISPLAYFORM8 A is a set of candidate adversarial classes, which can be all the incorrect classes if the number of classes is small, or a subset of the adversarial classes with the highest logits otherwise. Experiments show that target scan is necessary only in the first few restoration moves, when the closest and highest adversarial classes are likely to be distinct. Therefore, the computation cost will not increase much.

Initialization: The initialization of x can be either deterministic or random, as follows: DISPLAYFORM9 where U{[−u, u] n } denotes the uniform distribution on [−u, u] n. Similar to PGD, we can perform multiple trials with random initialization to find a better local optimum.

Final Tuning: MARGINATTACK can only cause misclassification when c(x) ≤ ε. To make sure the attack is successful, the final iterations of MARGINATTACK consist of restoration moves only, DISPLAYFORM10 and no projection moves, until a misclassification is caused. This also ensures that the final solution satisfies the box constraint (because only the restoration move incorporates the box constraint).

Summary: Alg. 1 summarizes the MARGINATTACK procedure. As for complexity, each restoration or projection move requires only one backward propagation, and thus the computational complexity of each move is comparable to one iteration of most attack algorithms.
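Putting the pieces together, here is a skeleton of the alternating procedure using the two move sketches above (Alg. 1 itself is not reproduced in this copy). For brevity the box constraint is enforced here by plain clipping, which is cruder than the paper's principled Proj / index-set treatment; `c_and_grad` is a hypothetical callback returning the constraint value and its gradient.

```python
import numpy as np

def margin_attack(x0, c_and_grad, steps=200, final=20, alpha=1.0, beta=0.1,
                  x_min=0.0, x_max=1.0, u=0.0):
    """Alternate restoration/projection moves, then restoration-only final
    tuning, starting from a (possibly random) initialization as in the text."""
    x = np.clip(x0 + np.random.uniform(-u, u, size=x0.shape), x_min, x_max)
    for _ in range(steps):
        c_val, grad = c_and_grad(x)                  # constraint and gradient
        z = restoration_move(x, c_val, grad, alpha)
        c_val, grad = c_and_grad(z)
        x = np.clip(projection_move_l2(x0, z, grad, beta), x_min, x_max)
    for _ in range(final):                           # final tuning
        c_val, grad = c_and_grad(x)
        if c_val <= 0:                               # inside the adversarial region
            break
        x = np.clip(restoration_move(x, c_val, grad, alpha), x_min, x_max)
    return x
```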
This section compares MARGINATTACK with several state-of-the-art adversarial attack algorithms in terms of perturbation norm and computation time on image classification benchmarks. Three regularly trained models are evaluated:

• MNIST: The classifier is a stack of two 5 × 5 convolutional layers with 32 and 64 filters respectively, followed by two fully-connected layers with 1,024 hidden units.
• CIFAR10 BID6: The classifier is a pre-trained ResNet32 BID4 provided by TensorFlow.
• ImageNet BID16: The classifier is a pre-trained ResNet50 BID4 provided by TensorFlow Keras. Evaluation is on a validation subset containing 10,000 images.

The range of each pixel is for MNIST, and for CIFAR10 and ImageNet. The settings of MARGINATTACK and the baselines are listed below. Unless stated otherwise, the baseline algorithms are implemented by cleverhans BID14. The hyperparameters are set to defaults if not specifically stated.

• CW BID1: The target and evaluation norm is ℓ2. The learning rate is set to 0.05 for MNIST, 0.001 for CIFAR10 and 0.01 for ImageNet, which are tuned to its best performance. The number of binary steps for the multiplier search is 10.
• DeepFool: The evaluation norm is ℓ2.
• FGSM BID3: FGSM is implemented by the authors. The step size is searched to achieve a zero-confidence attack. The evaluation distance metric is ℓ∞.
• PGD BID10: The target and evaluation norm is ℓ∞. The learning rate is set to 0.01 for MNIST, and 0.05 for CIFAR10 and 0.1 for ImageNet.
• MARGINATTACK: Two versions of MARGINATTACK are implemented, whose target and evaluation norms are ℓ2 and ℓ∞, respectively. The hyperparameters are detailed in TAB4 in the appendix. The first 10 restoration moves are with target scan, and the last 20 moves are all restoration moves.

The number of iterations/moves is set to 2,000 for CW, 200 with 10 random starts for PGD and MARGINATTACK (except for ImageNet, where there is only one random run), and 200 for the rest. Except for PGD, all the attacks are zero-confidence attacks. For these attacks, we plot the CDF of the margins of the validation data, which can also be interpreted as the percentage success rate of these attacks as a function of perturbation level. FIG3 plots the success rate curves, where the upper panel shows the ℓ2 attacks, and the lower one shows the ℓ∞ attacks. As can be observed, the MARGINATTACK curves are above all other algorithms at all perturbation levels and on all datasets. CW is very close to MARGINATTACK on MNIST and CIFAR10, but MARGINATTACK maintains a 3% advantage on MNIST and 1% on CIFAR10. It seems that CW is unable to converge well within 2,000 iterations on ImageNet, although the learning rate has been tuned to maximize its performance. MARGINATTACK, on the other hand, converges more efficiently and consistently. To obtain a success rate curve for PGD, we have to run the attack again and again for many different perturbation levels, which can be time-consuming for large datasets (this shows an advantage of zero-confidence attacks over fix-perturbation attacks). Instead, we choose four perturbation levels for each attack scenario to compare. The perturbation levels are chosen to roughly follow the 0.2, 0.4, 0.6 and 0.8 quantiles of the MARGINATTACK margins. TAB1 compares the success rates under the chosen quantiles among the ℓ∞ attacks. We can see that MARGINATTACK outperforms PGD under all the perturbation levels, and that both significantly dominate FGSM.
We also evaluate MARGINATTACK on the MNIST Adversarial Examples Challenge, which is a challenge of attacking an MNIST model adversarially trained using PGD with a 0.3 perturbation level. Same as the listed PGD baseline, MARGINATTACK is run with 50 random starts and the initialization perturbation range u = 0.3. The number of moves is 500. The target norm is ℓ∞. b (k) = 5 and a (k) is set as in Eq.. The rest of the configuration is the same as in the previous experiments. Table 2 lists the success rates of different attacks under the 0.3 perturbation level. The baseline algorithms are all fix-perturbation attacks, and their results are excerpted from the challenge white-box attack leaderboard. As can be seen, MARGINATTACK, as the only zero-confidence attack algorithm, has the second best result, which shows that it performs competitively against the state-of-the-art fix-perturbation attacks. We would like to revisit the convergence plot of the constraint value c(x) and perturbation norm d(x) in FIG2. We can see that MARGINATTACK converges very quickly. In the example shown in the figure, it is able to converge within 20 moves. Therefore, MARGINATTACK can be greatly accelerated. If margin accuracy is the priority, a large number of moves, e.g. 200 as in our experiments, would help. However, if efficiency is the priority, a small number of moves, e.g. 30, suffices to produce a decent attack. To further assess the efficiency of MARGINATTACK, Tab. 3 compares the running time (in seconds) of attacking one batch of images, implemented on a single NVIDIA TESLA P100 GPU. The batch size is 200 for MNIST and CIFAR10, and 100 for ImageNet. The settings are the same as stated in Section 4.1, except that, for a better comparison, the number of iterations of CW is cut down to 200, and PGD and MARGINATTACK run one random pass, so that all the algorithms have the same number of iterations/moves. Only the ℓ2 version of MARGINATTACK is shown because the other versions have similar run times. As shown, the running time of MARGINATTACK is much shorter than that of CW, and is comparable to DeepFool and PGD. CW is significantly slower than the other algorithms because it has to run multiple trials to search for the best Lagrange multiplier. Note that DeepFool and CW enable early stopping, but MARGINATTACK does not. Considering MARGINATTACK's fast convergence rate, its running time can be further reduced by early stopping. We have proposed MARGINATTACK, a novel zero-confidence adversarial attack algorithm that is better able to find a smaller perturbation that results in misclassification. Both theoretical and empirical analyses have demonstrated that MARGINATTACK is an efficient, reliable and accurate adversarial attack algorithm, and it establishes a new state-of-the-art among zero-confidence attacks. What is more, MARGINATTACK still has room for improvement. So far, only two settings of a (k) and b (k) have been developed, but MARGINATTACK will work for many other settings, as long as assumption 5 is satisfied. The authors hereby encourage exploring novel and better settings for the MARGINATTACK framework, and promote MARGINATTACK as a new robustness evaluation measure or baseline in the field of adversarial attack and defense.

APPENDIX

This supplementary material aims to prove Thm. 1. Without loss of generality, K in Eq. is set to 0. Before we prove the theorem, we need to introduce some lemmas. Lemma 1.1. If assumption 3 in Thm. 1 holds, then ∀x ∈ B DISPLAYFORM0 Proof. According to Eq., for the ℓ2 norm, DISPLAYFORM1 and for the ℓ∞ norm, DISPLAYFORM2 Lemma 1.2. Given all the assumptions in Thm.
1, where DISPLAYFORM3 and assuming DISPLAYFORM4 where DISPLAYFORM5 A and B are defined in Eq.. According to assumption 8, this implies DISPLAYFORM6 at the rate of at least 1/n ν. Proof. As a digression, the second term in Eq. FORMULA0 is well defined, because DISPLAYFORM7 is upper bounded by Lem. 1.1 and assumption 3. Back to proving the lemma, we will show that each restoration move brings c(x (k) ) closer to 0, while each projection move does not change c(x (k) ) much. First, for the restoration move, DISPLAYFORM8 The first line follows from the generalization of the Mean-Value Theorem to functions with jump discontinuities, where ξ = tz (k) + (1 − t)x (k) and t is a real number in [0, 1]. The second line is from Eq.. The last line is from assumptions 4 and 7 and Eq.. Next, for the projection move, DISPLAYFORM9 The first line is from the fact that assumption 3 implies that c(x) is M-Lipschitz continuous. DISPLAYFORM10 for some M d and M s. To see this, for the ℓ2 norm, DISPLAYFORM11 where b is defined as the maximum perturbation norm within B, i.e. DISPLAYFORM12 which is well defined because B is a tight set. For the ℓ∞ norm, DISPLAYFORM13 Note that Eq. also holds for other norms. With Eq. and assumption 8, Eq. FORMULA1 becomes DISPLAYFORM14 Combining Eqs. FORMULA1 and FORMULA2, we have DISPLAYFORM15 where DISPLAYFORM16 According to assumption 7, 0 < A < 1. Also, according to Eq. FORMULA1, DISPLAYFORM17 and thus DISPLAYFORM18 If DISPLAYFORM19 Otherwise, Eq. implies DISPLAYFORM20 This concludes the proof. Lemma 1.3. Given all the assumptions in Thm. 1, and assuming DISPLAYFORM21 Proof. First, for the restoration move, DISPLAYFORM22 δm 2 Line 4 is given by Eq.. Line 5 is derived from Lem. 1.1. The last line is from Lem. 1.2. δm 2 where DISPLAYFORM0 It can easily be shown that DISPLAYFORM1 Therefore DISPLAYFORM2 Combining Eqs. FORMULA2 and FORMULA1, we have DISPLAYFORM3

Step Case: Assume Eq. holds ∀k ≤ K; then Eqs. and hold ∀k ≤ K.

• Proving |c(x (K+1) )| ≤ C: DISPLAYFORM4 where the last inequality is given by Eq.. DISPLAYFORM5 Notice that from Eq., 0 DISPLAYFORM6 where the last inequality is given by Eq.. DISPLAYFORM7 Lemma 1.5. Under the assumptions in Thm. 1, DISPLAYFORM8 Proof. From Thm. 2, a solution, denoted as x, to min DISPLAYFORM9 would satisfy DISPLAYFORM10 If P [x * − x 0] = 0, there are two possibilities. The first possibility is that x * is not a solution to Eq., which contradicts the first-order optimality condition that x * must satisfy. The second possibility is that there are multiple solutions to the problem in Eq. FORMULA1, and x and x * are both solutions. This can happen if d(·) is the ℓ1 or ℓ∞ norm. By definition, DISPLAYFORM11 * is a local minimum of Eq., so ∃j ∈ I, ε < 1, ∀δ < ε, DISPLAYFORM12 Otherwise, if c(x δ) ≤ 0, then x δ is a feasible solution to the problem in Eq. FORMULA0 and DISPLAYFORM13 which contradicts the assumption that x * is a unique local optimum in B. DISPLAYFORM14 takes discrete values. Therefore, to satisfy assumption 2, s j (x δ) = s j (x *), which implies DISPLAYFORM15 The first inequality is because DISPLAYFORM16 Eqs. FORMULA6 and FORMULA6 lead to a contradiction. Now we are ready to prove Thm. 1. Proof of Thm. 1. From Lems. 1.2, 1.3 and 1.4, we have established that Eqs. FORMULA0 and FORMULA2 hold under all the assumptions in Thm. 1. The only thing we need to prove is that Eqs. FORMULA0 and FORMULA2 necessarily imply lim k→∞ x (k) − x 0 2 = 0. First, from Lem. 1.5, DISPLAYFORM17 Then, ∀x ∈ B s.t. P [x − x 0] 2 2 = 0, we have x − x 0 = λs(x *).
From assumption 4, we know that c(x) is monotonic along the line x − x 0 = λs(x *). Therefore, x * is the only point in B that satisfies P [x − x 0] 2 2 = 0 and c(x) = 0. Also, notice that P [x − x 0] and c(x) are both continuous mappings. This concludes the proof. Notice that the product of λ and ∇ T c(x)y is constant, so if λ is to be minimized, then ∇ T c(x)y needs to be maximized. Namely, y can be determined by solving, which is the definition of the dual norm. Therefore y = s(x). Plugging Eq. into the constraint in Eq., we can solve for λ. This concludes the proof. As a remark, Thm. 2 is applicable to the optimization problems in Eqs. FORMULA2 and FORMULA1. Proof. Since c(x) is convex in B, we have ∀x DISPLAYFORM18 Further, assume x satisfies ∇ T c(x *)(x − x *) = 0. Then we have P T P (x − x *) = P (x − x *) = x − x *, where the first equality is from the fact that P is an orthogonal projection matrix under the ℓ2 norm, and the second equality is from the fact that the projection subspace of P is orthogonal to ∇c(x *) by construction. Also, from Lem. 1.5, we have DISPLAYFORM19 Plugging Eqs. FORMULA7 to FORMULA7 into FORMULA6, we have DISPLAYFORM20 On the other hand, let γ = m a /2 x * − x 0 2; then ∀x satisfying Eq. and DISPLAYFORM21 where the second line comes from Eq. and the fact that x * is the optimal solution to the problem in Eq.. Combining Eqs. FORMULA8 and FORMULA0, we know that assumption 5 holds with strict inequality for x satisfying Eq. and x ≠ x *. ∇ T c(x)P T P (x − x *), ∇ T d(x − x 0)P T P (x − x 0) and γ(x − x 0) T P T P (x − x 0) are continuous functions, and therefore ∃B where assumption 5 also holds. This concludes the proof.
This paper introduces MarginAttack, a stronger and faster zero-confidence adversarial attack.
1,445
scitldr
Given the variety of the visual world, there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field. Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly. The degree of variation and diversity of inputs makes this a difficult task. Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data. We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity. We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input. Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter. The world is infinite in its variations, but our models are finite. While inputs differ in many dimensions and degrees, a deep network is only so deep and wide. To nevertheless cope with variation, there are two main strategies: static enumeration and dynamic adaptation. Static enumeration defines a set of variations, processes them all, and combines the results. For example, pyramids enumerate scales and group-structured filters enumerate orientations. Dynamic adaptation selects a single variation, conditioned on the input, and transforms processing accordingly. For example, scale-space search selects a scale transformation from input statistics, and end-to-end dynamic networks select geometric transformations, parameter transformations, and feature transformations directly from the input. Enumeration and adaptation both help, but are limited by computation and supervision, because the sets enumerated and ranges selected are bounded by model size and training data. Deep networks for vision exploit enumeration and adaptation, but generalization is still limited. Networks are enumerative, convolving with a set of filters to cover different variations and then summing across them to pool the variants. For scale variation, image pyramids and feature pyramids enumerate scales, process each, and combine the outputs. However, static models have only so many filters and scales, and may lack the capacity or supervision for the full data distribution. Dynamic models instead adapt to each input. The landmark scale invariant feature transform extracts a representation adapted to scales and orientations predicted from input statistics. Dynamic networks, including spatial transformers and deformable convolution, make these predictions and transformations end-to-end. Predictive dynamic inference is however insufficient: the predictor may be imperfect in its architecture or parameters, or may not generalize to data it was not designed or optimized for. Bottom-up prediction, with only one step of adaptation, can struggle to counter variations in scale and other factors that are too large or unfamiliar. To further address the kinds and degrees of variations, including extreme out-of-distribution shifts, we devise a complementary third strategy: unsupervised optimization during inference. We define an unsupervised objective and a constrained set of variables for effective gradient optimization.
Our novel inference objective minimizes the entropy of the model output to optimize for confidence. The variables optimized over are task parameters for pixel-wise classification and structure parameters for receptive field adaptation, which are updated together to compensate for scale shifts. (Figure 1 caption: Accuracy is high and prediction entropy is low for training and testing at the same scale (left). Accuracy drops and entropy rises when tested at 3x the training scale, even when the network is equipped with dynamic receptive fields to adapt to scale variation (middle). Previous approaches are limited to one-step, feedforward scale prediction, and are unable to handle a 3x shift. In contrast our iterative gradient optimization approach is able to adapt further (right), and achieves higher accuracy by minimizing entropy with respect to task and scale parameters.) This optimization functions as top-down feedback to iteratively adjust feedforward inference. In effect, we update the trained model parameters to tune a custom model for each test input. Optimization during inference extends dynamic adaptation past the present limits of supervision and computation. Unsupervised optimization boosts generalization beyond training by top-down tuning during testing. Iterative updates decouple the amount of computation, and thus degree of adaptation, from the network architecture. Our main result is to demonstrate that adaptation by entropy optimization improves accuracy and generalization beyond adaptation by prediction (see Figure 1), which we show for semantic segmentation by inference-time optimization of a dynamic Gaussian receptive field model on the PASCAL VOC dataset. Our approach extends dynamic scale inference from one-step prediction to multi-step iteration through optimization. For optimization during inference, we require an objective to optimize and variables to optimize over. Lacking task or scale supervision during inference, the objective must be unsupervised. For variables, there are many choices among parameters and features. Our main contribution is an unsupervised approach for adapting task and structure parameters via gradient optimization to minimize prediction entropy. Note that our inference optimization is distinct from the training optimization. We do not alter training in any way: the task loss, optimizer, and model are entirely unchanged. In the following, optimization refers to our inference optimization scheme, and not the usual training optimization. To optimize inference, a base dynamic inference method is needed. For scale, we choose local receptive field adaptation, because scale varies locally even within a single image. In particular, we adopt dynamic Gaussian receptive fields that combine Gaussian scale-space structure with standard "free-form" filters for parameter-efficient spatial adaptation. These methods rely on feedforward regression to infer receptive fields that we further optimize. Figure 2 illustrates the approach. Optimization is initialized by feedforward dynamic inference of Gaussian receptive fields. At each following step, the model prediction and its entropy are computed, and the objective is taken as the sum of pixel-wise entropies. Model parameters are iteratively updated by the gradient of the objective, resulting in updated predictions and entropy. Optimization of the parameters for the Gaussian receptive fields is instrumental for adapting to scale. (Figure 2 caption: the model (top) is optimized according to the output (bottom) at test time.)
(Figure 2 caption, continued: We optimize receptive field scales and filter parameters to minimize the output entropy (middle). Optimizing during inference makes iterative updates shown from left to right: receptive field scale adapts, entropy is reduced, and accuracy is improved.) This gives a modest refinement for training and testing at the same scale, and generalization improves for testing at different scales. Unsupervised inference objectives can be bottom-up, based on the input, or top-down, based on the output. To augment already bottom-up prediction, we choose the top-down objective of entropy minimization. In essence, the objective is to reduce model uncertainty. More precisely, for the pixel-wise output Ŷ ∈ R^{C×H×W} for C classes and an image of height H and width W, we measure uncertainty by the Shannon entropy H_{i,j} = −Σ_{c=1}^{C} P(Ŷ_{i,j} = c) log P(Ŷ_{i,j} = c) for each pixel at index i, j, yielding pixel-wise entropy of the same spatial dimensions as the output. Entropy is theoretically motivated and empirically supported. By inspection, we see that networks tend to be confident on in-distribution data from the training regime. (Studying the probabilistic calibration of networks confirms this.) In our case, this holds for testing scales similar to the training scales, with high entropy on segment contours. On out-of-distribution data, such as scale shifts, the output entropy is higher and less structured. This objective is severe, in that its optimum demands perfect certainty (that is, zero entropy). As a more stable alternative, we consider adaptively thresholding the objective by the average entropy across output pixels. We calculate the mean entropy at each iteration, and only take the gradient of pixels with above-average entropy. This mildly improves accuracy. Our final objective is then L = Σ_{(i,j) ∈ S} H_{i,j}, where S is the set of pixels with entropy above the average H_µ. At each step, we re-calculate the average entropy and re-select the set of violating pixels. In this way, optimization is focused on updating predictions where the model is the most uncertain. We need to pick the variables to optimize over so that there are enough degrees of freedom to adapt, but not so many that overfitting sets in. Furthermore, computation time and memory demand a minimal set of variables for efficiency. Choosing parameters in the deepest layers of the network satisfies these needs: capacity is constrained by keeping most of the model fixed, and computation is reduced by only updating a few layers. The alternative of choosing all the parameters, and optimizing end-to-end during inference, is ineffective and inefficient: inference is slower and less accurate than feedforward prediction. We select the task parameters θ_score of the output classification filter, for mapping from features to classes, and the structure parameters θ_scale of the scale regression filter, for mapping from features to receptive field scales. Optimizing over these parameters indirectly optimizes over the local predictions for classification scores Ŷ and scales Σ̂. Why indirectly optimize the outputs and scales via these parameters, instead of direct optimization? First, dimensionality is reduced for regularization and efficiency: the parameters are shared across the local predictions for the input image and have fixed dimension. Additionally, this preserves dependence on the data: optimizing directly over the classification predictions admits degenerate solutions that are independent of the input. Initialization: The unaltered forward pass of the base network gives scores Ŷ and scales Σ̂.
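As a concrete illustration of the objective just described, here is a minimal PyTorch sketch of the thresholded pixel-wise entropy loss; the function name and the NCHW tensor layout are our own assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def thresholded_entropy_loss(logits):
    """Sum of pixel-wise Shannon entropies, restricted to pixels whose
    entropy exceeds the current spatial average.
    Assumes logits of shape (N, C, H, W); names are illustrative."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)   # (N, H, W) pixel-wise entropy
    mask = entropy > entropy.mean()             # keep only above-average pixels
    return entropy[mask].sum()
```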
Iteration: For each step t, the loss is the sum of thresholded entropies of the pixel-wise predictions Ŷ^(t). The gradient of the loss is taken with respect to the parameters θ_scale. Given the new parameters, a partial forward pass re-infers the local scales and predictions for Ŷ^(t+1) and Σ̂^(t+1). This efficient computation is a small fraction of the initialization forward pass. Termination: The number of iterations is set and fixed to control the amount of inference computation. We do so for simplicity, but note that in principle convergence rules such as relative tolerance could be used with the loss, output, or parameter changes each iteration for further adaptivity. Figure 3 shows the progress of our inference optimization across iterations. We experiment with extending from predictive to iterative dynamic inference for semantic segmentation, because this task has a high degree of appearance and scale variation. In particular, we show results for iterative optimization of classifier and scale parameters in a dynamic Gaussian receptive field model on the PASCAL VOC dataset. By adapting both task and structure parameters, our approach improves accuracy on in-distribution inputs and generalizes better on out-of-distribution scale shifts. We ablate which variables to optimize and for how many steps, and analyze our choices by oracle and adversary results. These experiments establish the efficacy of entropy minimization during inference for scale adaptation, while oracle results show opportunity for further progress. Data and Metric: PASCAL VOC is a well-established semantic segmentation benchmark with 20 semantic classes and a background class. The original dataset only has 1,464, 1,449 and 1,456 images with segmentation annotations for training, validation, and testing, respectively. As is standard practice, we include the 9,118 additional images and annotations, giving 10,582 training samples in total. We measure accuracy by the usual metric of mean intersection-over-union (IoU). We report our results on the validation set. Architecture: We choose deep layer aggregation (DLA) as a strong, representative fully convolutional network architecture. DLA exploits the feature pyramid inside the network via iterative and hierarchical aggregation across layers. We will release code and the reference models implemented in PyTorch. Training: We train our model on the original scale of the dataset. We optimize via stochastic gradient descent (SGD) with batch size 64, initial learning rate 0.01, momentum 0.9, and weight decay 0.0001 for 500 epochs. We use the "poly" learning rate schedule with power 0.9. For the model with no data augmentation ("w/o aug"), the input images are padded to 512 × 512. As for the "w/ aug" model, data augmentation includes cropping to 512 × 512, scaling in [0.5, 2], rotation in [−10°, 10°], color distortion, and horizontal flipping. Testing: We test our model on different scales of the dataset in the [1.5, 4.0] range. We optimize the model parameters for adaptation via Adam, batching all image pixels together, and setting the learning rate to 0.001. The model is optimized episodically to each input, and the parameters are reset between inputs. No data augmentation is used during inference to isolate the role of dynamic inference by the model. We compare the semantic segmentation accuracy of our optimization with a prediction baseline and optimization by oracle and adversary. The baseline is a one-step dynamic model using feedforward scale regression to adapt receptive fields, following the dynamic Gaussian receptive field approach.
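The iteration-and-reset procedure above can be sketched as follows, reusing the entropy loss from the previous listing. `score_filter` and `scale_filter` are hypothetical attribute names for the output classification and scale regression layers, and a real implementation would re-run only a partial forward pass each step rather than the full model.

```python
import torch

def adapt_to_input(model, image, steps=32, lr=1e-3):
    """Episodic inference-time adaptation: update only the score and scale
    filters by the gradient of the entropy objective, then restore them.
    Names are illustrative assumptions, not the authors' released code."""
    params = list(model.score_filter.parameters()) + \
             list(model.scale_filter.parameters())
    saved = [p.detach().clone() for p in params]    # for resetting afterwards
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        loss = thresholded_entropy_loss(model(image))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        prediction = model(image).argmax(dim=1)
    for p, s in zip(params, saved):                 # reset between inputs
        p.data.copy_(s)
    return prediction
```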
We train on a narrow range of scales and test on a broader range of scales to measure refinement, the improvement for the training scales, and generalization, the improvement for the new scales. This baseline is the initialization for our iterative optimization approach: the output and scale predictions for the initial iteration are inferred by the one-step model. For analysis, the oracle and adversary optimize during inference to respectively minimize/maximize the cross-entropy loss between the output and the truth. As reported in Table 1, our method consistently improves on the baseline by ∼2 points for all scales, which indicates that our unsupervised optimization for iterative inference helps the model generalize better across scales. When the scale shift is larger, there is likewise a larger gap. To evaluate the effect of data augmentation, we experiment with ("w/ aug") and without ("w/o aug"). Data augmentation significantly improves generalization across scales. Note that our optimization during inference still improves the model with data augmentation by the same amount. Results are scored by intersection-over-union (higher is better). "w/o aug" excludes data augmentation, where "w/ aug" includes scaling, rotation, and other augmentation. Even though data augmentation reduces the effect of scale variation, our method further improves accuracy for all scales. (Table 2 caption: Ablation of the number of iterations: entropy minimization saturates after 32 steps.) We ablate the choice of parameters to optimize and the number of updates to make. We optimize during inference to adapt the task parameters (score) of the classifier and structure parameters (scale) of the scale regressor. The task parameters map between the visual features and the classification outputs. Updates to the task parameters are the most direct way to alter the pixel-wise output distributions. Updates to the structure parameters address scale differences by adjusting receptive fields past the limits of the feedforward scale regressor. From the experiments in Table 3, both are helpful for refining accuracy and reducing the generalization gap between different scales. Optimizing end-to-end, over all parameters, fails to achieve better than baseline results. Iterative optimization gives a simple control over the amount of computation: the number of updates. This is a trade-off, because enough updates are needed for adaptation, but too many require excessive computation. Table 2 shows that 32 steps are enough for improvement without too much computation. Therefore, we set the number of steps to 32 for all experiments in this paper. For our network, one step of inference optimization takes ∼1/10 the time of a full forward pass. We analyze the distribution of scales in Figure 4 and show qualitative segmentation results in Figure 5. While better compensating for scale shift is our main goal, our method also refines inference on in-distribution data. The results in Table 3 for 1× training and testing show improvement of ∼1 point. We analyze our approach from an adversarial perspective by maximizing the entropy instead of minimizing. To measure the importance of a parameter, we consider how much accuracy degrades when adversarially optimizing it. The more performance degrades, the more it matters. Dynamic Inference: Dynamic inference adapts the model to each input. Many approaches, designed and learned, rely on bottom-up prediction from the input.
Our method extends bottom-up prediction with top-down optimization to iteratively update the model from the output. Recurrent approaches to iterative inference require changing the architecture and training more parameters. Our optimization updates parameters without architectural alteration. Entropy Objective: We minimize entropy during testing, not training, in effect tuning a different model to each input. The entropy objectives in existing work are optimized during training, especially for regularization. Entropy is maximized/minimized for domain adaptation and semi-supervised learning. In reinforcement learning, maximum entropy regularization improves policy optimization. We optimize entropy locally for each input during testing, while existing use cases optimize globally for a dataset during training. We optimize an unsupervised objective on output statistics to update model parameters for each test input. Energy minimization models and probabilistic graphical models learn model parameters during training then optimize over outputs during inference. The parameters of deep energy models and graphical models are fixed during testing, while our model is further optimized on the test distribution. Alternative schemes for learning during testing, like transduction and meta-learning, differ in their requirements. Transductive learning optimizes jointly over the training and testing sets, which can be impractical at deep learning scale. We optimize over each test input independently, hence scalably, without sustained need for the (potentially massive) training set. Meta-learning by gradients updates model parameters during inference, but requires supervision during testing and more costly optimization during meta-training. Dynamic inference by optimization iteratively adapts the model to each input. Our results show that optimization to minimize entropy with respect to score and scale parameters extends adaptivity for semantic segmentation beyond feedforward dynamic inference. Generalization improves when the training and testing scales differ substantially, and modest refinement is achieved even when the training and testing scales are the same. While we focus on entropy minimization and scale inference, further schemes for dynamic inference by optimization are potentially possible through the choice of objective and variables. (Figure 5 caption: shown is the out-of-distribution prediction for our iterative optimization method. Our method corrects noisy, over-segmented fragments and false negatives in true segments.)
Unsupervised optimization during inference gives top-down feedback to iteratively adjust feedforward prediction of scale variation for more equivariant recognition.
1,446
scitldr
The Deep Image Prior is a fascinating recent approach for recovering images which appear natural, yet is not fully understood. This work aims at shedding some further light on this approach by investigating the properties of the early outputs of the DIP. First, we show that these early iterations demonstrate invariance to adversarial perturbations by classifying progressive DIP outputs and using a novel saliency map approach. Next we explore using DIP as a defence against adversaries, showing good potential. Finally, we examine the adversarial invariancy of the early DIP outputs, and hypothesize that these outputs may remove non-robust image features. By comparing classification confidence values we show some evidence confirming this hypothesis. Ulyanov et al. surprisingly showed that just the structure of a convolutional neural network is capable of capturing a good portion of images' statistics. They demonstrated that starting with an untrained network, and then training to guide the output towards a specific target image for image restoration tasks such as denoising, super-resolution, and in-painting achieved performance which is comparable to state-of-the-art approaches. Their approach, termed the Deep Image Prior (DIP), shows that the architecture of a network can act as a powerful prior. The same network has been found to have excellent performance as a natural image prior. The ability to detect natural images has gained great significance in recent years, especially with the increasing security concerns raised over natural-looking images that are not correctly classified, called adversarial examples. These adversarial examples can be thought of as incremental, non-natural perturbations. As such, using the Deep Image Prior as a recovery method can indicate its ability to work as a natural denoiser, a hypothesis that will initially be tested. Furthermore, we use the Deep Image Prior to develop an adversarial defence, thereby investigating its potential. Then, we investigate the early iterations of the network by producing saliency maps of the Deep Image Prior outputs (DIP outputs). Saliency maps show the pixels which are most salient (relevant) in reaching a target classification. We hope to show that the salient pixels gather to display clearer, more distinct, and robust features of the images. Ilyas et al. showed that adversarial examples are a result of the existence of non-robust features in the images, which are highly predictive, yet incomprehensible to humans. The successful performance of the Deep Image Prior in recovering the original classes from adversarial examples raises the argument that the Deep Image Prior produces images that have 'dropped' their non-robust features and are left with the robust image features. To test this theory we directly use the dataset from Ilyas et al. consisting of robust and non-robust image features, and pass these through the Deep Image Prior. By comparing the DIP outputs of the robust and non-robust image datasets, we hope to see evidence towards the ability of the Deep Image Prior to select robust images. As a prerequisite we first show the ability of the Deep Image Prior to recover from adversarial perturbations. For this investigation, three methods for generating adversarial examples will be considered: the Fast Gradient-Sign Method (FGSM), the Basic Iterative method (BI), and the Least-Likely Class Iterative method (LLCI).
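For reference, FGSM admits a compact implementation; this is a generic sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient-Sign Method: one signed-gradient step that increases
    the classification loss, clipped back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

The Basic Iterative method applies this step repeatedly with a small step size, and the least-likely class variant instead minimizes the loss of the least probable class.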
Adversarial examples were generated for 20 images using the three methods for various adversarial strengths. The DIP output was collected and classified every 100 iterations and the values of the true class confidence were obtained. The classifier used was ResNet18. The results are shown in Figure 1. More accurate classifiers were also briefly considered, such as Inception-v3, but no qualitative differences were found. It is apparent that the DIP outputs in the earlier iterations allow the classifier to recover the true class, as evident from the peaks in the confidence values. Then, the network output converges back to the adversary, observed through the decrease in confidence values. The number of iterations at which peak confidence occurs appears to be different among adversaries and also among adversarial strengths. Nevertheless the theme is similar; exhibiting peaks at the earlier DIP iterations, showing the ability of the network to recover from adversarial perturbations. We have shown that the DIP can be an interesting and effective tool for recovering from adversarial perturbations. However, details about the iterative process and the transformation of the input into the various DIP outputs are still unknown. To test the nature of these outputs we introduce a novel saliency map approach, termed MIG-SG. This method is a variation of the integrated gradients approach while using SmoothGrad. More information about this method can be found in the Appendix. Figure 2 shows, on a step-by-step basis, how the salient features of the image change for progressive DIP outputs. While the image is very blurry after 300-400 iterations, the saliency map already shows that all the key features of the knife have already been identified. This is confirmed by the confidence of the DIP output, which increased to > 0.5 after just 200 iterations. On the contrary, observing the outputs at 2000 and 4000 iterations shows that salient features have become more focused on the blade of the knife. Previously, the salient features focused on the handle and the butt of the blade as observed from the bottom row of images in Figure 2. Furthermore, it is no longer clear what the salient features represent, a fact also illustrated in the decreasing value of the true class confidence. Overall, the saliency maps "lost" various salient features as the DIP output was converging towards the adversarial example. Overall, with the clearest saliency maps observed at low iteration numbers, we observe evidence that the early DIP outputs are invariant towards adversarial effects. To mount a defence using the Deep Image Prior we aim to transform the input of the classifier to a state where the adversarial perturbations are not detectable. The classifier used for this investigation was ResNet18. By using a simple classifier to make this defence, we are able to evaluate the potential of the Deep Image Prior to recover the original class from adversarial perturbations individually. Our results are compared against a reference defence that uses randomisation to counter adversarial perturbations, and which also uses a similar evaluation method. Understandably, using the Deep Image Prior decreases the accuracy of the network on clean images. From Table 1, there is a noticeable decrease in top-1 accuracy, especially when using fewer DIP iterations. As the number of iterations is increased, the top-1 accuracy increases with it, at a loss of computational speed.
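The recovery experiment can be sketched as below: fit a freshly initialized DIP network to the adversarial image and record the classifier's true-class confidence at fixed intervals. The 32-channel noise code and the names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dip_confidence_trace(dip_net, classifier, x_adv, true_class,
                         iters=4000, every=100, lr=0.01):
    """Early DIP outputs are expected to recover the true class before the
    network converges to the adversarial target."""
    z = torch.randn(1, 32, x_adv.shape[-2], x_adv.shape[-1])  # fixed input code
    opt = torch.optim.Adam(dip_net.parameters(), lr=lr)
    trace = []
    for it in range(1, iters + 1):
        out = dip_net(z)
        loss = F.mse_loss(out, x_adv)        # reconstruct the (adversarial) target
        opt.zero_grad()
        loss.backward()
        opt.step()
        if it % every == 0:
            with torch.no_grad():
                conf = F.softmax(classifier(out), dim=1)[0, true_class]
            trace.append((it, conf.item()))
    return trace
```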
Since the computational costs are already very high, the defence was not tested for larger iteration numbers, as that would make it slow and impractical. The results of using the Deep Image Prior on adversarial examples are shown in Table 2 and display a very competitive performance with the reference defence method, having a higher accuracy across all three adversaries used in that comparison. The average accuracy is highest after 1000 iterations; however, this is not best for all the adversaries, as observed from the FGSM adversary with ε = 10. Overall, we see a decreased ability to correctly classify images, combined with an increased ability to defend against adversaries. This result is similar to the one described by Ilyas et al. for a classifier trained on the robust image dataset, highlighting the ability of the Deep Image Prior to select these robust features. 5 Using the robust image dataset: For this test, we used the pre-generated robust and non-robust image datasets from Ilyas et al. The architecture used for the Deep Image Prior had to be altered since CIFAR-10 images were used. Details can be found in the Appendix. The outputs were evaluated through the classification confidence of the original class of the image. Both plots in Figure 3 show the difference between the classification of robust and non-robust datasets by an external classifier, where robust images hold more information about the true class compared to the non-robust images. Regarding the response at the earlier iteration numbers, it is very subtle, yet we see some evidence to support our hypothesis. The non-robust image datasets show a trough before converging to their final classification confidence, while the robust image datasets show a peak in confidence, indicating that the convergence of the network towards the robust images was faster than the convergence on the non-robust ones. 6 Discussion on architecture and results: Ulyanov et al. showed that the DIP achieved remarkable results, apparently due to the prior encoded in the network architecture. To test this, we evaluated the sensitivity of DIP to changes in network architecture. Surprisingly, we found that while some sensitivity exists, it is not high, with various architecture changes showing little to no changes in performance. However, some changes showed great influence on the performance of the network. In particular, the network was found to fail when no skip connections were used, or when a very shallow network was used. Nevertheless, no evidence was found that can describe this sensitivity as a "resonance", as stated in the original paper. See Appendix for details. We observed the network's ability to successfully recover from adversarial perturbations, caused by the resistance of the early DIP outputs to adversarial perturbations. This was further observed from looking at appropriate saliency maps, where we introduced a new method. Consequently, the network was found to create a promising adversarial defence. Lastly, we provided evidence for the ability of the Deep Image Prior to select robust image features over non-robust features in its early iterations, as defined by Ilyas et al. As the name suggests, this method performs integration between a baseline and our image, numerically, by calculating the forward derivative of the network. SmoothGrad calculates this forward derivative by performing the differentiation on a noisy version of the input, and averaging the derivative over multiple samples. As a result, combining the two methods appears to yield significantly improved saliency maps.
Since we are performing integration, solely taking the absolute value of the result of the grad function failed to produce good results. However, a small modification was made to the algorithm in an attempt to stop the method from failing. By also taking the absolute value of the final result, the method produced very promising results. Using the absolute values of the gradients for coloured images enables negative gradients to also contribute to the salient features of the image. Mathematically, our saliency method can be expressed as MIG-SG(x) = |(x − x0) ⊙ (1/(mN)) Σ_{n=1}^{N} Σ_{k=1}^{m} |∇_x F_t(x0 + (k/m)(x − x0) + ε_n)||, with noise samples ε_n, where x is the input image, and x0 is the baseline image and is only used for comparative purposes. Additionally, m is the number of integration steps and N is the number of samples used in the computation of SmoothGrad. Lastly, F_t(x) returns the classification of class t before the cost function is calculated. Common saliency maps have been generated for a panda image, shown in Figure 4. The MIG-SG saliency map is observed in Figure 4e and while it can definitely appear as a scary image, it provides very interesting information about the panda. This saliency map, instead of picking up all the panda in the image, has instead focused on its characteristic features, the eyes and the mouth. This makes it a very useful tool to visually interpret images. For the Deep Image Prior, the original architecture was used. The number of iterations was left at a low value, as the results of this work suggested that the DIP output is less sensitive to adversarial perturbations at earlier iterations. Three iteration numbers were tested: 500, 750 and 1000. The tests were conducted on a dataset of images from 200 randomly selected classes from the ImageNet database. From this dataset, 500 images correctly classified using the ResNet18 classifier were then chosen to test the performance of our defence. The diagram of the defence is shown in Figure 5. Two architectures were considered: the first is the one used in the original paper of the Deep Image Prior but with the encoder depth changed from 5 to 3, to allow for the decreased dimensionality of the CIFAR-10 images. The second architecture uses only 16 feature maps in each layer, compared with the original number which was 128, while the encoder depth was also kept at 3. Architecture-1 and architecture-2 can be found in Tables 3 and 4 respectively.
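Under the reconstruction of the formula above (inferred from the stated ingredients: an integration path from x0 to x, N SmoothGrad samples, and absolute values of both the per-sample gradients and the final result), a sketch might read as follows; the signature and constants are assumptions, not the authors' code.

```python
import torch

def mig_sg(model, x, x0, target, m=32, n_samples=16, sigma=0.1):
    """Sketch of MIG-SG: integrated gradients along the path from x0 to x,
    SmoothGrad noise averaging, and absolute values at both stages."""
    grads = torch.zeros_like(x)
    for k in range(1, m + 1):
        x_step = x0 + (k / m) * (x - x0)     # point on the integration path
        for _ in range(n_samples):
            x_noisy = (x_step + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(x_noisy)[0, target]           # F_t before the cost
            g, = torch.autograd.grad(score, x_noisy)
            grads += g.abs()                 # absolute value of each gradient
    return ((x - x0) * grads / (m * n_samples)).abs()   # |final result|
```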
We investigate properties of the recently introduced Deep Image Prior (Ulyanov et al., 2017).
1,447
scitldr
In this paper, we address the challenge of limited labeled data and the class imbalance problem for machine learning-based rumor detection on social media. We present an offline data augmentation method based on semantic relatedness for rumor detection. To this end, unlabeled social media data is exploited to augment limited labeled data. A context-aware neural language model and a large credibility-focused Twitter corpus are employed to learn effective representations of rumor tweets for semantic relatedness measurement. A language model fine-tuned with a large domain-specific corpus shows a dramatic improvement on training data augmentation for rumor detection over pretrained language models. We conduct experiments on six different real-world events based on five publicly available data sets and one augmented data set. Our experiments show that the proposed method allows us to generate a larger training data set with reasonable quality via weak supervision. We present preliminary results achieved using a state-of-the-art neural network model with augmented data for rumor detection. Automated rumor and fake news detection BID5 BID11 BID19 BID25 BID22 and fact-checking BID2 BID21 BID10 are research areas that have recently received much attention in Machine Learning (ML) and Natural Language Processing. One major bottleneck of state-of-the-art (SoA) ML methods is that they require a vast amount of labeled data to be trained, and manual labeling of rumor sources on social media requires special skills and is time-consuming BID26. Due to limited labeled training data, existing neural networks (NNs) for rumor detection usually have shallow architectures BID3 BID13. The scarcity of labeled data is a major challenge of studying rumors on social media BID0. Another problem is that publicly available data sets for rumor-related tasks such as PHEME data BID10 suffer from imbalanced class distributions. Existing methods for handling the class imbalance problem (e.g., oversampling and the use of synthetic data BID24) may cause over-fitting and poor generalization performance. A methodology for rumor data augmentation with the minimum of human supervision is necessary. Previous studies showed that rumors can evolve into many variants which share similar propagation patterns in their early stage BID14 BID3 BID1 BID4. Based on this hypothesis, we argue that enriching existing labeled data with unlabeled source tweets conveying the same or similar meanings is a promising attempt for rumor detection methods that rely on the structure of rumor propagation in social media. In this work, we propose a novel data augmentation method for automatic rumor detection based on semantic relatedness. We exploit a publicly available paraphrase identification corpus as well as context-sensitive embeddings of labeled references and unlabeled candidate source tweets. Pairwise similarity is used to guide the assignment of pseudo-labels to unlabeled tweets. ELMo BID18, a SoA context-sensitive neural language model (NLM), is fine-tuned on a large credibility-focused social media corpus and used to encode tweets. Our results show that data augmentation can contribute to rumor detection with deep learning through increased training data size and a reasonable level of quality. This has potential for further performance improvements using deeper NNs. We present data augmentation results for three events and the performance of a SoA DNN model for rumor detection with augmented data in Section 5.
Four publicly available data sets covering a wide range of real-world events on social media as well as a Twitter paraphrase corpus are used for this project. SemEval-2015 task 1 data BID23: This data is built for paraphrase identification and semantic similarity measurement. This data set is employed in our semantic relatedness method in order to fine-tune optimum relatedness thresholds through pairwise comparisons between the tweet embeddings of labeled references and unlabeled candidates (see details in Section 4). PHEME data set BID8: The latest PHEME data is used as a reference set for data augmentation covering 9 manually labeled rumor events. CrisisLexT26 BID16: This data comprises tweets associated with 26 hazardous events that happened between 2012 and 2013. The "2013 Boston bombings" data from this data set is used as a reference set in this experiment. Twitter event data BID25: This data consists of over 147 million tweets associated with 30 real-world events unfolded between February 2012 and May 2016, among which six events are selected as a pool of candidate source tweets. This covers 'Ferguson unrest', 'Sydney siege', 'Ottawa shooting', 'Charliehebdo attacks', 'Germanwings plane crash', and 'Boston marathon bombings'. We refer to the first five events with reference sets generated from PHEME data as 'PHEME5'. For the 'Boston bombings' event, we generate references from CrisisLexT26 and the fact-checking website 'Snopes.com' (refer to Section 3). CREDBANK BID15: This large corpus comprises more than 60M tweets grouped into 1049 events, each of which was manually annotated with credibility ratings. This is leveraged to fine-tune ELMo in order to provide better representations for rumor-related tasks (see Section 3). An overview of our data augmentation method is presented in Figure 1. We exploit a limited amount of labeled data as weak supervision (i.e., references). References are generated separately for PHEME5 and "Boston bombings" data from different data with varying annotation schemes (as mentioned in Section 2). Due to space constraints, we omit the detailed process of reference generation here. Candidate tweets refer to any tweets that report an event of interest. The leftmost box shows how semantic similarity is computed between a given pair of reference and candidate tweets. Firstly, the contextual embedding model (ELMo) is fine-tuned with a domain-specific corpus to learn representations of rumors. Given corpora that contain pairs of tweets, we apply language-based filtering and perform linguistic preprocessing. The preprocessing includes lowercasing, removing 'rt @', URLs, and non-alphabetic characters, and tokenization. Tweets with at least 4 tokens are considered, to reduce noise BID6. Then, we compute ELMo embeddings of tweets for subsequent semantic relatedness measurement. Cosine similarity between each embedding pair is used as a relatedness measure. We use the SemEval-2015 task 1 data set as a benchmark for relatedness threshold fine-tuning (see Section 4). Having computed optimum thresholds, semantic similarity computation is performed for reference-candidate pairs. Rumor and non-rumor source tweets are selected from the candidate pool using the fine-tuned thresholds. In the final step, data collection is performed to retrieve social-temporal context data (typically retweets and replies) for the selected source tweets. Source tweets without contexts are filtered out. We download source tweets for the six selected events in the Twitter event data and CREDBANK using the Twitter API.
For the CREDBANK, we downloaded 77,954,446 tweets (i.e., 97.1% of the original data). After deduplication, the training corpus contains 6,157,180 tweets with 146,340,647 tokens and 2,235,075 vocabulary entries. Rumor-specific Embedding (ELMo): Previous research shows that fine-tuning NLMs with in-domain data allows them to learn more meaningful word representations and provides a performance gain BID7 BID18. To fine-tune a pretrained ELMo, we generate a data set using the CREDBANK. Sentences are shuffled and split into training and hold-out sets (with a ratio of around 0.02%). We also generate a test set that consists of 6,162 tweets in total using the PHEME data. TAB1 shows the number of tweets, tokens and vocabularies in the CREDBANK after deduplication. Following the practice in BID18, a linear combination of the states of each LSTM layer and the token embeddings is adopted to encode tweets. The training corpus is split into small batches with a maximum of 5000 tweets for each batch. Training time took more than 800 hours on a NVIDIA Kepler K40M GPU with less than 10 GiB GPU memory. Since the CREDBANK training set is still relatively small for NLMs, we only fine-tune the pretrained ELMo for 1 epoch to avoid over-fitting. The result shows a large improvement in perplexity on both hold-out and test sets (see TAB2). Semantic Relatedness Fine-Tuning: TAB3 compares different models for word representation on the SemEval-2015 data. We show the results based on the maximum F-score each model achieves. It shows the effectiveness of our CREDBANK fine-tuned ELMo over a pretrained ELMo ("Original (5.5B)") and other SoA word embedding models. To ensure higher quality, we argue that a higher precision is required. Therefore, relatedness thresholds are fine-tuned based on the precision achieved by the best-performing model. We highlight some statistics in Table 4 (Table 4 caption: Results of fine-tuning thresholds based on precision). Data Augmentation: We follow our data augmentation procedure described in Section 3. After performing pairwise similarity computation, a relatedness threshold (0.8) is adopted to select rumor source tweets from a pool of candidates. We randomly sample 3*(# of rumor source tweets) non-rumor source tweets from candidates whose score is less than 0.3. Sampling more negative examples is an attempt to balance class distributions after source tweets without contexts are removed. This is based on the hypothesis that non-rumors are less likely to have reactions than rumors, as they generally draw less attention. In terms of computational performance, ELMo embedding computation and semantic relatedness measurement are performed on CPU. Tweet encoding takes around 10 tweets per second and pairwise comparison takes around 869 pairs per second on average. Data Augmentation Results: Before filtering out source tweets without replies, 1,238 rumors and 3,714 non-rumors were collected for "bostonbombings". After filtering, 165 rumors and 228 non-rumors remain. Although the augmented data size is very limited for "bostonbombings", experiments on "sydneysiege" and "ottawashooting" show encouraging results. A total of 25,486 rumors and 76,106 non-rumors are additionally obtained for "sydneysiege", and 21,519 rumors and 62,590 non-rumors are additionally obtained for "ottawashooting". We make our augmented data publicly available. Rumor Detection: We conduct rumor detection experiments using two different data sets: PHEME5, and PHEME5 with the "bostonbombings" data ("PHEME5+Boston").
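The selection step can be sketched as follows, using the fine-tuned thresholds above (0.8 for rumors, 0.3 for the non-rumor pool). Here `embed` stands in for the fine-tuned ELMo sentence encoder; the function name and structure are our own sketch, not the authors' pipeline.

```python
import numpy as np

def pseudo_label(candidates, references, embed, rumor_thr=0.8, nonrumor_thr=0.3):
    """Threshold-based weak labeling: compare each candidate source tweet to
    all references by cosine similarity of their embeddings."""
    ref = np.stack([embed(t) for t in references])
    ref /= np.linalg.norm(ref, axis=1, keepdims=True)
    rumors, nonrumor_pool = [], []
    for tweet in candidates:
        v = embed(tweet)
        sims = ref @ (v / np.linalg.norm(v))   # cosine similarities
        if sims.max() >= rumor_thr:
            rumors.append(tweet)               # pseudo-labeled rumor source
        elif sims.max() < nonrumor_thr:
            nonrumor_pool.append(tweet)        # later sampled at a 3:1 ratio
    return rumors, nonrumor_pool
```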
We employ BID10's method as a SoA baseline model for rumor detection, with slight modifications. For the sake of simplicity, we modify the implementation of "MTL2 Veracity+Detection" for rumor detection only. We construct input by using a source tweet and the top (i.e., most recent) 24 replies in this task. We perform leave-one-out cross-validation (LOOCV) on the PHEME5 and augmented data sets. The overall experimental results for rumor detection are presented in TAB4. TAB5 shows LOOCV results. We observe that overall performance decreases with the augmented data (i.e., PHEME5+Boston). The "fergusonunrest" is the most difficult event for a rumor detection model, as it has a unique class distribution distinguished from all other events BID10. It is worth noting that our data augmentation improves the performance of rumor detection on the "fergusonunrest". The completion of data augmentation for events other than "bostonbombings" has the potential to boost overall and per-event performance of rumor detection. We present a methodology of data augmentation for rumor detection that exploits semantic relatedness between limited labeled data and unlabeled data. This study is part of further research that aims to use a massive amount of publicly available unlabeled Twitter data and the potential of DNNs in a wide range of tasks related to rumors on social media. Our current research has demonstrated the potential efficiency and effectiveness of semantically augmented data in combating the labeled data scarcity and class imbalance problems of publicly available rumor data sets. In future work, we plan to augment data for more events to build comprehensive data sets for rumor detection, and conduct experiments on rumor detection via deep learning. We will evaluate the effectiveness of augmented data in alleviating over-fitting and its usefulness in facilitating deeper NNs for rumor detection. Further experiments will be conducted to examine the generalization of rumor detection models on unseen rumors.
We propose a methodology of augmenting publicly available data for rumor studies based on semantic relatedness between limited labeled and unlabeled data.
1,448
scitldr
Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU. There has been much recent progress in sequence modeling through recurrent neural networks (RNN; BID54), convolutional networks (CNN; BID28 BID14 BID7) and self-attention models BID40 BID58. RNNs integrate context information by updating a hidden state at every time-step, CNNs summarize a fixed size context through multiple layers, while self-attention directly summarizes all context. Attention assigns context elements attention weights which define a weighted sum over context representations BID52 BID8 BID36. Source-target attention summarizes information from another sequence such as in machine translation, while self-attention operates over the current sequence. Self-attention has been formulated as content-based, where attention weights are computed by comparing the current time-step to all elements in the context (FIG0). The ability to compute comparisons over such unrestricted context sizes is seen as a key characteristic of self-attention BID58. However, the ability of self-attention to model long-range dependencies has recently come into question BID57 and the unlimited context size is computationally very challenging due to the quadratic complexity in the input length. Furthermore, in practice long sequences require the introduction of hierarchies. In this paper, we introduce lightweight convolutions which are depth-wise separable BID51 BID7, softmax-normalized and share weights over the channel dimension. The result is a convolution with several orders of magnitude fewer weights than a standard non-separable convolution. Different to self-attention, lightweight convolutions reuse the same weights for context elements, regardless of the current time-step. Dynamic convolutions build on lightweight convolutions by predicting a different convolution kernel at every time-step. The kernel is a function of the current time-step only, as opposed to the entire context as in self-attention (FIG0). Dynamic convolutions are similar to locally connected layers in the sense that the weights change at every position; however, the difference is that weights are dynamically generated by the model rather than fixed after training BID30 BID56 BID6. Our approach also bears similarity to location-based attention which does not access the context to determine attention weights; however, we do not directly take the attention weights from the previous time-step into account BID8 BID36. BID49 reduce complexity by performing attention within blocks of the input sequence and BID48 BID50 perform more fine-grained attention over each feature.
BID47 and BID17 use input-dependent filters for text classification tasks. Our experiments show that lightweight convolutions perform competitively to strong self-attention results and that dynamic convolutions can perform even better. On WMT English-German translation dynamic convolutions achieve a new state of the art of 29.7 BLEU, on WMT English-French they match the best results reported in the literature, and on IWSLT German-English dynamic convolutions outperform self-attention by 0.8 BLEU. Dynamic convolutions achieve 20% faster runtime than a highly-optimized self-attention baseline. For language modeling on the Billion word benchmark dynamic convolutions perform as well as or better than self-attention, and on CNN-DailyMail abstractive document summarization we outperform a strong self-attention model. We first outline sequence to sequence learning and self-attention. Our work builds on non-separable convolutions as well as depthwise separable convolutions. Sequence to sequence learning maps a source sequence to a target sequence via two separate networks, such as in machine translation BID54. The encoder network computes representations for the source sequence such as an English sentence and the decoder network autoregressively generates a target sequence based on the encoder output. The self-attention module of BID58 applies three projections to the input X ∈ R^{n×d} to obtain key (K), query (Q), and value (V) representations, where n is the number of time steps and d the input/output dimension (FIG1). It also defines a number of heads H where each head can learn separate attention weights over d_k features and attend to different positions. The module computes dot-products between key/query pairs, scales the results to stabilize training, and then softmax normalizes them. Finally, it computes a weighted sum using the output of the value projection (V): Attention(Q, K, V) = softmax(QK^T / √d_k) V. Depthwise convolutions perform a convolution independently over every channel. The number of parameters can be reduced from d^2 k to d k, where k is the kernel width. The output O ∈ R^{n×d} of a depthwise convolution with weight W ∈ R^{d×k} for element i and output dimension c is defined as: O_{i,c} = DepthwiseConv(X, W_{c,:}, i, c) = Σ_{j=1}^{k} W_{c,j} · X_{(i + j − ⌈(k+1)/2⌉), c}. In this section, we introduce LightConv, a depthwise convolution which shares certain output channels and whose weights are normalized across the temporal dimension using a softmax. Compared to self-attention, LightConv has a fixed context window and it determines the importance of context elements with a set of weights that do not change over time steps. We will show that models equipped with lightweight convolutions show better generalization compared to regular convolutions and that they can be competitive to state-of-the-art self-attention models (§6). This is surprising because the common belief is that content-based self-attention mechanisms are necessary to obtaining state-of-the-art results in natural language processing applications. Furthermore, the low computational profile of LightConv enables us to formulate efficient dynamic convolutions (§4). LightConv computes the following for the i-th element in the sequence and output channel c: LightConv(X, W_{⌈cH/d⌉,:}, i, c) = DepthwiseConv(X, softmax(W_{⌈cH/d⌉,:}), i, c). Weight sharing: We tie the parameters of every subsequent block of d/H channels, which reduces the number of parameters by a factor of d/H. As illustration, a regular convolution requires 7,340,032 (d^2 × k) weights for d = 1024 and k = 7, a depthwise separable convolution has 7,168 weights (d × k), and with weight sharing, H = 16, we have only 112 (H × k) weights.
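A minimal PyTorch sketch of LightConv as defined above (non-causal and unoptimized, for illustration only): weights of shape H × k are softmax-normalized over k, expanded to the d channels, and applied as a depthwise convolution.

```python
import torch
import torch.nn.functional as F

def lightconv(x, weight):
    """x: (B, T, d); weight: (H, k). Softmax-normalize the kernel over k,
    tie each head's kernel across d/H channels, and apply it as a
    depthwise convolution."""
    B, T, d = x.shape
    H, k = weight.shape
    w = F.softmax(weight, dim=-1)                        # normalize over kernel
    w = w.repeat_interleave(d // H, dim=0).unsqueeze(1)  # (d, 1, k), tied channels
    out = F.conv1d(x.transpose(1, 2), w, padding=k // 2, groups=d)
    return out.transpose(1, 2)
```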
We will see that this vast reduction in the number of parameters is crucial to make dynamic convolutions possible on current hardware. BID60 ties the weights of all channels (H = 1). Weight normalization: We normalize the weights W ∈ R^{H×k} across the temporal dimension k using a softmax operation: softmax(W)_{h,j} = exp(W_{h,j}) / Σ_{j'=1}^{k} exp(W_{h,j'}). Module: FIG1 shows the architecture of the module where we integrate LightConv. We first apply an input projection mapping from dimension d to 2d, followed by a gated linear unit (GLU), and the actual lightweight convolution. The GLU uses half of the inputs as gates by applying sigmoid units and then computes a pointwise product with the other inputs. We also apply an output projection of size W^O ∈ R^{d×d} to the output of LightConv. We found DropConnect to be a good regularizer for the LightConv module BID59. Specifically, we drop every entry of the normalized weights softmax(W) with probability p and divide it by 1 − p during training. This amounts to removing some of the temporal information within a channel. Implementation: Existing CUDA primitives for convolutions did not perform very well to implement LightConv and we found the following solution faster on short sequences: we copy and expand the normalized weights W ∈ R^{H×k} to a band matrix of size BH × n × n, where B is the batch size. We then reshape and transpose the inputs to size BH × n × d/H, and perform a batch matrix multiplication to get the outputs. We expect a dedicated CUDA kernel to be much more efficient. A dynamic convolution has kernels that vary over time as a learned function of the individual time steps. A dynamic version of standard convolutions would be impractical for current GPUs due to their large memory requirements. We address this problem by building on LightConv which drastically reduces the number of parameters (§3). DynamicConv takes the same form as LightConv but uses a time-step dependent kernel that is computed using a function f: DynamicConv(X, i, c) = LightConv(X, f(X_i)_{h,:}, i, c), where h = ⌈cH/d⌉. We model f with a simple linear module with learned weights W^Q ∈ R^{H×k×d}, i.e., f(X_i) = Σ_{c=1}^{d} W^Q_{h,j,c} X_{i,c}. Similar to self-attention, DynamicConv changes the weights assigned to context elements over time. However, the weights of DynamicConv do not depend on the entire context, they are a function of the current time-step only. Self-attention requires a quadratic number of operations in the sentence length to compute attention weights, while the computation of dynamic kernels for DynamicConv scales linearly in the sequence length. Our experiments (§6) show that models using DynamicConv match or exceed the performance of state-of-the-art models that use context-based self-attention. This challenges the typical intuitions about the importance of content-based self-attention in natural language processing applications. We use an encoder-decoder architecture for sequence to sequence learning BID54 and we closely follow the architectural choices presented in BID58. Our self-attention baseline is the fairseq re-implementation of the Transformer Big architecture. The encoder and decoder networks have N blocks each. Encoder blocks contain two sub-blocks: the first is a self-attention module (§2), a LightConv module, or a DynamicConv module (§4). The second sub-block is a feed-forward module FFN(X) = W_2 max(0, X W_1 + b_1) + b_2 with W_1 ∈ R^{d×d_ff} and W_2 ∈ R^{d_ff×d}; we set d_ff = 4096 unless otherwise stated. Sub-blocks are surrounded by residual connections BID22 and layer normalization BID1. Decoder blocks are identical except that they have an additional source-target attention sub-block between the self-attention and feed-forward module.
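DynamicConv can be sketched in the same spirit; here the kernel for every position is predicted by a linear layer from that position's input alone, softmax-normalized over k, and applied to a k-wide window with channels tied in H groups. This unfold-based version is written for clarity, not efficiency, and is non-causal; the class is our own sketch, not the fairseq implementation.

```python
import torch
import torch.nn.functional as F

class DynamicConv1d(torch.nn.Module):
    """A linear layer predicts an (H, k) kernel per position (the function f
    in the text), which is then applied as a position-specific LightConv."""
    def __init__(self, d, H=16, k=7):
        super().__init__()
        self.d, self.H, self.k = d, H, k
        self.kernel_proj = torch.nn.Linear(d, H * k)     # the function f

    def forward(self, x):                                # x: (B, T, d)
        B, T, d = x.shape
        w = self.kernel_proj(x).view(B, T, self.H, self.k)
        w = F.softmax(w, dim=-1)                         # normalize over k
        pad = self.k // 2
        ctx = F.pad(x, (0, 0, pad, pad)).unfold(1, self.k, 1)  # (B, T, d, k)
        ctx = ctx.reshape(B, T, self.H, d // self.H, self.k)
        out = (w.unsqueeze(3) * ctx).sum(-1)             # weighted sum over window
        return out.reshape(B, T, d)
```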
The source-target attention is equivalent to the self-attention module, except that the values and keys are projections over the encoder output for each source word. Words are fed to the encoder and decoder networks in d-dimensional embeddings. We add sinusoidal position embeddings to encode the absolute position of each word in the sequence BID58. The model computes a distribution over vocabulary V by transforming the decoder output via a linear layer with weights W_V ∈ R^{d×V} followed by softmax normalization. LightConv and DynamicConv are identical to Transformer Big, except that self-attention modules are swapped with either fixed or dynamic convolutions. These models also use fewer parameters per block (cf. FIG1) and we therefore increase the number of blocks to N = 7 for the encoder to roughly match the parameter count of Transformer Big. We generally set H = 16. Both LightConv and DynamicConv set the encoder and decoder kernel sizes to 3, 7, 15, and 31x4 for each block respectively, except for the decoder where we have only three top layers with kernel size 31. To get a thorough understanding of the limitations of LightConv and DynamicConv we evaluate on three different tasks: machine translation, language modeling and abstractive summarization. Machine Translation: We report results on four benchmarks. For WMT English to German (En-De) we replicate the setup of BID58, based on WMT'16 training data with 4.5M sentence pairs; we validate on newstest2013 and test on newstest2014. The vocabulary is a 32K joint source and target byte pair encoding (BPE; BID44). For WMT English to French (En-Fr), we borrow the setup of BID15 with 36M training sentence pairs from WMT'14; we validate on newstest2012+2013 and test on newstest2014. The 40K vocabulary is based on a joint source and target BPE factorization. For WMT English to Chinese (Zh-En), we pre-process the WMT'17 training data following BID21, resulting in 20M sentence pairs. We develop on devtest2017 and test on newstest2017. For IWSLT'14 German-English (De-En) we replicate the standard setup with 160K training sentence pairs and a 10K joint BPE vocabulary. For this benchmark only, data is lowercased. For WMT En-De and WMT En-Fr, we measure case-sensitive tokenized BLEU. For WMT En-De only we apply compound splitting similar to BID58. For WMT Zh-En we measure detokenized BLEU to be comparable to BID21. We train three random initializations of each configuration and report test accuracy of the seed which resulted in the highest validation BLEU. Ablations are conducted on the validation set and we report the mean BLEU and standard deviation on this set. Results for WMT En-De and WMT En-Fr are based on beam search with a beam width of 5, IWSLT uses beam 4, and WMT Zh-En beam 8 following BID21. For all datasets, we tune a length penalty as well as the number of checkpoints to average on the validation set. Language Modeling: We evaluate on the large-scale Billion word dataset BID4 which contains 768M tokens and has a vocabulary of nearly 800K types. Sentences in this dataset are shuffled and we batch sentences independently of each other. Models are evaluated in terms of perplexity on the valid and test portions. Summarization: We test the model's ability to process long documents on the CNN-DailyMail summarization task BID23 BID37 comprising over 280K news articles paired with multi-sentence summaries. Articles are truncated to 400 tokens BID43 and we use a BPE vocabulary of 30K types. We evaluate in terms of F1-Rouge, that is Rouge-1, Rouge-2 and Rouge-L BID33.
When generating summaries, we follow standard practice in tuning the maximum output length, disallowing repeating the same trigram, and we apply a stepwise length penalty BID40. Translation: We use a dropout rate of 0.3 for WMT En-De and IWSLT De-En, 0.1 for WMT En-Fr, and 0.25 for WMT Zh-En. WMT models are optimized with Adam and a cosine learning rate schedule BID29 BID35 where the learning rate is first linearly warmed up for 10K steps from 10^−7 to 10^−3 and then annealed following a cosine rate with a single cycle. For IWSLT'14 De-En, we use a schedule based on the inverse square root of the current step BID58. We train the WMT models on 8 NVIDIA V100 GPUs for a total of 30K steps on WMT En-De, 40K steps for WMT Zh-En and 80K steps for WMT En-Fr. For IWSLT De-En we train for 50K steps on a single GPU. We use 16-bit floating point precision and accumulate the gradients for 16 batches before applying an update, except for IWSLT where we do not accumulate gradients. Batches contain up to 459K source tokens and the same number of target tokens for both WMT En-De and WMT Zh-En, 655K for En-Fr, and 4K for IWSLT De-En. We use label smoothing with 0.1 weight for the uniform prior distribution over the vocabulary BID55 BID41. Language Modeling: We follow the same setup as for translation but remove the encoder module. For the Billion word benchmark we use an adaptive softmax output layer to reduce the computational burden of the large vocabulary BID18 BID42. We train on 32 GPUs with batches of 65K tokens for 975K updates. As optimizer we use Nesterov's accelerated gradient method BID53 with a momentum value of 0.99 and we renormalize gradients if their norm exceeds 0.1 BID39. The learning rate is linearly warmed up from 10^−7 to 1 for 16K steps and then annealed using a cosine learning rate schedule BID35 with one cycle. Summarization: We train with Adam using the cosine learning rate schedule with a warmup of 10K steps and a period of 20K updates. We use weight decay 1e-3 and dropout 0.3. We first report results on WMT En-De and WMT En-Fr where we compare to the best results in the literature, most of which are based on self-attention. Table 1 shows that LightConv performs very competitively and only trails the state of the art by 0.1 BLEU on WMT En-Fr; the state of the art is based on self-attention. This is despite the simplicity of LightConv which operates with a very small number of fixed weights over all time steps, whereas self-attention computes dot-products with all context elements at every time-step. DynamicConv outperforms the best known result on WMT En-De by 0.4 BLEU and achieves a new state of the art, whereas on WMT En-Fr it matches the state of the art. This shows that content-based self-attention is not necessary to achieve good accuracy on large translation benchmarks. IWSLT is a much smaller benchmark and we therefore switch to a smaller architecture: d_ff = 1024, d = 512, and H = 4. The self-attention baseline on this dataset is the best result reported in the literature (TAB1). (Table 3 caption: Ablation on WMT English-German newstest2013. (+) indicates that a result includes all preceding features. Speed based on beam size 4, batch size 256 on an NVIDIA P100 GPU.) In this section we evaluate the impact of the various choices we made for LightConv (§3) and DynamicConv (§4). We first show that limiting the maximum context size of self-attention has no impact on validation accuracy (Table 3). Note that our baseline is stronger than the original result of BID58.
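The warmup-plus-single-cycle cosine schedule described above can be written in a few lines; the constants mirror the WMT En-De settings (10K warmup steps, 30K total, 10^−7 to 10^−3) and the helper name is ours.

```python
import math

def cosine_lr(step, warmup=10_000, total=30_000, lr_min=1e-7, lr_max=1e-3):
    """Linear warmup from lr_min to lr_max, then a single cosine cycle
    annealing back toward lr_min."""
    if step < warmup:
        return lr_min + (lr_max - lr_min) * step / warmup
    t = (step - warmup) / max(1, total - warmup)   # progress in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```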
Next, we replace the self-attention blocks with non-separable convolutions (CNN) with kernel size 3 and input/output dimension d = 1024. The CNN block has no input and output projections compared to the baseline, and we add one more encoder layer to match the parameter count. This CNN with a narrow kernel trails self-attention by 1 BLEU. We improve on this by switching to a depthwise separable convolution (CNN Depthwise) with input and output projections of size d = 1024. Progressively increasing the kernel width from lower to higher layers further improves accuracy, narrowing the gap to self-attention to only 0.5 BLEU. DropConnect gives a slight performance improvement, and weight sharing does not decrease performance. Adding softmax normalization to the weights is only 0.3 BLEU below the accuracy of the baseline; this configuration corresponds to LightConv. In Appendix A we compare softmax-normalization to various alternatives. Finally, dynamic convolutions (DynamicConv) achieve the same validation accuracy as self-attention with slightly fewer parameters and at 20% higher inference speed. Softmax-normalization is important for DynamicConv, since training diverged in our experiments when we removed it. To make the models more comparable, we do not introduce a GLU after the input projection. For comparison, we re-implemented averaged attention networks (AAN), which compute a uniform average over past model states instead of a weighted average as in self-attention. Our re-implementation is efficient: we measure 129 sentences/sec for a base transformer-AAN on newstest2014, compared to 20 sentences/sec for the original implementation. Table 3 shows that our models outperform this approach. Note that AANs still use self-attention in the encoder network, whereas our approach does away with self-attention both in the encoder and the decoder. As a second task we consider language modeling on the Billion Word benchmark. The self-attention baseline has N = 16 blocks, each with a self-attention module and a feed-forward module using d_ff = 4096 and d = 1024. DynamicConv uses N = 17 blocks to match the parameter count, and we use kernel sizes 15×2, 31×4 and 63×11. TAB3 shows that DynamicConv achieves slightly better perplexity than our self-attention baseline, which is itself very competitive. TAB4 shows that LightConv outperforms the self-attention baseline as well as comparable previous work, and DynamicConv performs even better. We also show results for a reinforcement learning approach BID3, and note that RL is equally applicable to our architecture. We presented lightweight convolutions, which perform competitively to the best results reported in the literature despite their simplicity. They have a very small parameter footprint, and the kernel does not change over time-steps. This demonstrates that self-attention is not critical to achieve good accuracy on the language tasks we considered. Dynamic convolutions build on lightweight convolutions by predicting a different kernel at every time-step, similar to the attention weights computed by self-attention. The dynamic weights are a function of the current time-step only, rather than the entire context. Our experiments show that lightweight convolutions can outperform a strong self-attention baseline on WMT'17 Chinese-English translation, IWSLT'14 German-English translation and CNN-DailyMail summarization. Dynamic convolutions improve further and achieve a new state of the art on the test set of WMT'14 English-German.
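Since DynamicConv differs from LightConv only in how the kernel is obtained, a sketch of the kernel-prediction step may help. The following PyTorch fragment predicts one softmax-normalized kernel per head and time-step from the current input via a learned linear projection, as described above; all names are ours, and the window-based implementation is one of several possible renderings, not the authors' (more efficient) one.

```python
# Minimal sketch of a dynamic convolution: the kernel is a linear function of
# the current time-step only, softmax-normalized over its K taps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    def __init__(self, d_model, kernel_size, n_heads):
        super().__init__()
        self.H, self.K = n_heads, kernel_size
        # Kernel weights are predicted from the current position's features.
        self.weight_proj = nn.Linear(d_model, n_heads * kernel_size)

    def forward(self, x):
        # x: (batch, time, d_model)
        B, T, d = x.shape
        H, K = self.H, self.K
        R = d // H  # channels tied to each head
        # Predict one K-tap kernel per head and time-step, then normalize it.
        w = F.softmax(self.weight_proj(x).reshape(B, T, H, K), dim=-1)
        # Gather a causal window of K past inputs for every position.
        x_pad = F.pad(x, (0, 0, K - 1, 0))          # pad the time axis on the left
        windows = x_pad.unfold(1, K, 1)             # (B, T, d, K)
        windows = windows.reshape(B, T, H, R, K)
        # Weighted sum over the window, sharing each head kernel across R channels.
        out = torch.einsum('bthrk,bthk->bthr', windows, w)
        return out.reshape(B, T, d)
```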
Both lightweight convolution and dynamic convolution are 20% faster at runtime than self-attention. On Billion Word language modeling we achieve results comparable to self-attention. We are excited about the future of dynamic convolutions and plan to apply them to other tasks such as question answering and computer vision, where inputs are even larger than in the tasks we considered in this paper. We compare our proposed softmax-normalization of the weights to other alternatives in Table 6. For each setting, we use three seeds and report the mean and the standard deviation of the BLEU score on WMT English-German newstest2013. The softmax and norms are computed over the kernel dimension. Simply using the absolute value of the weights or squaring them does not make training more stable, which shows that having all non-negative weights is not sufficient on its own. Dividing the weights by the ℓ2-norm or bounding the weights with the sigmoid or the hyperbolic tangent function also stabilizes the training procedure; however, softmax-normalization performs best, at 26.7 ± 0.2 BLEU (Table 6: Alternatives to softmax-normalization in DynamicConv on WMT English-German newstest2013; ε = 10⁻⁶). In this section we compare DynamicConv to current non-autoregressive models in the literature. We measured generation speed for DynamicConv on a P100 GPU using batch size one, to be comparable with other results. Results in the literature are based on either NVIDIA GTX-1080 GPUs or P100 GPUs; the effect of the different GPU types is likely negligible because GPUs are vastly underutilized at batch size one. Table 7 shows that DynamicConv with a single decoder layer outperforms all previously reported non-autoregressive results, both in terms of speed and accuracy. Only two concurrent non-autoregressive efforts, BID20 and BID32, achieve a speedup over DynamicConv, with a small drop in BLEU. Notably, both BID20 and BID32 distill autoregressive models into non-autoregressive models BID24 in order to improve their results.

Table 7: Inference speed of non-autoregressive models and small decoder versions of DynamicConv on WMT English-German newstest2014. For some models, the decoding speed (sent/sec) is derived by taking the inverse of the sentence generation latency reported in the literature.

Model (batch size = 1, beam size = 1) | Param | BLEU | Sent/sec | Tok/sec
NAT (+ FT) | - | 17.7 | 25.6 | -
NAT (+ FT + NPD=10) BID19 | - | 18.7 | 12.7 | -
NAT (+ FT + NPD=100) BID19 | | | |
DynamicConv (6 decoder layers, k = 3, 7, 15, 31, 31, 31) | 200M | 28.5 | 3.9 | 110.9
Dynamic lightweight convolutions are competitive to self-attention on language tasks.
1,449
scitldr
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence. We first show that ordinary, "vanilla" MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an "extra-gradient" step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets). The surge of recent breakthroughs in deep learning has sparked significant interest in solving optimization problems that are universally considered hard. Accordingly, the need for an effective theory has two different sides: first, a deeper understanding would help demystify the reasons behind the successes and/or failures of different training algorithms; second, theoretical advances can inspire effective algorithmic tweaks leading to concrete performance gains. For instance, using tools from the theory of dynamical systems, BID28 BID29 and Panageas & Piliouras showed that a wide variety of first-order methods (including gradient descent and mirror descent) almost always avoid saddle points. More generally, the optimization and machine learning communities alike have dedicated significant effort to understanding non-convex landscapes by searching for properties which could be leveraged for efficient training. As an example, the "strict saddle" property was shown to hold in a wide range of salient objective functions, ranging from low-rank matrix factorization BID8 BID20 and dictionary learning, to principal component analysis BID19, and many other models. On the other hand, adversarial deep learning is nowhere near as well understood, especially in the case of generative adversarial networks (GANs) BID22. Despite an immense amount of recent scrutiny, our theoretical understanding cannot boast breakthroughs similar to those in "single-agent" deep learning. Because of this, a considerable corpus of work has been devoted to exploring and enhancing the stability of GANs, including techniques as diverse as the use of Wasserstein metrics, critic gradient penalties BID23, feature matching, minibatch discrimination, etc. Even before the advent of GANs, work on adaptive dynamics in general bilinear zero-sum games (e.g., Rock-Paper-Scissors) established that they lead to persistent, chaotic, recurrent (i.e., cycle-like) behavior. Recently, simple specific instances of cycle-like behavior in bilinear games have been revisited, mainly through the lens of GANs BID15.
Two important recent results have established a unified picture of the behavior of continuous- and discrete-time first-order methods in bilinear games. First, it has been established that continuous-time descent methods in zero-sum games (e.g., gradient descent, follow-the-regularized-leader and the like) are Poincaré recurrent, returning arbitrarily close to their initial conditions infinitely many times. Second, BID4 examined the discrete-time analogues (gradient descent, multiplicative weights and follow-the-regularized-leader), showing that orbits spiral slowly outwards. These recurrent systems have formal connections to Hamiltonian dynamics and do not behave in a gradient-like fashion BID6 BID5. This is a critical failure of descent methods, but one which BID15 showed can be overcome through "optimism", interpreted in this context as an "extra-gradient" step that pushes the training process further along the incumbent gradient; as a result, optimistic gradient descent (OGD) succeeds in cases where vanilla gradient descent (GD) fails (specifically, unconstrained bilinear saddle-point problems). A common theme in the above is that, to obtain a principled methodology for training GANs, it is beneficial to first establish improvements in a more restricted setting, and then test whether these gains carry over to more demanding learning environments. Following these theoretical breadcrumbs, we focus on a class of non-monotone problems whose solutions are related to those of a naturally associated variational inequality, a property which we call coherence. Then, hoping to overcome the shortcomings of ordinary descent methods by exploiting the problem's geometry, we examine the convergence of MD in coherent problems. On the positive side, we show that if a problem is strictly coherent (a condition satisfied by all strictly convex-concave problems), MD converges almost surely, even in stochastic problems (Theorem 3.1). However, under null coherence (the "saturated" opposite of strict coherence), MD spirals outwards from the problem's solutions and may cycle in perpetuity. The null coherence property covers all bilinear models, so this fully encompasses the analysis of BID4 for GD and follow-the-regularized-leader (FTRL) in general bilinear zero-sum games within our coherence framework. Thus, in and by themselves, gradient/mirror descent methods do not suffice for training convoluted, adversarial deep learning models. To mitigate this deficiency, we consider the addition of an extra-gradient step which looks ahead and takes an additional step along a "future" gradient. This technique was first introduced by BID27 and subsequently gained great popularity as the basis of the mirror-prox algorithm of Nemirovski, which achieves an optimal O(1/n) convergence rate in Lipschitz monotone variational inequalities (see also the literature for a primal-dual variant of the method, and BID25 for an extension to stochastic variational inequalities and saddle-point problems). In the learning literature, the extra-gradient technique (or, sometimes, a variant thereof) is often referred to as optimistic mirror descent (OMD), and its effectiveness in GAN training was recently examined by BID15 and Yadav et al. (the latter involving a damping mechanism for only one of the players). More recently, BID21 considered a variant method which incorporates a mechanism that "extrapolates from the past" in order to circumvent the need for a second oracle call in the extra-gradient step.
Specifically, BID21 showed that the extra-gradient algorithm with gradient reuse converges a) geometrically in strongly monotone, deterministic variational inequalities; and b) ergodically in general stochastic variational inequalities, achieving in that case an oracle complexity bound that is √(13/7)/2 ≈ 68% of a bound previously established by BID25 for the mirror-prox algorithm. However, beyond convex-concave problems, averaging offers no tangible benefits because there is no way to relate the value of the ergodic average to the value of the iterates. As a result, moving closer to GAN training requires changing both the algorithm's output as well as the accompanying analysis. With this as our guiding principle, we first show that the last iterate of OMD converges in all coherent problems, including null-coherent ones. As a special case, this generalizes and extends the results of Noor et al. for OGD in pseudo-monotone problems, and also settles in the affirmative an open question of BID15 concerning the convergence of the last iterate of OGD in nonlinear problems. Going beyond deterministic problems, we also show that OMD converges with probability 1 even in stochastic saddle-point problems that are strictly coherent. These results complement the existing literature on the topic by showing that a cheap extra-gradient add-on can lead to significant performance gains when applied to state-of-the-art methods (such as Adam). We validate this prediction for a wide array of standard GAN models in Section 5. Saddle-point problems. Consider a saddle-point problem of the general form

min_{x₁ ∈ X₁} max_{x₂ ∈ X₂} f(x₁, x₂), (SP)

where each feasible region Xᵢ, i = 1, 2, is a compact convex subset of a finite-dimensional normed space Vᵢ ≡ ℝ^{dᵢ}, and f: X ≡ X₁ × X₂ → ℝ denotes the problem's value function. From a game-theoretic standpoint, (SP) can be seen as a zero-sum game between two optimizing agents (or players): Player 1 (the minimizer) seeks to incur the least possible loss, while Player 2 (the maximizer) seeks to obtain the highest possible reward, both determined by f(x₁, x₂). To solve (SP), we will focus on incremental processes that exploit the individual loss/reward gradients of f (assumed throughout to be at least C¹-smooth). Since the individual gradients of f will play a key role in our analysis, we will encode them in a single vector as

g(x) = (∇_{x₁} f(x₁, x₂), −∇_{x₂} f(x₁, x₂)),

and, following standard conventions, we will treat g(x) as an element of Y ≡ V*, the dual of the ambient space V ≡ V₁ × V₂, assumed to be endowed with the product norm ‖x‖² = ‖x₁‖² + ‖x₂‖². Variational inequalities and coherence. Most of the literature on saddle-point problems has focused on the monotone case, i.e., when f is convex-concave. In such problems, it is well known that solutions of (SP) can be characterized equivalently as solutions of the Stampacchia variational inequality

⟨g(x*), x − x*⟩ ≥ 0 for all x ∈ X, (SVI)

or, in Minty form:

⟨g(x), x − x*⟩ ≥ 0 for all x ∈ X. (MVI)

The equivalence between solutions of (SP), (SVI) and (MVI) extends well beyond the realm of monotone problems: it trivially includes all bilinear problems (f(x₁, x₂) = x₁ᵀMx₂), pseudo-monotone objectives as in Noor et al., etc.
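To make the sign convention for g(x) concrete, the following short PyTorch sketch assembles (∇_{x₁}f, −∇_{x₂}f) by automatic differentiation for an illustrative bilinear objective; the function and variable names are ours.

```python
# Minimal sketch of the gradient field g(x) = (grad_x1 f, -grad_x2 f).
import torch

def g(f, x1, x2):
    """Return (grad_x1 f, -grad_x2 f) at the point (x1, x2)."""
    value = f(x1, x2)
    g1, g2 = torch.autograd.grad(value, (x1, x2))
    return g1, -g2  # the minimizer descends, the maximizer ascends

# Illustrative bilinear objective f(x1, x2) = x1^T M x2.
M = torch.randn(3, 3)
f = lambda a, b: a @ M @ b
x1 = torch.randn(3, requires_grad=True)
x2 = torch.randn(3, requires_grad=True)
g1, g2 = g(f, x1, x2)
```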
For a concrete example which is not even pseudo-monotone, consider the problem min DISPLAYFORM3. The only saddle-point of f is x*; it is easy to check that x* is also the unique solution of the corresponding variational inequality (VI) problems, despite the fact that f is not even pseudo-monotone. This shows that this equivalence encompasses a wide range of phenomena that are innately incompatible with convexity/monotonicity, even in the lowest possible dimension; for an in-depth discussion we refer the reader to BID16. Motivated by all this, we introduce below the following notion of coherence:

Definition 2.1. We say that (SP) is coherent if:
1. Every solution of (SVI) also solves (SP).
2. There exists a solution p of (SP) that satisfies (MVI).
3. Every solution x* of (SP) satisfies (MVI) locally, i.e., for all x sufficiently close to x*.

In the above, if (MVI) holds as a strict inequality whenever x is not a solution thereof, (SP) will be called strictly coherent; by contrast, if (MVI) holds as an equality for all x ∈ X, we will say that (SP) is null-coherent. The notion of coherence will play a central part in our considerations, so a few remarks are in order. To the best of our knowledge, its first antecedent is a gradient condition examined by BID9 in the context of nonlinear programming; we borrow the term "coherence" from the more recent paper of Zhou et al. [2017b], who used the term "variational coherence" for a stronger variant of the above definition. We should also note here that the solution set of a coherent problem need not be convex: for instance, if Player 1 controls (x, y) and the objective function is f(x, y) = x²y² (i.e., Player 2 has no impact on the game), the set of solutions is the non-convex set X* = {(x, y): x = 0 or y = 0}. Moreover, regarding the distinction between coherence and strict coherence, we show in Appendix A that (SP) is strictly coherent when f is strictly convex-concave. At the other end of the spectrum, typical examples of null-coherent problems are bilinear objectives with an interior solution: for instance, f(x₁, x₂) = x₁x₂ with x₁, x₂ ∈ [−1, 1] has ⟨g(x), x⟩ = x₁x₂ − x₂x₁ = 0 for all x₁, x₂ ∈ [−1, 1], so it is null-coherent. Finally, neither strict nor null coherence implies a unique solution to (SP), a property which is particularly relevant for GANs (the first example above is strictly coherent, but does not admit a unique solution). The method. Motivated by its prolific success in convex programming, our starting point will be the well-known mirror descent (MD) method of Nemirovski & Yudin, suitably adapted to our saddle-point context. Several variants of the method exist, ranging from dual averaging to follow-the-regularized-leader; for a survey, we refer the reader to BID12. The basic idea of mirror descent is to generate a new state variable x⁺ from some starting state x by taking a "mirror step" along a gradient-like vector y. To do this, let h: X → ℝ be a continuous and K-strongly convex distance-generating function (DGF) on X, i.e.,

h(tx + (1 − t)x′) ≤ t h(x) + (1 − t) h(x′) − (K/2) t (1 − t) ‖x − x′‖²

for all x, x′ ∈ X and all t ∈ [0, 1]. In terms of smoothness (and in a slight abuse of notation), we also assume that the subdifferential of h admits a continuous selection, i.e., a continuous function ∇h: dom ∂h → Y such that ∇h(x) ∈ ∂h(x) for all x ∈ dom ∂h. Then, following BID11, h generates a pseudo-distance on X via the relation

D(p, x) = h(p) − h(x) − ⟨∇h(x), p − x⟩.

This pseudo-distance is known as the Bregman divergence.
As we show in Appendix B, we have D(p, x) ≥ (K/2)‖x − p‖², so the convergence of a sequence Xₙ to some target point p can be verified by showing that D(p, Xₙ) → 0. On the other hand, D(p, x) typically fails to be symmetric and/or satisfy the triangle inequality, so it is not a true distance function per se.

Algorithm 1: mirror descent (MD) for saddle-point problems
Require: K-strongly convex regularizer h: X → ℝ, step-size sequence γₙ > 0
1: choose X ∈ dom ∂h # initialization
2: for n = 1, 2,... do
3:   oracle query at X returns ĝ # gradient feedback
4:   set X ← P_X(−γₙ ĝ) # new state
5: end for
6: return X

Moreover, the level sets of D(p, x) may fail to form a neighborhood basis of p, so the convergence of Xₙ to p does not necessarily imply that D(p, Xₙ) → 0; we provide an example of this behavior in Appendix B. For technical reasons, it will be convenient to assume that such phenomena do not occur, i.e., that D(p, Xₙ) → 0 whenever Xₙ → p. This mild regularity condition is known in the literature as "Bregman reciprocity" BID13 BID0 BID30 BID10, and it will be our standing assumption in what follows (note also that it holds trivially for both Examples 3.1 and 3.2 below). Now, as with standard Euclidean distances, the Bregman divergence generates an associated prox-mapping defined as

P_x(y) = arg min_{x′ ∈ X} {⟨y, x − x′⟩ + D(x′, x)}. (3.3)

In analogy with the Euclidean case (discussed below), the prox-mapping (3.3) produces a feasible point x⁺ = P_x(y) by starting from x ∈ dom ∂h and taking a step along a dual (gradient-like) vector y ∈ Y. In this way, we obtain the mirror descent (MD) algorithm

X_{n+1} = P_{X_n}(−γₙ ĝₙ), (MD)

where γₙ is a variable step-size sequence and ĝₙ is the calculated value of the gradient vector g(Xₙ) at the n-th stage of the algorithm (for a pseudocode implementation, see Algorithm 1 above). For concreteness, two widely used examples of prox-mappings are as follows. Example 3.1 (Euclidean projections). When X is endowed with the L2 norm ‖·‖₂, the archetypal prox-function is the (square of the) norm itself, i.e., h(x) = ½‖x‖₂², and the induced prox-mapping is

P_x(y) = Π(x + y), (3.4)

with Π(x) = arg min_{x′ ∈ X} ‖x − x′‖₂ denoting the ordinary Euclidean projection onto X. Example 3.2 (Entropic regularization). When X is a d-dimensional simplex, a widely used DGF is the (negative) Gibbs-Shannon entropy h(x) = Σ_{j=1}^d xⱼ log xⱼ. This function is 1-strongly convex with respect to the L1 norm, and the associated pseudo-distance is the Kullback-Leibler divergence D_KL(p, x) = Σ_{j=1}^d pⱼ log(pⱼ/xⱼ); in turn, this yields the prox-mapping

[P_x(y)]ⱼ = xⱼ exp(yⱼ) / Σ_{k=1}^d xₖ exp(yₖ). (3.5)

The update rule x ← P_x(y) is known in the literature as the multiplicative weights (MW) algorithm BID2, and is one of the centerpieces for learning in games BID18 BID17 BID14, adversarial bandits BID3, etc. Regarding the gradient input sequence ĝₙ of (MD), we assume that it is obtained by querying a first-order oracle which outputs an estimate of g(Xₙ) when called at Xₙ. This oracle could be either perfect, returning ĝₙ = g(Xₙ) for all n, or imperfect, providing noisy gradient estimations. By that token, we will make the following standard assumptions for the gradient feedback sequence ĝₙ BID25:

a) Unbiasedness: E[ĝₙ | Fₙ] = g(Xₙ);
b) Finite mean squared error: E[‖ĝₙ − g(Xₙ)‖²_* | Fₙ] ≤ σ². (3.6)

In the above, ‖y‖_* ≡ sup{⟨y, x⟩ : x ∈ V, ‖x‖ ≤ 1} denotes the dual norm on Y, while Fₙ represents the history (natural filtration) of the generating sequence Xₙ up to stage n (inclusive). Since ĝₙ is generated randomly from Xₙ at stage n, it is obviously not Fₙ-measurable; i.e., ĝₙ = g(Xₙ) + U_{n+1}, where Uₙ is an adapted martingale difference sequence with E[‖U_{n+1}‖²_* | Fₙ] ≤ σ² for some finite σ ≥ 0.
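The two prox-mappings of Examples 3.1 and 3.2 are easy to state in code. The following NumPy sketch (our own naming; the projection oracle for the Euclidean case is assumed to be given) shows both, together with one mirror-descent step on the simplex.

```python
# Minimal sketch of the Euclidean and entropic prox-mappings.
import numpy as np

def prox_euclidean(x, y, project):
    """Euclidean prox step: P_x(y) = Pi(x + y) for a projection Pi onto X."""
    return project(x + y)

def prox_entropic(x, y):
    """Multiplicative-weights step on the simplex: x_j * exp(y_j), renormalized."""
    z = x * np.exp(y - y.max())  # subtract the max for numerical stability
    return z / z.sum()

# One mirror-descent step on the simplex with gradient feedback g_hat.
x = np.ones(4) / 4
g_hat = np.array([0.3, -0.1, 0.2, 0.0])
x_next = prox_entropic(x, -0.1 * g_hat)  # step-size gamma = 0.1
```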
Clearly, when σ = 0, we recover the exact gradient feedback framework ĝₙ = g(Xₙ). Convergence analysis. When (SP) is convex-concave, it is customary to take as the output of (MD) the so-called ergodic average

X̄ₙ = Σ_{k=1}^n γₖ Xₖ / Σ_{k=1}^n γₖ, (3.7)

or some other average of the sequence Xₙ where the objective is sampled. The reason for this is that convexity guarantees, via Jensen's inequality and gradient monotonicity, that a regret-based analysis of (MD) can lead to explicit rates for the convergence of X̄ₙ to the solution set of (SP). However, when the problem is not convex-concave, the standard proof techniques for establishing convergence of the method's ergodic average no longer apply; instead, we need to examine the convergence properties of the generating sequence Xₙ of (MD) directly. With all this in mind, our main result for (MD) may be stated as follows:

Theorem 3.1. Suppose that (MD) is run with a gradient oracle satisfying (3.6) and a variable step-size sequence γₙ such that Σ_{n=1}^∞ γₙ = ∞ and Σ_{n=1}^∞ γₙ² < ∞. Then: (a) if (SP) is strictly coherent, Xₙ converges to a solution of (SP) with probability 1; (b) if (SP) is null-coherent, E[D(x*, Xₙ)] is non-decreasing for every solution x* of (SP).

This establishes an important dichotomy between strict and null coherence: in strictly coherent problems, Xₙ is attracted to the solution set of (SP); in null-coherent problems, Xₙ drifts away and cycles without converging. In particular, this dichotomy leads to the following immediate corollaries. Corollary 3.2. Suppose that f is strictly convex-concave. Then, with assumptions as above, Xₙ converges (a.s.) to the (necessarily unique) solution of (SP). Corollary 3.3. Suppose that f is bilinear and admits an interior saddle-point x* ∈ X°. If X₁ ≠ x* and (MD) is run with exact gradient input (σ = 0), we have lim_{n→∞} D(x*, Xₙ) > 0. Since bilinear models include all finite two-player, zero-sum games, Corollary 3.3 also encapsulates the non-convergence results of BID15 and Bailey & Piliouras for gradient descent and FTRL, respectively (for a more comprehensive formulation, see Proposition C.3 in Appendix C). The failure of (MD) to converge in this case is due to the fact that, without a mitigating mechanism in place, a "blind" first-order step could overshoot and spiral outwards, even with a vanishing step-size. This becomes even more pronounced in GANs, where it can lead to mode collapse and/or cycles between different modes; the next two sections address precisely these issues.

4 Extra-gradient analysis

The method. In convex-concave problems, taking an average of the algorithm's generated samples as in (3.7) may resolve cycling phenomena by inducing an auxiliary sequence that gravitates towards the "center of mass" of the driving sequence Xₙ (which orbits interior solutions). However, this technique cannot be employed in problems that are not convex-concave, because the structure of f cannot be leveraged to establish convergence of the ergodic average of the process. In view of this, we replace averaging with an optimistic "extra-gradient" step which uses the obtained information to amortize the next prox step (possibly by exiting the convex hull of generated states). The seed of this "extra-gradient" idea dates back to BID27 and Nemirovski, and has since found wide applications in optimization theory and beyond; for a survey, see BID12 and references therein. In a nutshell, given a state x, the extra-gradient method first generates an intermediate, "waiting" state x̂ = P_x(−γ g(x)) by taking a prox step as usual. However, instead of continuing from x̂, the method samples g(x̂) and goes back to the original state x in order to generate a new state x⁺ = P_x(−γ g(x̂)).
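Concretely, one extra-gradient iteration can be sketched in a few lines; the NumPy rendering below uses our own names, an identity "projection" (i.e., an unconstrained problem), and the bilinear objective f(x₁, x₂) = x₁x₂, for which vanilla gradient steps spiral outwards while the extra-gradient step converges to the saddle-point at the origin.

```python
# Minimal sketch of one extra-gradient step with the Euclidean prox-mapping.
import numpy as np

def extragradient_step(x, grad, gamma, project):
    x_wait = project(x - gamma * grad(x))      # intermediate "waiting" state
    return project(x - gamma * grad(x_wait))   # update from the ORIGINAL state

def grad(z):
    x1, x2 = z
    return np.array([x2, -x1])                 # g(x) for f(x1, x2) = x1 * x2

z = np.array([1.0, 1.0])
for _ in range(200):
    z = extragradient_step(z, grad, 0.2, project=lambda v: v)
print(z)  # approaches the saddle-point at the origin
```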
Based on this heuristic, we obtain the optimistic mirror descent (OMD) algorithm

X_{n+1/2} = P_{X_n}(−γₙ ĝₙ), X_{n+1} = P_{X_n}(−γₙ ĝ_{n+1/2}), (OMD)

where, in obvious notation, ĝₙ and ĝ_{n+1/2} represent gradient oracle queries at the incumbent and intermediate states Xₙ and X_{n+1/2}, respectively.

Algorithm 2: optimistic mirror descent (OMD) for saddle-point problems
Require: K-strongly convex regularizer h: X → ℝ, step-size sequence γₙ > 0
1: choose X ∈ dom ∂h # initialization
2: for n = 1, 2,... do
3:   oracle query at X returns ĝ # gradient feedback
4:   set X⁺ ← P_X(−γₙ ĝ) # waiting state
5:   oracle query at X⁺ returns ĝ⁺ # gradient feedback
6:   set X ← P_X(−γₙ ĝ⁺) # new state
7: end for
8: return X

For a pseudocode implementation, see Algorithm 2; see also Rakhlin & Sridharan and BID15 for a variant of the method with a "momentum" step, and BID21 for a gradient reuse mechanism that replaces the second oracle call with a past gradient. Convergence analysis. In his original analysis, Nemirovski considered the ergodic average (3.7) of the algorithm's iterates and established an O(1/n) convergence rate in monotone problems. However, as we explained above, even though this kind of averaging is helpful in convex-concave problems, it does not provide any tangible benefits beyond this class: in more general problems, Xₙ appears to be the most natural solution candidate. Our first result below justifies this choice in the class of coherent problems:

Theorem 4.1. Suppose that (SP) is coherent and g is L-Lipschitz continuous. If (OMD) is run with exact gradient input (σ = 0) and γₙ such that 0 < infₙ γₙ ≤ supₙ γₙ < K/L, the sequence Xₙ converges monotonically to a solution x* of (SP), i.e., D(x*, Xₙ) decreases monotonically to 0.

Corollary 4.2. Suppose that f is bilinear. If (OMD) is run with assumptions as above, the sequence Xₙ converges monotonically to a solution of (SP); this generalizes the corresponding results of Noor et al. for pseudo-monotone problems. Importantly, Theorem 4.1 shows that the extra-gradient step plays a crucial role in stabilizing (MD): not only does (OMD) converge in problems where (MD) provably fails, but this convergence is, in fact, monotonic. In other words, at each iteration, (OMD) comes closer to a solution of (SP), whereas (MD) may spiral outwards, ultimately converging to a limit cycle. This phenomenon is seen clearly in Fig. 1, and also in the detailed analysis of Appendix C. Of course, except for very special cases, the monotonic convergence of Xₙ cannot hold when the gradient input to (OMD) is imperfect: a single "bad" sample of ĝₙ would suffice to throw Xₙ off-track. In this case, we have:

Theorem 4.3. Suppose that (SP) is strictly coherent and (OMD) is run with a gradient oracle satisfying (3.6) and a variable step-size sequence γₙ such that Σ_{n=1}^∞ γₙ = ∞ and Σ_{n=1}^∞ γₙ² < ∞. Then, with probability 1, Xₙ converges to a solution of (SP).

It is worth noting here that the step-size policy in Theorem 4.3 is different from that of Theorem 4.1. This is due to a) the lack of randomness in Theorem 4.1 (which obviates the summability requirement Σ_{n=1}^∞ γₙ² < ∞ there); and b) the lack of a Lipschitz continuity assumption in Theorem 4.3 (which, in the case of Theorem 4.1, guarantees monotonic decrease at each step, provided the step-size is not too big). Importantly, the maximum allowable step-size is also controlled by the strong convexity modulus of h, suggesting that the choice of distance-generating function can be fine-tuned further to allow for more aggressive step-size policies, a key benefit of mirror descent methods.

Gaussian mixture models.
For the experimental validation of our theoretical results, we began by evaluating the extra-gradient add-on in a highly multi-modal mixture of 16 Gaussians arranged in a 4 × 4 grid, as in Metz et al. The generator and discriminator have 6 fully connected layers with 384 neurons and ReLU activations (plus an additional layer for data space projection), and the generator generates 2-dimensional vectors. The output after {4000, 8000, 12000, 16000, 20000} iterations is shown in FIG3. The networks were trained with RMSprop and Adam BID26, and the results are compared to the corresponding extra-gradient variant (for an explicit pseudocode representation in the case of Adam, see BID15 and Appendix E). Learning rates and hyperparameters were chosen by grid search so as to enable a fair comparison between each method and its look-ahead version. Overall, the different optimization strategies without look-ahead exhibit mode collapse or oscillations throughout the training period (we ran all models for at least 20000 iterations in order to evaluate the hopping behavior of the generator). In all cases, the extra-gradient add-on performs consistently better in learning the multi-modal distribution and greatly reduces occurrences of oscillatory behavior. Experiments with standard datasets. In our experiments with Gaussian mixture models (GMMs), the most promising training method was Adam with an extra-gradient step (a concrete pseudocode implementation is provided in Appendix E). Motivated by this, we trained a Wasserstein-GAN on the CelebA and CIFAR-10 datasets using Adam, both with and without an extra-gradient step. The architecture employed was a standard DCGAN; hyperparameters and network architecture details may be found in Appendix E. Subsequently, to quantify the gains of the extra-gradient step, we employed the widely used inception score and Fréchet distance metrics, for which we report results in FIG4. Under both metrics, the extra-gradient add-on provides consistently higher scores after an initial warm-up period (and is considerably more stable). For visualization purposes, we also present in FIG5 an ensemble of samples generated at the end of the training period. Overall, the generated samples provide accurate feature representation and low distortion (especially on CelebA). Our results suggest that the implementation of an optimistic, extra-gradient step is a flexible add-on that can be easily attached to a wide variety of GAN training methods (RMSProp, Adam, SGA, etc.), and provides noticeable gains in performance and stability. From a theoretical standpoint, the dichotomy between strict and null coherence provides a justification of why this is so: optimism eliminates cycles and, in so doing, stabilizes the method. We find this property particularly appealing because it paves the way to a local analysis with provable convergence guarantees in multi-modal settings, and beyond zero-sum games; we intend to examine this question in future work. We begin our discussion with some basic results on coherence:

Proposition A.1. If f is convex-concave, (SP) is coherent. In addition, if f is strictly convex-concave, (SP) is strictly coherent.

Proof. Let x* be a solution point of (SP). Since f is convex-concave, first-order optimality gives

⟨∇_{x₁} f(x₁*, x₂*), x₁ − x₁*⟩ ≥ 0 for all x₁ ∈ X₁, (A.1)

and

⟨∇_{x₂} f(x₁*, x₂*), x₂ − x₂*⟩ ≤ 0 for all x₂ ∈ X₂. (A.2)

Combining the two, we readily obtain the (Stampacchia) variational inequality ⟨g(x*), x − x*⟩ ≥ 0 for all x ∈ X. In addition to the above, the fact that f is convex-concave also implies that g(x) is monotone, in the sense that

⟨g(x) − g(x′), x − x′⟩ ≥ 0 (A.3)

for all x, x′ ∈ X.
Thus, setting x′ ← x* in (A.3) and invoking (SVI), we get

⟨g(x), x − x*⟩ ≥ ⟨g(x*), x − x*⟩ ≥ 0 for all x ∈ X,

i.e., (MVI) is satisfied. To establish the converse implication, focus for concreteness on the minimizer, and note that (MVI) implies that

⟨∇_{x₁} f(x₁*, x₂*), x₁ − x₁*⟩ ≥ 0 for all x₁ ∈ X₁. (A.5)

Now, if we fix some x₁ ∈ X₁ and consider the function φ(t) = f(x₁* + t(x₁ − x₁*), x₂*), the inequality (A.5) yields f(x₁, x₂*) ≥ f(x₁*, x₂*). The maximizing component follows similarly, showing that x* is a solution of (SP) and, in turn, establishing that (SP) is coherent. For the strict part of the claim, the same line of reasoning shows that if ⟨g(x), x − x*⟩ = 0 for some x that is not a saddle-point of f, the function φ(t) defined above must be constant on [0, 1], indicating in turn that f cannot be strictly convex-concave, a contradiction. We proceed to show that the solution set of a coherent saddle-point problem is closed (we will need this regularity in the convergence analysis of Appendix C):

Lemma A.2. Let X* denote the solution set of (SP). If (SP) is coherent, X* is closed.

Proof. Let x*ₙ, n = 1, 2,..., be a sequence of solutions of (SP) converging to some limit point x* ∈ X. To show that X* is closed, it suffices to show that x* ∈ X*. Indeed, given that (SP) is coherent, every solution thereof satisfies (MVI), so we have ⟨g(x), x − x*ₙ⟩ ≥ 0 for all x ∈ X. With x*ₙ → x* as n → ∞, it follows that

⟨g(x), x − x*⟩ ≥ 0 for all x ∈ X,

i.e., x* satisfies (MVI). By coherence, this implies that x* is a solution of (SP), as claimed. In this appendix, we provide some auxiliary results and estimates that are used throughout the convergence analysis of Appendix C. Some of the results we present here (or close variants thereof) are not new [see, e.g., BID25]. However, the hypotheses used to obtain them vary wildly in the literature, so we provide all the necessary details for completeness. To begin, recall that the Bregman divergence associated to a K-strongly convex distance-generating function h: X → ℝ is defined as

D(p, x) = h(p) − h(x) − ⟨∇h(x), p − x⟩, (B.1)

with ∇h(x) denoting a continuous selection of ∂h(x). The induced prox-mapping is then given by

P_x(y) = arg min_{x′ ∈ X} {⟨y, x − x′⟩ + D(x′, x)}, (B.2)

and is defined for all x ∈ dom ∂h, y ∈ Y (recall here that Y ≡ V* denotes the dual of the ambient vector space V). In what follows, we will also make frequent use of the convex conjugate

h*(y) = max_{x ∈ X} {⟨y, x⟩ − h(x)}. (B.3)

By standard results in convex analysis [, Chap. 26], h* is differentiable on Y and its gradient satisfies the identity

∇h*(y) = arg max_{x ∈ X} {⟨y, x⟩ − h(x)}. (B.4)

For notational convenience, we will also write Q(y) = ∇h*(y) (B.5), and we will refer to Q: Y → X as the mirror map generated by h. All these notions are related as follows. Lemma B.1. Let h be a distance-generating function on X. Then, for all x ∈ dom ∂h, y ∈ Y, we have: DISPLAYFORM4. Finally, if x = Q(y) and p ∈ X, we have DISPLAYFORM5. Remark. By (B.6b), we have ∂h(x⁺) ≠ ∅, i.e., x⁺ ∈ dom ∂h. As a result, the update rule x ← P_x(y) is well-posed, i.e., it can be iterated in perpetuity. For (B.7), by a simple continuity argument, it suffices to show that the inequality holds for interior p ∈ X°. To establish this, let DISPLAYFORM6. Since h is strongly convex and y ∈ ∂h(x) by (B.6a), it follows that φ(t) ≥ 0, with equality if and only if t = 0. Since ψ(t) = ⟨∇h(x + t(p − x)) − y, p − x⟩ is a continuous selection of subgradients of φ, and both φ and ψ are continuous on [0, 1], it follows that φ is continuously differentiable with φ′ = ψ on [0, 1]. Hence, with φ convex and φ(t) ≥ 0 = φ(0) for all t ∈ [0, 1], we conclude that φ′(0) = ⟨∇h(x) − y, p − x⟩ ≥ 0, which proves our assertion. We continue with some basic bounds on the Bregman divergence before and after a prox step.
The basic ingredient for these bounds is a generalization of the (Euclidean) law of cosines, which is known in the literature as the "three-point identity" BID13:

Lemma B.2. Let h be a distance-generating function on X. Then, for all p ∈ X and all x, x′ ∈ dom ∂h, we have

D(p, x′) = D(p, x) + D(x, x′) + ⟨∇h(x′) − ∇h(x), x − p⟩. (B.9)

Proof. By definition, we have: DISPLAYFORM8 (B.10). Our claim then follows by adding the last two lines and subtracting the first. With this identity at hand, we have the following series of upper and lower bounds. Proposition B.3. Let h be a K-strongly convex distance-generating function on X, fix some p ∈ X, and let x⁺ = P_x(y) for x ∈ dom ∂h, y ∈ Y. We then have: DISPLAYFORM9. Proof of (B.11a). By the strong convexity of h, we get DISPLAYFORM10. Proof of (B.11b) and (B.11c). By the three-point identity (B.9), we readily obtain DISPLAYFORM0. In turn, this gives DISPLAYFORM1, where, in the last step, we used (B.7) and the fact that x⁺ = P_x(y), so ∇h(x) + y ∈ ∂h(x⁺). The above is just (B.11b), so the first part of our proof is complete. For (B.11c), the bound (B.14) gives DISPLAYFORM2. Therefore, by Young's inequality, we get DISPLAYFORM3, and hence (B.17), with the last step following from Lemma B.1 applied to x in place of p. The first part of Proposition B.3 shows that Xₙ converges to p if D(p, Xₙ) → 0. However, as we mentioned in the main body of the paper, the converse may fail: in particular, we could have lim inf_{n→∞} D(p, Xₙ) > 0 even if Xₙ → p. To see this, let X be the L2 ball of ℝ^d and take h(x) = −√(1 − ‖x‖₂²). Then, a straightforward calculation gives DISPLAYFORM5 (B.19), which admits p as a solution for all c ≥ 0 (so p belongs to the closure of L_c(p) even though D(p, p) = 0 by definition). As a result, under this distance-generating function, it is possible to have Xₙ → p even when lim inf_{n→∞} D(p, Xₙ) > 0 (simply take a sequence Xₙ that converges to p while remaining on the same level set of D). As we discussed in the main body of the paper, such pathologies are discarded by the Bregman reciprocity condition

D(p, Xₙ) → 0 whenever Xₙ → p. (B.20)

This condition comes into play at the very last part of the proofs of Theorems 3.1 and 4.1; other than that, we will not need it in the rest of our analysis. Finally, for the analysis of the OMD algorithm, we will need to relate prox steps taken along different directions:

Proposition B.4. Let h be a K-strongly convex distance-generating function on X and fix some p ∈ X, x ∈ dom ∂h. Then:
a) For all y₁, y₂ ∈ Y, we have ‖P_x(y₁) − P_x(y₂)‖ ≤ (1/K) ‖y₁ − y₂‖_*, i.e., P_x is (1/K)-Lipschitz.
b) In addition, letting x₁⁺ = P_x(y₁) and x₂⁺ = P_x(y₂), we have: DISPLAYFORM7.

Proof. We begin with the proof of the Lipschitz property of P_x. Indeed, for all p ∈ X, (B.7) gives DISPLAYFORM8, and rearranging, we obtain DISPLAYFORM9 (B.24). By the strong convexity of h, we also have DISPLAYFORM10 (B.25). Hence, combining (B.24) and (B.25), we get DISPLAYFORM11 (B.26), and our assertion follows. For the second part of our claim, the bound (B.11b) of Proposition B.3 applied to x₂⁺ = P_x(y₂) readily gives DISPLAYFORM12 (B.27), thus proving (B.22a). To complete our proof, note that (B.11b) with DISPLAYFORM13, where we used Young's inequality and (B.11a) in the second inequality. The bound (B.22b) then follows by substituting (B.30) in (B.27). We begin by recalling the definition of the mirror descent algorithm.
With notation as in the previous section, the algorithm is defined via the recursive scheme

X_{n+1} = P_{X_n}(−γₙ ĝₙ), (MD)

where γₙ is a variable step-size sequence and ĝₙ is the calculated value of the gradient vector g(Xₙ) at the n-th stage of the algorithm. As we discussed in the main body of the paper, the gradient input sequence ĝₙ of (MD) is assumed to satisfy the standard oracle assumptions

a) Unbiasedness: E[ĝₙ | Fₙ] = g(Xₙ);
b) Finite mean squared error: E[‖ĝₙ − g(Xₙ)‖²_* | Fₙ] ≤ σ²,

where Fₙ represents the history (natural filtration) of the generating sequence Xₙ up to stage n (inclusive). With these preliminaries at hand, our convergence proof for (MD) under strict coherence will hinge on the following results:

Proposition C.1. Suppose that (SP) is coherent and (MD) is run with a gradient oracle satisfying (3.6) and a variable step-size γₙ such that Σ_{n=1}^∞ γₙ = ∞ and Σ_{n=1}^∞ γₙ² < ∞. Then, with probability 1, D(x*, Xₙ) converges to some finite random variable D(x*), where x* is a solution of (SP) arising as a limit point of Xₙ.

Proposition C.2. Suppose that (SP) is strictly coherent and (MD) is run with a gradient oracle satisfying (3.6) and a step-size γₙ such that Σ_{n=1}^∞ γₙ = ∞ and Σ_{n=1}^∞ γₙ² < ∞. Then, with probability 1, there exists a (possibly random) solution x* of (SP) such that lim inf_{n→∞} D(x*, Xₙ) = 0.

Proposition C.1 can be seen as a "dichotomy" result: it shows that the Bregman divergence is an asymptotic constant of motion, so (MD) either converges to a saddle-point x* (if D(x*) = 0) or to some nonzero level set of the Bregman divergence (with respect to x*). In this way, Proposition C.1 rules out more complicated chaotic or aperiodic behaviors that may arise in general, for instance, as in the analysis of Palaiopanos et al. of the long-run behavior of the multiplicative weights algorithm in two-player games. However, unless this limit value can be somehow predicted (or estimated) in advance, this result cannot be easily applied. This is the main role of Proposition C.2: it shows that (MD) admits a subsequence converging to a solution of (SP), so, by (B.20), the limit of D(x*, Xₙ) must be zero. Our first step is to prove Proposition C.2. To do this, we first recall the following law of large numbers for L² martingales:

Theorem (Theorem 2.18). Let Yₙ = Σ_{k=1}^n ζₖ be a martingale and τₙ a nondecreasing sequence such that lim_{n→∞} τₙ = ∞. Then Yₙ/τₙ → 0 with probability 1 on the set {Σ_{k=1}^∞ E[ζₖ² | F_{k−1}]/τₖ² < ∞}.

Proof of Proposition C.2. We begin with the technical observation that the solution set X* of (SP) is closed, and hence compact (cf. Lemma A.2 in Appendix A). Clearly, if X* = X, there is nothing to show; hence, without loss of generality, we may assume in what follows that X* ≠ X. Assume now ad absurdum that, with positive probability, the sequence Xₙ generated by (MD) admits no limit points in X*. Conditioning on this event, and given that X* is compact, there exists a (nonempty) compact set C ⊂ X such that C ∩ X* = ∅ and Xₙ ∈ C for all sufficiently large n. Moreover, letting p be as in Definition 2.1, we have ⟨g(x), x − p⟩ > 0 whenever x ∈ C. Therefore, by the continuity of g and the compactness of X* and C, there exists some a > 0 such that

⟨g(x), x − p⟩ ≥ a for all x ∈ C. (C.2)
Hence, telescoping (C.3) yields the estimate DISPLAYFORM2 (C.4). Subsequently, letting τₙ = Σ_{k=1}^n γₖ and using (C.2), we obtain DISPLAYFORM3. By the unbiasedness hypothesis of (3.6) for Uₙ, we have E[ξ_{n+1} | Fₙ] = 0, so Σₖ γₖ ξ_{k+1} is a martingale. Moreover, since Uₙ is bounded in L² and γₙ is square-summable (by assumption), it follows that DISPLAYFORM5. Therefore, by the law of large numbers for L² martingales stated above [Theorem 2.18], we conclude that τₙ⁻¹ Σ_{k=1}^n γₖ ξ_{k+1} converges to 0 with probability 1. Finally, for the last term of (C.4), let DISPLAYFORM6; i.e., Sₙ is a submartingale with respect to Fₙ. Furthermore, by the law of total expectation, we also have DISPLAYFORM7. Hence, by Doob's submartingale convergence theorem [Theorem 2.5], we conclude that Sₙ converges to some (almost surely finite) random variable S∞ with E[S∞] < ∞, implying in turn that lim_{n→∞} S_{n+1}/τₙ = 0 (a.s.). Applying all of the above, the estimate (C.4) gives D_{n+1} ≤ D₁ − aτₙ/2 for sufficiently large n, so D(p, Xₙ) → −∞, a contradiction. Going back to our original assumption, this shows that at least one of the limit points of Xₙ must lie in X* (a.s.), as claimed. We now turn to the proof of Proposition C.1.

Proof of Proposition C.1. Let x* ∈ X* be a limit point of Xₙ, as guaranteed by Proposition C.2, and let Dₙ = D(x*, Xₙ). Then, by Proposition B.3, we have DISPLAYFORM8, and hence, for large enough n: DISPLAYFORM9, where we used the ansatz that ⟨g(Xₙ), Xₙ − x*⟩ ≥ 0 for sufficiently large n (so that the corresponding drift term is non-positive; this is proved below), and, as in the proof of Proposition C.2, we set U_{n+1} = ĝₙ − g(Xₙ) and ξ_{n+1} = −⟨U_{n+1}, Xₙ − x*⟩. Thus, conditioning on Fₙ and taking expectations, we get DISPLAYFORM10, where we used the oracle assumptions (3.6) and the fact that Xₙ is Fₙ-measurable (by definition). DISPLAYFORM11; i.e., Rₙ is an Fₙ-adapted supermartingale. Since DISPLAYFORM12, Rₙ is uniformly bounded in L¹. Thus, by Doob's convergence theorem for supermartingales [Theorem 2.5], it follows that Rₙ converges (a.s.) to some finite random variable R∞ with E[R∞] < ∞. In turn, by inverting the definition of Rₙ, this shows that Dₙ converges (a.s.) to some random variable D(x*) with E[D(x*)] < ∞, as claimed. It remains to be shown that ⟨g(Xₙ), Xₙ − x*⟩ ≥ 0 for sufficiently large n. By Definition 2.1, this amounts to showing that, for all large enough n, Xₙ lies in a neighborhood U of x* such that (MVI) holds. Since x* has been chosen so that lim inf D(x*, Xₙ) = 0, it follows that, for all ε > 0, there exists some n₀ such that Σ_{n=n₀}^∞ γₙ² < ε and X_{n₀} ∈ U. Hence, arguing in the same way as in the proof of Theorem 5.2 of Zhou et al. [2017a], we conclude that P(Xₙ ∈ U for all n ≥ n₀) = 1, implying in turn that ⟨g(Xₙ), Xₙ − x*⟩ ≥ 0 for all n ≥ n₀. This proves our last claim and concludes our proof. With all this at hand, we are finally in a position to prove our main result for (MD).

Proof of Theorem 3.1(a). Proposition C.2 shows that, with probability 1, there exists a (possibly random) solution x* of (SP) such that lim inf_{n→∞} ‖Xₙ − x*‖ = 0 and, hence, lim inf_{n→∞} D(x*, Xₙ) = 0 (by Bregman reciprocity). Since lim_{n→∞} D(x*, Xₙ) exists with probability 1 (by Proposition C.1), it follows that lim_{n→∞} D(x*, Xₙ) = lim inf_{n→∞} D(x*, Xₙ) = 0, i.e., Xₙ converges to x*. We proceed with the negative result hinted at in the main body of the paper, namely the failure of (MD) to converge under null coherence:

Proof of Theorem 3.1(b).
The evolution of the Bregman divergence under (MD) satisfies the identity DISPLAYFORM13, where, in the last line, we used the null coherence assumption ⟨g(x), x − x*⟩ = 0 for all x ∈ X. Since D(Xₙ, X_{n+1}) ≥ 0, taking expectations above shows that E[D(x*, Xₙ)] is nondecreasing, as claimed. With Theorem 3.1 at hand, the proof of Corollary 3.2 is an immediate consequence of the fact that strictly convex-concave problems satisfy strict coherence (Proposition A.1). As for Corollary 3.3, we provide below a more general result for two-player, zero-sum finite games. To state it, let Aᵢ = {1, ..., Aᵢ}, i = 1, 2, be two finite sets of pure strategies, and let Xᵢ = Δ(Aᵢ) denote the set of mixed strategies of player i. A finite, two-player zero-sum game is then defined by a matrix M ∈ ℝ^{A₁×A₂}, so that the loss of Player 1 and the reward of Player 2 in the mixed strategy profile x = (x₁, x₂) ∈ X are concurrently given by

f(x₁, x₂) = x₁ᵀ M x₂.

Then, writing Γ ≡ Γ(A₁, A₂, M) for the resulting game, we have:

Proposition C.3. Let Γ be a two-player zero-sum game with an interior Nash equilibrium x*. If X₁ ≠ x* and (MD) is run with exact gradient input (σ = 0), then lim_{n→∞} D(x*, Xₙ) > 0.

Remark. Note that non-convergence does not require any summability assumptions on γₙ. In words, Proposition C.3 states that (MD) does not converge in finite zero-sum games with a unique interior equilibrium and exact gradient input: instead, Xₙ cycles at positive Bregman distance from the game's Nash equilibrium. Heuristically, the reason for this behavior is that, for small γ → 0, the incremental step V_γ(x) = P_x(−γ g(x)) − x of (MD) is essentially tangent to the level set of D(x*, ·) that passes through x. For finite γ > 0, things are even worse, because V_γ(x) points noticeably away from x, i.e., towards higher level sets of D. As a result, the "best-case scenario" for (MD) is to orbit x* (when γ → 0); in practice, for finite γ, the algorithm takes small outward steps throughout its runtime, eventually converging to some limit cycle farther away from x*. We make this intuition precise below (for a schematic illustration, see also Fig. 1).

Proof of Proposition C.3. Write v₁(x) = −Mx₂ and v₂(x) = x₁ᵀM for the players' payoff vectors under the mixed strategy profile x = (x₁, x₂). By construction, we have g(x) = −(v₁(x), v₂(x)). Furthermore, since x* is an interior equilibrium of f, elementary game-theoretic considerations show that v₁(x*) and v₂(x*) are both proportional to the constant vector of ones. We thus get

⟨g(x), x − x*⟩ = ⟨v₁(x), x₁ − x₁*⟩ + ⟨v₂(x), x₂ − x₂*⟩ = −x₁ᵀMx₂ + (x₁*)ᵀMx₂ + x₁ᵀMx₂ − x₁ᵀMx₂* = 0, (C.16)

where, in the last line, we used the fact that x* is interior. This shows that f satisfies null coherence, so our claim follows from Theorem 3.1(b). For our second claim, arguing as above and using (B.11c), we get D(x*, X_{n+1}) ≤ D(x*, Xₙ) + γₙ⟨g(Xₙ), Xₙ − x*⟩ + γₙ² (⋯), where we used the fact that g is L-Lipschitz and that p is a solution of (SP) such that (MVI) holds for all x ∈ X. We are now finally in a position to prove Theorem 4.1 (reproduced below for convenience):

Theorem. Suppose that (SP) is coherent and g is L-Lipschitz continuous. If (OMD) is run with exact gradient input (σ = 0) and a step-size sequence γₙ such that 0 < infₙ γₙ ≤ supₙ γₙ < K/L, (D.3) the sequence Xₙ converges monotonically to a solution x* of (SP), i.e., D(x*, Xₙ) is non-increasing and converges to 0.

Proof.
Let p be a solution of (SP) such that (MVI) holds for all x ∈ X (that such a solution exists is a consequence of Definition 2.1). Then, by the stated assumptions for γₙ, Lemma D.1 yields DISPLAYFORM0, where α ∈ (0, 1) is such that γₙ² < αK/L for all n (that such an α exists is a consequence of the assumption that supₙ γₙ < K/L). Now, telescoping (D.1), we obtain DISPLAYFORM1 (D.5), and hence: DISPLAYFORM2. With supₙ γₙ < K/L, the above estimate readily yields Σ_{n=1}^∞ ‖X_{n+1/2} − Xₙ‖² < ∞, which in turn implies that ‖X_{n+1/2} − Xₙ‖ → 0 as n → ∞. By the compactness of X, we further infer that Xₙ admits an accumulation point x*.

In this section we present the results of our image experiments using OMD training techniques. Inception and FID scores obtained by our model during training were reported in FIG4: as can be seen there, the extra-gradient add-on improves the performance of GAN training and efficiently stabilizes the model; without the extra-gradient step, performance tends to drop noticeably after approximately 100k steps. For ease of comparison, we provide below a collection of samples generated by Adam and optimistic Adam on the CelebA and CIFAR-10 datasets. Especially in the case of CelebA, the generated samples are consistently more representative and faithful to the target data distribution. For the reproducibility of our experiments, we provide in TAB2 the network architectures and the hyperparameters of the GANs that we used. The architecture employed is a standard DCGAN architecture with a 5-layer generator with batchnorm, and an 8-layer discriminator. The generated samples were 32×32×3 RGB images.
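Appendix E refers to a pseudocode representation of Adam with an extra-gradient step. The following PyTorch fragment is our own minimal rendering of that pattern, not the authors' exact pseudocode: it takes a look-ahead optimizer step, re-evaluates the gradient there, and applies the update from the saved base point. For simplicity it lets the optimizer's internal moment estimates advance on both steps, which a careful implementation would handle separately.

```python
# Minimal sketch of an extra-gradient wrapper around an existing optimizer
# (e.g., Adam). loss_fn(model) is an assumed closure returning a scalar loss.
import copy
import torch

def extragradient_update(model, optimizer, loss_fn):
    base = copy.deepcopy(model.state_dict())   # remember the base point x
    optimizer.zero_grad()
    loss_fn(model).backward()
    optimizer.step()                           # look-ahead ("waiting") state
    optimizer.zero_grad()
    loss_fn(model).backward()                  # gradient at the waiting state
    grads = [p.grad.clone() for p in model.parameters()]
    model.load_state_dict(base)                # return to the base point
    for p, g in zip(model.parameters(), grads):
        p.grad = g
    optimizer.step()                           # apply the extra-gradient step
```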
We show how the inclusion of an extra-gradient step in first-order GAN training methods can improve stability and lead to improved convergence results.
1,450
scitldr
Batch normalization (BN) is often used in an attempt to stabilize and accelerate training in deep neural networks. In many cases it indeed decreases the number of parameter updates required to achieve low training error. However, it also reduces robustness to small adversarial input perturbations and common corruptions by double-digit percentages, as we show on five standard datasets. Furthermore, we find that substituting weight decay for BN is sufficient to nullify a relationship between adversarial vulnerability and the input dimension. A recent mean-field analysis found that BN induces gradient explosion when used on multiple layers, but this cannot fully explain the vulnerability we observe, given that it occurs already for a single BN layer. We argue that the actual cause is the tilting of the decision boundary with respect to the nearest-centroid classifier along input dimensions of low variance. As a result, the constant introduced for numerical stability in the BN step acts as an important hyperparameter that can be tuned to recover some robustness at the cost of standard test accuracy. We explain this mechanism explicitly on a linear "toy" model and show in experiments that it still holds for nonlinear "real-world" models. BN is a standard component of modern deep neural networks, and tends to make the training process less sensitive to the choice of hyperparameters in many cases. While ease of training is desirable for model developers, an important concern among stakeholders is that of model robustness during deployment to plausible, previously unseen inputs. The adversarial examples phenomenon has exposed unstable predictions across state-of-the-art models. This has led to a variety of methods that aim to improve robustness, but doing so effectively remains a challenge. We believe that a prerequisite to developing methods that increase robustness is an understanding of factors that reduce it. Approaches for improving robustness often begin with existing neural network architectures (which use BN) and patch them against specific attacks, e.g., through inclusion of adversarial examples during training. An implicit assumption is that BN itself does not reduce robustness; however, recent initialization-time analyses have shown that it causes exploding gradients and increased sensitivity to input perturbations as the network depth increases. In this work, we consider the impact of BN in practical scenarios in terms of robustness to common corruptions and adversarial examples, finding that BN-induced sensitivity remains a concern even in cases where its use appears benign on the basis of clean test accuracy, and when only one BN layer is used. The frequently made observation that adversarial vulnerability can scale with the input dimension highlights the importance of identifying regularizers as more than merely a way to improve test accuracy. In particular, BN was a confounding factor in prior work, making the results of their initialization-time analysis hold after training. By adding ℓ2 regularization and removing BN, we show that there is no inherent relationship between adversarial vulnerability and the input dimension. We briefly review how BN modifies the hidden layers' pre-activations h of a neural network. We use the notation of prior work, where α is an index for units in a layer l, and i for a mini-batch of B samples from the dataset; Nˡ denotes the number of units in layer l, Wˡ is the matrix of weights and bˡ is the vector of biases that parametrize layer l.
The batch mean is defined as μ_α = (1/B) Σᵢ h_{αi}, and the variance is σ_α² = (1/B) Σᵢ (h_{αi} − μ_α)². In the BN procedure, the mean μ_α is subtracted from the pre-activation of each unit hˡ_{αi}, the result is divided by √(σ_α² + c), where the small constant c prevents division by zero, and this is then scaled and shifted by the learned parameters γ_α and β_α, respectively. This is described in equation 1, where a per-unit nonlinearity φ, e.g., ReLU, is applied after the normalization:

h^{l+1}_{αi} = φ( γ_α (hˡ_{αi} − μ_α)/√(σ_α² + c) + β_α ). (1)

This procedure introduces complications, however. Consider two mini-batches that differ by only a single example: due to the induced batch-wise nonlinearity, they will have different representations of all examples. These differences are amplified by stacking BN layers, and were shown to cause exploding gradients at initialization. Conversely, normalization of intermediate representations of two different training inputs impairs the ability to distinguish definite examples, which ought to be classified with a large prediction margin (as judged by an "oracle"), from more ambiguous instances. The last layer of a discriminative neural network, in particular, is typically a linear decoding of class label-homogeneous clusters, and thus makes use of information contained in the mean and variance at this stage for classification. In light of these observations, we begin our analysis by adding a single BN layer to models trained by gradient descent (GD). This is the most favorable scenario according to the analysis of prior work, where more layers and a smaller mini-batch size exacerbate the exploding gradients. Prior work relates the adversarial vulnerability of linear classifiers to the tilting angle θ of the decision boundary w.r.t. the nearest-centroid classifier. Following that setup, we examine how BN affects this angle in a simple linear model, and then show that increasing model complexity cannot "undo" this vulnerability.

Figure 1: Toy problem with one task-relevant dimension (α = 1) and one task-irrelevant dimension (α = 2). Normalization aligns the decision boundary with the Bayes solution (indicated by arrows in "BNGD"), but this minimizes the average distance between the points and the boundary, maximizing adversarial vulnerability. Compared with the decision boundary of a linear model (θ ≈ 0°), the batch-normalized model has θ = 66.7°. On the right is the dataset seen by the BNGD classifier. We use Σ₁₁ = 1, Σ₂₂ = 0.01, Σ₁₂ = Σ₂₁ = 0.05, ν₀ = [−5, 0], and ν₁ = [5, 0].

Consider the binary classification task of identifying two different types of input x, subject to Gaussian noise, with a linear classifier wᵀx + b. This can be modeled by the class-conditional distribution p(x|y = j) = N(νⱼ, Σ) with label y ~ Ber(0.5). The Bayes-optimal solution to this problem is given by the weight vector w = Σ⁻¹(ν₀ − ν₁) together with a bias b determined by the class means and the marginal probabilities p(y), while the nearest-centroid classifier is defined by the direction w* = ν₀ − ν₁. We analyze the effect of batch-normalizing the input to the classifier for this problem (i.e., h_{αi} = x_{αi}), first in the simplest setting where γ_α = 1, β_α = 0 ∀α. We select the class distribution means νⱼ to be symmetric around zero, so that the batch mean computed by BN is μ_α = 0 ∀α. The batch-normalized linear classifier is thus defined as: f(x) = (wᵀx + b)/√(σ² + c). By construction of our synthetic dataset, the variance of the batch can be deduced from the data.
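To make the tilting mechanism concrete, the following NumPy sketch fits a linear logistic classifier by plain gradient descent on batch-normalized inputs and measures θ against the nearest-centroid direction. The training details (logistic loss, learning rate, iteration count) are our own simplifications of the setup above, and ν₁ = [5, 0] is our assumption based on the stated symmetry of the means.

```python
# Minimal sketch of the toy boundary-tilting analysis. Parameter values
# follow Figure 1; the training procedure here is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)
nu0, nu1 = np.array([-5.0, 0.0]), np.array([5.0, 0.0])  # nu1 assumed symmetric
Sigma = np.array([[1.0, 0.05], [0.05, 0.01]])
c = 1e-5  # numerical stability constant

x = np.vstack([rng.multivariate_normal(nu0, Sigma, 500),
               rng.multivariate_normal(nu1, Sigma, 500)])
y = np.r_[np.zeros(500), np.ones(500)]

mu, sigma2 = x.mean(axis=0), x.var(axis=0)
x_bn = (x - mu) / np.sqrt(sigma2 + c)        # batch-normalized inputs

w, b = np.zeros(2), 0.0
for _ in range(1000):                        # logistic regression by plain GD
    p = 1 / (1 + np.exp(-(x_bn @ w + b)))
    w -= 0.1 * x_bn.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

w_eff = w / np.sqrt(sigma2 + c)              # effective input-space weights
w_star = nu0 - nu1                           # nearest-centroid direction
cos = w_eff @ w_star / (np.linalg.norm(w_eff) * np.linalg.norm(w_star))
print(np.degrees(np.arccos(abs(cos))))       # tilting angle theta (degrees)
```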
Prior work on boundary tilting relates the adversarial vulnerability of linear classifiers to the tilting angle θ of the decision boundary w.r.t. the nearest-centroid classifier. Following that setup, we examine how BN affects this angle in a simple linear model, and then show that increasing model complexity cannot "undo" this vulnerability.

Figure 1: A linear toy problem with one task-relevant dimension (α = 1) and one task-irrelevant dimension (α = 2). Normalization aligns the decision boundary with the Bayes solution (indicated by arrows in "BNGD"), but this minimizes the averaged distance between the points and the boundary, maximizing adversarial vulnerability. Compared with the decision boundary of a linear model (θ ≈ 0°), the batch-normalized model has θ = 66.7°. On the right is the dataset seen by the BNGD classifier. We use Σ₁₁ = 1, Σ₂₂ = 0.01, Σ₁₂ = Σ₂₁ = 0.05, ν₀ = [−5, 0], and ν₁ = [5, 0].

Consider the binary classification task of identifying two different types of input x, subject to Gaussian noise, with a linear classifier w⊤x + b. This can be modeled by the class-conditional distribution p(x|y = j) = N(ν_j, Σ) with label y ∼ Ber(0.5). The Bayes-optimal solution to this problem is given by the weight vector w = Σ⁻¹(ν₀ − ν₁) and bias b = −½ (ν₀ + ν₁)⊤ Σ⁻¹ (ν₀ − ν₁) + log( p(y = 0) / p(y = 1) ), where p(y) denotes the marginal probability for the label y, while the nearest-centroid classifier is defined by w* = ν₀ − ν₁.

We analyze the effect of batch-normalizing the input to the classifier for this problem (i.e., h_αi = x_αi), first in the simplest setting where γ_α = 1 and β_α = 0 ∀α. We select the class-distribution means ν_j to be symmetric around zero, so that the batch mean computed by BN is µ_α = 0 ∀α. The batch-normalized linear classifier is thus defined as f(x) = Σ_α w_α x_α / √(σ²_α + c) + b. By construction of our synthetic dataset, the variance of the batch can be deduced from the data distribution.

Table 1: As predicted by the theory, batch-normalized gradient descent (BNGD) yields a tilted decision boundary w.r.t. the nearest-centroid classifier, regardless of the affine parameters being learned or fixed. We report the tilting angle (θ) and accuracies of linear models trained on MNIST 3 vs. 7 for vanilla GD, GD with ℓ2 weight decay "WD" (λ = 0.1), and BNGD. Affine = "F" indicates γ = 1 and β = 0, whereas "T" means they are randomly initialized and learnable. AWGN denotes additive white Gaussian noise; FGSM is used with ε = 1/10. Entries are the mean and its standard error over five random seeds.

The tilting angle θ of the batch-normalized decision boundary w.r.t. the one given by w* (note that the boundary is perpendicular to w) is therefore approximately equal to the angle between the datasets before and after normalization. To compute θ, we divide the weights w element-wise by √(σ²_α + c), then normalize to ŵ = w/‖w‖₂, such that θ = cos⁻¹(ŵ⊤ŵ*). From this analysis it follows that the order of magnitude of c is important relative to the data variance: if c > σ²_α, then the effective weight value w_α is reduced, and if c < σ²_α and σ²_α is small, then w_α increases greatly, causing boundary tilting along direction α.

We depict simulations of the toy model in Figure 1. We use constant-learning-rate GD, which is known to converge to the max-margin solution (equivalent to the nearest-centroid classifier in this case) for linear models on separable data. Batch-normalized GD (BNGD) converges for arbitrary learning rates for linear models; we use a value of 0.1 for 1000 epochs.

Next, we train linear models on the MNIST 3 vs. 7 dataset with 5000 training samples (drawn uniformly per class) using a learning rate of 0.1 for 50 epochs. We compute the angle θ w.r.t. the nearest-centroid classifier, which is obtained by subtracting the "average 3" from the "average 7" of the full training set. Although this may seem like a crude reference point, the nearest-centroid classifier is much more robust than a conventionally trained linear model, achieving 40% accuracy under the fast gradient sign method (FGSM) at ε = 1/4 vs. ≈ 0%. Results consistent with the boundary tilting theory are shown in Table 1, which not only shows that BN causes tilting, but that this is unaffected by the parameters γ and β. Post-normalization, there is no signal to γ and β about the variances of the original dataset. This is consistent with other works that observe γ and β do not influence the studied effect.

Increasing the numerical stability constant c increases robustness, in terms of absolute test accuracy for additive white Gaussian noise (AWGN), on the MNIST and CIFAR-10 datasets by 33% and 41% respectively (at the cost of standard accuracy). We defer the details of this experiment to Appendix A. This loss of accuracy, and the effect of c, are consistent with the remark that under BN, directions of high signal variance are dampened, while directions of low signal variance are amplified. This preferential exploration of low-signal directions reduces the signal-to-noise ratio and increases sensitivity w.r.t. the input. Increasing c reduces the sensitivity along "low signal directions".
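The boundary-tilting measurement above can be sketched as follows (our own toy illustration under the assumptions of the linear model, with hypothetical variable names):

import numpy as np

def tilt_angle(w, var, w_star, c=1e-5):
    # Effective BN weights: divide element-wise by sqrt(variance + c),
    # then measure the angle against the nearest-centroid direction w_star.
    w_eff = w / np.sqrt(var + c)
    w_eff = w_eff / np.linalg.norm(w_eff)
    w_ref = w_star / np.linalg.norm(w_star)
    return np.degrees(np.arccos(np.clip(w_eff @ w_ref, -1.0, 1.0)))

# Low variance in the task-irrelevant dimension inflates its effective weight:
w = np.array([1.0, 0.05])      # weights learned on the raw data
var = np.array([1.0, 0.01])    # per-dimension batch variance
w_star = np.array([1.0, 0.0])  # nu_0 - nu_1
print(tilt_angle(w, var, w_star))  # noticeably tilted away from w_star

Raising c toward the larger variance shrinks the amplification of the low-variance dimension, which is exactly the trade-off swept in Appendix A.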
For the main practical results on MNIST, SVHN, CIFAR-10, and ImageNet, we evaluate the robustness, measured as a drop in test accuracy under various input perturbations, of convolutional networks with and without BN. As a white-box adversarial attack we use projected gradient descent (PGD) in ℓ∞- and ℓ2-norm variants, for its simplicity and ability to degrade performance with little change to the input. The PGD implementation details are provided in Appendix B. We report the test accuracy for additive Gaussian noise of zero mean and variance 1/16, denoted as "Noise", as well as results on the CIFAR-10-C common corruption benchmark. We found these methods were sufficient to demonstrate a considerable disparity in robustness due to BN, but this is not intended as a complete security evaluation. Standard meta-parameters from the literature were used to train models with and without BN from scratch over several random seeds. All uncertainties are the standard error of the mean accuracy.

Table 2: Accuracy (%) on CIFAR-10 test variants for models trained with a fixed learning rate. Columns: clean test set; additive Gaussian noise; PGD-ℓ∞; PGD-ℓ2; CIFAR-10.1 clean; CIFAR-10.1 with noise.
VGG          87.9 ± 0.1   79 ± 1   52.9 ± 0.6   65.6 ± 0.3   75.3 ± 0.2   66 ± 1
VGG (BN)     88.7 ± 0.1   73 ± 1   35.7 ± 0.3   59.7 ± 0.3   77.3 ± 0.2   60 ± 2
WRN (Fixup)  94.6 ± 0.1   69 ± 1   20.3 ± 0.3   9.4 ± 0.2    87.5 ± 0.3   68 ± 1
WRN (BN)     95.9 ± 0.1   58 ± 2   14.9 ± 0.6   8.3 ± 0.3    89.6 ± 0.2   58 ± 1

For SVHN and CIFAR-10, we examine two different learning rate schemes, given that it has been suggested that one of the primary mechanisms of BN is to facilitate training with a larger learning rate:

1. A fixed "small" learning rate of 1e-2 (SVHN, CIFAR-10).
2. An initial "large" learning rate of 1e-1, with subsequent drops by a factor of ten (CIFAR-10).

In the SVHN experiment, VGG variants are trained using five random seeds. BN increased clean test accuracy by 1.86 ± 0.05%, but reduced test accuracy for additive noise by 5.5 ± 0.6%, for PGD-ℓ∞ by 17 ± 1%, and for PGD-ℓ2 by 20 ± 1%. We defer the full meta-parameters and results to Appendix E.

For the CIFAR-10 experiments, we trained models with a similar procedure as for SVHN, but with random 32 × 32 crops using four-pixel padding, and horizontal flips. We evaluate two families of contemporary models: one without skip connections (VGG), and a WideResNet (WRN) using "Fixup" initialization to reduce the use of BN. Results of the first experiment are summarized in Table 2. In this case, inclusion of BN for VGG reduces the clean generalization gap (difference between training and test accuracy) by 1.1 ± 0.2%. For additive noise, test accuracy drops by 6 ± 1%, and for PGD perturbations by 17.3 ± 0.7% and 5.9 ± 0.4% for the ℓ∞ and ℓ2 variants, respectively (deeper variants are omitted from Table 2 for brevity). In the second, "large" learning rate experiment, summarized in Table 3, none of the deeper batch-normalized models recovers the robustness of the most shallow, or same-depth, equivalents, nor does the higher learning rate recover the performance of the baselines. Results for deeper models are in Appendix E. Other learning rate schedules, and robustness vs. training epoch curves, are in Appendix K.

Next, we evaluate robustness on the common corruption benchmark comprising 19 types of real-world distortions from four high-level categories: "noise", "blur", "weather", and "digital" effects. Each corruption has five "severity" or intensity levels. We report the mean error on the corrupted test set (mCE) by averaging over all intensity levels and corruptions. We summarize the results for two VGG variants and a WideResNet on CIFAR-10-C, trained from scratch on the default training set for three and five random seeds, respectively. Accuracies for the "noise" corruptions, which cause the largest difference in accuracy with BN, are outlined in Table 4. The key takeaway is: for all models tested, the batch-normalized variant has a higher error rate for all corruptions of the "noise" category, at every intensity level. Averaging over all 19 corruptions, BN increases mCE by 1.9 ± 0.9% for VGG8, 2.0 ± 0.3% for VGG13, and 1.6 ± 0.4% for WRN.
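The "Noise" entries come from a simple perturbed test loop; a sketch (assuming a standard PyTorch model and data loader, with variance 1/16 as above):

import torch

def noise_accuracy(model, loader, var=1./16, device="cuda"):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x = x + (var ** 0.5) * torch.randn_like(x)  # zero-mean Gaussian noise
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return 100.0 * correct / total

The mCE figures are then simply 100 minus the analogous corrupted-set accuracy, averaged over all corruptions and severity levels.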
There is a large disparity in accuracy when modulating BN for the different corruption categories; we examine these in more detail in Appendix G. Interestingly, some corruptions that led to a positive gap for VGG8 show a negative gap for the WRN, i.e., BN improved accuracy by: Contrast 4.9 ± 1.1%, Snow 2.8 ± 0.4%, Spatter 2.3 ± 0.8%. These are the same corruptions for which VGG13 loses, or does not improve, its robustness when BN is removed. We suspect accuracy for these corruptions correlates with standard test accuracy, which is highest for the WRN. Visually, these corruptions appear to preserve texture information. Conversely, noise is applied in a spatially global way that disproportionately degrades these textures, emphasizing shapes and edges. It is now known that modern CNNs trained on standard image datasets have a propensity to rely heavily on texture, in addition to shape and edge cues, for object recognition.

Figure 2: The dashed line is the theoretical maximum trainable depth of batch-normalized networks as a function of the batch size. We report the clean test accuracy, and that for additive Gaussian noise and BIM perturbations. The batch-normalized models were trained for 10 epochs, while the unnormalized ones were trained for 40 epochs, as they took longer to converge. The 40-epoch batch-normalized plot was qualitatively similar, with dark blue bands for BIM for shallow and deep variants. The dark blue patch for 55- and 60-layer unnormalized models at large batch sizes depicts a total failure to train. These networks were trainable by reducing η, but for consistency we keep η the same in both cases.

We evaluate pre-trained bag-of-local-feature models (BagNets) on ImageNet, with an architecture that discards spatial information between patches and is thus considered to make extensive use of texture patterns for classification. For patch sizes {9, 17, 33}, the top-5 accuracies of the BagNets are reduced to just 1.25%, 5.09%, and 14.62% for AWGN, respectively. Compared with Table 5, where all models obtain over 40%, these figures suggest that robustness to Gaussian noise is a good proxy for the use of texture in ImageNet classification. These results support the hypothesis that BN may be exacerbating this tendency to use superficial texture-like information.

Next, we evaluate the robustness of pre-trained ImageNet models from the torchvision.models repository, which conveniently provides models with and without BN. Results are shown in Table 5, where BN improves top-5 accuracy on noise in some cases, but consistently reduces it by 8.54% to 11.00% (absolute) for PGD. The trends are the same for top-1 accuracy; only the absolute values are smaller, with the degradation varying from 2.38% to 4.17%. Given the discrepancy between the noise and PGD results for ImageNet, we include a black-box transfer analysis in Appendix E.2 that is consistent with the white-box analysis.

Finally, we explore the role of batch size and depth in Figure 2. We find that BN limits the maximum trainable depth, which increases with the batch size but quickly plateaus, as predicted by Theorem 3.10 of the mean-field analysis discussed above. Robustness decreases with the batch size for depths that maintain a reasonable test accuracy, at around 25 or fewer layers. This tension between clean accuracy and robustness as a function of the batch size is not observed in unnormalized networks.
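For the ImageNet comparison above, torchvision exposes matched pairs of architectures; a sketch of how such a pair can be loaded and evaluated under the same protocol (reusing the noise_accuracy helper sketched earlier; the validation loader is assumed given):

import torchvision.models as models

vgg_plain = models.vgg16(pretrained=True).eval()    # no batch norm
vgg_bn = models.vgg16_bn(pretrained=True).eval()    # batch-normalized twin

# Same evaluation for both members of the pair, e.g.:
# acc_plain = noise_accuracy(vgg_plain, imagenet_val_loader)
# acc_bn = noise_accuracy(vgg_bn, imagenet_val_loader)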
A recent work analyzes the adversarial vulnerability of batch-normalized networks at initialization time, and conjectures based on a scaling analysis that, under the commonly used initialization scheme, adversarial vulnerability scales as ∼ √d.

Table 6: Test accuracy (%) of a two-hidden-layer ReLU MLP on MNIST resized to width √d, without BN (left) and with one batch-normalized hidden layer (right).
√d   Clean          Noise        BIM-ℓ∞      |  Clean (BN)    Noise (BN)   BIM-ℓ∞ (BN)
28   97.95 ± 0.08   93.0 ± 0.4   66.7 ± 0.9  |  97.88 ± 0.09  76.6 ± 0.7   22.9 ± 0.7
56   98.19 ± 0.04   93.8 ± 0.1   53.2 ± 0.7  |  98.22 ± 0.02  79.3 ± 0.6   8.6 ± 0.8
84   98.27 ± 0.04   94.3 ± 0.1   47.6 ± 0.8  |  98.28 ± 0.05  80.5 ± 0.6   6.1 ± 0.5

They also show in experiments that independence between vulnerability and the input dimension can be approximately recovered through adversarial training by projected gradient descent (PGD), with a modest trade-off of clean accuracy. Intuitively, the input dimension should be irrelevant, as it does not affect the image complexity. We show that this can be achieved by simpler means and with little to no trade-off through ℓ2 weight decay, where the regularization constant λ corrects the loss scaling as the norm of the input increases with d. We increase the MNIST image width √d from 28 to 56, 84, and 112 pixels. The loss L is predicted to grow like √d for ε-sized attacks by Thm. 4 of the scaling analysis. We confirm that, without regularization, the loss does scale roughly as predicted (see Table 13 of Appendix F). Training with ℓ2 weight decay, however, we obtain adversarial test accuracy ratios of 0.98 ± 0.01, 0.96 ± 0.04, and 1.00 ± 0.03 and clean accuracy ratios of 0.999 ± 0.002, 0.996 ± 0.003, and 0.987 ± 0.004 for √d of 56, 84, and 112, respectively, relative to the original √d = 28 dataset. A more detailed explanation and results are provided in Appendix F.

Next, we repeat this experiment with a two-hidden-layer ReLU MLP, with the number of hidden units equal to half the input dimension, and optionally use one hidden layer with batch norm. To evaluate robustness, 100 iterations of BIM-ℓ∞ were used with a step size of 1e-3 and ε∞ = 0.1. We also report test accuracy with additive Gaussian noise of zero mean and unit variance, the same first two moments as the clean images. Despite a difference in clean accuracy of only 0.08 ± 0.05%, Table 6 shows that, for the original image resolution, batch norm reduced accuracy for noise by 16.4 ± 0.4%, and for BIM-ℓ∞ by 43.8 ± 0.5%. Robustness keeps decreasing as the image size increases, with the batch-normalized network having ∼ 40% less robustness to BIM and 13 − 16% less to noise at all sizes.

We then apply the ℓ2 regularization constants tuned for the respective input dimensions on the linear model to the ReLU MLP with no further adjustments. Table 7 shows that by adding sufficient ℓ2 regularization (λ = 0.01) to recover the original (no BN) accuracy for BIM of ≈ 66% when using batch norm, we induce a test error increase of 1.69 ± 0.01%, which is substantial on MNIST. Furthermore, using the same regularization constant and no batch norm increases clean test accuracy by 1.39 ± 0.04%, and accuracy for the BIM-ℓ∞ perturbation by 21.7 ± 0.4%. Finally, following the guidance in the original work on batch norm to the extreme (λ = 0): when we remove weight decay while using batch norm, accuracy for the ε∞ = 0.1 perturbation is degraded by 79.3 ± 0.3%. In all cases, using batch norm greatly reduced test accuracy for noisy and adversarially perturbed inputs, while weight decay increased accuracy for such inputs. As supplementary evidence, we contrast the "fooling images" and examples of BN vs. ℓ2-regularized models on MNIST and SVHN in Appendix J. Our work examines the effect of batch norm on model robustness at test time.
References with an immediate connection to our work were discussed in the previous sections; here we briefly mention other works that do not have a direct relationship to our experiments, but are relevant to batch norm in general. The original work that introduced batch norm as a technique for improving neural network training and test performance motivated it in terms of "internal covariate shift", referring to the changing distribution of layer outputs, an effect that requires subsequent layers to steadily adapt to the new distribution and thus slows down the training process. Several follow-up works started from the empirical observation that batch norm usually accelerates and stabilizes training, and attempted to clarify the mechanism behind this effect. One argument is that batch-normalized networks have a smoother optimization landscape due to smaller gradients immediately before the batch-normalized layer. The mean-field analysis studies the effect of stacking many batch-normalized layers and proves that this causes gradient explosion that is exponential in network depth for networks without skip connections, a result that holds for any non-linearity. In practice, relatively shallow batch-normalized networks seem to benefit from the "helpful smoothing" of the loss surface, while very deep networks are not trainable. In our work, we found that a single batch-normalized layer suffices to induce severe adversarial vulnerability.

A concurrent submission suggests that BN-induced vulnerability may be due to a mismatch between the tracked mean and variance values at training versus test time. We investigate this hypothesis in Appendix I and find that the use of tracked statistics can play a similar role as tuning the numerical stability constant c; thus, this does not completely account for the vulnerability.

Weight decay's loss-scaling mechanism is complementary to other mechanisms identified in the literature, for instance that it increases the effective learning rate. Our results are consistent with these works in that weight decay reduces the generalization gap (between training and test error), even in batch-normalized networks where it is presumed to have no effect. Given that batch norm is not typically used on all layers, the loss-scaling mechanism persists, although to a lesser degree in this case. Others have performed similar input-dimension scaling experiments as in this work and came to a similar conclusion that the input dimension is irrelevant to adversarial vulnerability. However, they use PGD rather than weight decay to prevent vulnerability from increasing with the input dimension. Although it can be shown that robust optimization is equivalent to parameter-norm regularization for linear models if we allow the ε-ball (aka disturbance δ) to vary with each example, we maintain that the latter is a more efficient approach.

We found that there is no free lunch with batch norm when model robustness is a concern: the accelerated training properties and occasionally higher clean test accuracy come at the cost of increased vulnerability, both to additive noise and to adversarial perturbations. We have shown that there is no inherent relationship between the input dimension and vulnerability. Our results highlight the importance of identifying the disparate mechanisms of regularization techniques. The constant c, originally added to the mini-batch variance in the denominator for numerical stability (named ε in the original work), turns out to be an important hyperparameter in terms of robustness.
It acts as a threshold on the variance of all input dimensions or neurons. When c is much less than the minimum variance over dimensions, it induces boundary tilting along the low-variance dimensions. In Figure 3 we sweep c for MNIST 3 vs. 7 and CIFAR-10, and compare the corresponding clean test accuracy with FGSM and AWGN accuracy for MNIST, and AWGN for CIFAR-10. For MNIST, increasing c allows us to trade off clean accuracy for robustness to FGSM, but is suboptimal compared to ℓ2 weight decay. For these experiments we fixed γ_α = 1 and β_α = 0. For CIFAR-10, eight-layer VGG models were trained with a constant learning rate of 0.01 with no drops, momentum of 0.9, a batch size of 128, and 50 epochs (for computational reasons) over four random seeds. As for BNGD, for this particular experiment we apply BN only to the input layer. A consistent trend is observed where robustness to noise increases greatly as c is increased, but we note that this occurs for c several orders of magnitude greater than default settings.

Figure 3: We sweep c and λ over the range [1e-6, 3e+3] for the VGG8 architecture. Increasing either c or λ has a similar effect, trading clean test accuracy for increased robustness, until the effect is too large and both accuracies degrade. The absolute accuracies are consistently higher without BN. Error bars indicate the standard error of the mean over four and five random seeds for MNIST and CIFAR-10, respectively. We recommend following each curve from right to left to be consistent with our description above. The default setting (highest clean test accuracy, lowest robustness) starts in the bottom right corner, and the initial trade-off between clean test accuracy and robustness is traced up and leftwards until the curves inflect.

We used the PGD implementation from the AdverTorch library, with an example code snippet below. Because we normalize the data to zero mean and unit variance for SVHN, CIFAR-10, and ImageNet, the pixel range was clipped to {±1}, {±2}, and {±2}, respectively. These ranges were obtained by visual inspection of the image histograms, confirming that most of the pixels fall within this range. As a sanity check, we set the perturbation magnitude ε = 0 to confirm that the clipping itself had a negligible impact on test accuracy. As a result, absolute accuracy comparisons with other works where the input is not centered, e.g. x ∈ [0, 1], should be made at ε = 4/255 in those works for ε = 8/255 in this work. During the evaluation of robustness with PGD, we use ε∞ = 0.03 and a step size of ε∞/10 for SVHN and CIFAR-10, and ε∞ = 0.01 for ImageNet. For PGD-ℓ2 we set ε₂ = ε∞√d, where d is the input dimension. To reduce the random error of the evaluation, no random start was used at test time, i.e. rand_init=False. A code snippet for the PGD-ℓ∞ evaluation on SVHN is shown below:

from advertorch.attacks import LinfPGDAttack
import torch.nn as nn

adversary = LinfPGDAttack(
    net, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.03, nb_iter=20, eps_iter=0.003,
    rand_init=False, clip_min=-1.0, clip_max=1.0, targeted=False)

We report 20 iterations in most cases unless stated otherwise, but confirmed that using 40 iterations did not significantly improve upon this to within the measurement random error, i.e., the accuracy was not degraded further. For example, for VGG16 on CIFAR-10 evaluated using 40 iterations of PGD with ε∞ = 0.03 and a step size of ε∞/20, instead of 20 iterations with ε∞/10, accuracy changed from 28.9 ± 0.2% to 28.5 ± 0.3%, a difference of only 0.4 ± 0.5%, less than the random (standard) error.
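For reference, the PGD-ℓ2 budget above follows mechanically from the ℓ∞ budget and the input dimension:

import math

d = 3 * 32 * 32                    # CIFAR-10 input dimension
eps_inf = 0.03
eps_l2 = eps_inf * math.sqrt(d)    # eps_2 = eps_inf * sqrt(d), roughly 1.66 here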
It is natural to wonder if the degradation in robustness arising from the use of BN is simply due to BN increasing the standard test accuracy, given the known trade-off between the two. Note that if the relationship between input X and label Y is free of noise, then there is no such trade-off, and increasing accuracy corresponds to increasing robustness. For the toy problem we studied in § 3, BN actually aligned the decision boundary with the Bayes-optimal solution, so increasing standard accuracy may be intrinsic to the normalization itself in some cases. Given that BN does typically increase clean test accuracy by some small amount on commonly used datasets, we thought it most representative not to intentionally limit the performance of BN. We did, however, find natural cases where BN did not improve clean test accuracy. We trained ResNets {20, 32, 44, 56, 110} using Fixup initialization on CIFAR-10: all consistently obtain about 0.5% higher clean test accuracy than their batch-normalized equivalents, and are also more robust to noise (≈ 15%) and to PGD ℓ∞ and ℓ2 perturbations (≈ 30%), as shown in Figure 4. For MNIST, the results of Tables 6 & 7 also show comparable clean accuracy irrespective of BN, and yet vastly different robustness. Thus, the vulnerability induced by BN is not merely a consequence of increasing standard test accuracy.

We believe that it is most informative to evaluate on corruptions or perturbations that are not presented to models during training, which is why we did not use PGD adversarial training in the main text, given that we evaluate on the same perturbation type. In a practical setting, we cannot assume awareness of potential attacks a priori. For brevity, we opted to report accuracy for an arbitrary small value of ε in the main text. In general, however, it is more informative to plot accuracy vs. ε, to ensure the accuracy reaches zero for reasonably large ε and thereby help rule out gradient masking issues. This also shows that ε was not cherry-picked. Figure 5(b) shows that PGD-ℓ∞ training recovers much of the BN-vulnerability gap when tested on PGD-ℓ∞ perturbations only, although there is still a non-trivial improvement at ε = 8/255, from 38.84% to 41.57% (recall that we only trained with a maximum ε of 4/255, so absolute accuracy is slightly lower than in prior work).

Table 8 shows that BN reduces the mean test accuracy on MNIST-C by 11 ± 2%; the accuracy is reduced from 87.0 ± 0.5% (Per-img) to 76 ± 2% (BN). This is compatible with the results reported for the MNIST-C benchmark, where an 11.15% degradation in absolute test accuracy was found for PGD training, although the variance over multiple runs was not reported. Note that the benchmark uses a slightly different CNN architecture that obtains 91.21% mean test accuracy, slightly higher than our 87.0 ± 0.5% baseline. In particular, the performance of BN and the PGD variants is decimated by altering the brightness of the images, whereas the baseline shows little performance degradation. It is known that PGD training yields a highly sparse network that implements a thresholding operation in the first layer, which forces the model to be invariant to perturbations with ℓ∞-norm less than ε. This excessive invariance is itself another cause of adversarial vulnerability. The PGD-trained batch-normalized model (PGD BN) performs worse than the baseline by double-digit percentages for each of the "Brightness", "Fog", "Impulse Noise", "Stripe", and "Zigzag" corruptions, with 7 ± 2% higher mCE.
Table 8: MNIST-C corruption benchmark for four models: a naturally trained baseline using per-image standardization, "Per-img"; a naturally trained batch-normalized baseline, "BN"; a PGD-trained model, "PGD"; and a PGD-trained batch-normalized model, "PGD BN". See text for PGD details. Values are the accuracy after applying each corruption to the test set, and the standard error over three random initializations of each model, to one or two significant figures. Cells for which the "Per-img" baseline dramatically outperforms all other columns are emphasized in grey. For "BN", "PGD", and "PGD BN", corruption accuracies can fluctuate considerably with the random seed, despite low variance in clean test accuracy. Conversely, the accuracy of Per-img has low variance in all cases.

Table 11: WideResNet accuracy (%) on CIFAR-10 test variants. Columns as in Table 2: clean; noise; PGD-ℓ∞; PGD-ℓ2; CIFAR-10.1 clean; CIFAR-10.1 noise.
WRN (Fixup)  94.6 ± 0.1   69.1 ± 1.1   20.3 ± 0.3   9.4 ± 0.2   87.5 ± 0.3   67.8 ± 0.9
WRN (BN)     95.9 ± 0.1   57.6 ± 1.5   14.9 ± 0.6   8.3 ± 0.3   89.6 ± 0.2   58.3 ± 1.2

This section contains supplementary explanations and results to those of Section 4.

Table 9: SVHN test accuracy (%) for VGG models with and without BN.
BN?  Clean          Noise        PGD-ℓ∞      PGD-ℓ2
No   92.60 ± 0.04   83.6 ± 0.2   27.1 ± 0.3  22.0 ± 0.8
Yes  94.46 ± 0.02   78.1 ± 0.6   10 ± 1      1.6 ± 0.3

For SVHN, models were trained by stochastic gradient descent (SGD) with momentum 0.9 for 50 epochs, with a batch size of 128 and an initial learning rate of 0.01, which was dropped by a factor of ten at epochs 25 and 40. Trials were repeated over five random seeds. We show the results of this experiment in Table 9, finding that BN increased clean test accuracy by 1.86 ± 0.05%, and reduced test accuracy for additive noise by 5.5 ± 0.6%, for PGD-ℓ∞ by 17 ± 1%, and for PGD-ℓ2 by 20 ± 1%. Our first attempt to train VGG models on SVHN with more than eight layers failed, therefore, for a fair comparison, we report the robustness of the deeper models that were only trainable by using BN in Table 10. None of these models obtained much better robustness in terms of PGD-ℓ2, although they did better for PGD-ℓ∞.

Fixup initialization was recently proposed to reduce the use of normalization layers in deep residual networks. As a natural test, we compare a WideResNet (28 layers, width factor 10) with Fixup versus the default architecture with BN. Note that the Fixup variant still contains one BN layer before the classification layer, but the number of BN layers is still greatly reduced. We train WideResNets (WRN) with five unique seeds and report their test accuracies in Table 11. Consistent with prior observations, higher clean test accuracy on CIFAR-10 (i.e., obtained by the WRN compared to VGG) translated to higher clean accuracy on CIFAR-10.1. However, these gains were wiped out by moderate Gaussian noise: VGG8 dramatically outperforms both WideResNet variants subject to noise, achieving 78.9 ± 0.6 vs. 69.1 ± 1.1. Unlike for VGG8, the WRN showed little generalization gap between the noisy CIFAR-10 and 10.1 variants: 69.1 ± 1.1 is reasonably comparable with 67.8 ± 0.9, and 57.6 ± 1.5 with 58.3 ± 1.2. The Fixup variant improves accuracy by 11.6 ± 1.9% for noisy CIFAR-10, 9.5 ± 1.5% for noisy CIFAR-10.1, 5.4 ± 0.6% for PGD-ℓ∞, and 1.1 ± 0.4% for PGD-ℓ2. We believe our work serves as a compelling motivation for Fixup and other techniques that aim to reduce the usage of BN. The role of skip connections should be isolated in future work, since absolute values were consistently lower for residual networks. The discrepancy between the results for additive noise and for white-box BIM perturbations on ImageNet raises a natural question: is gradient masking a factor influencing the success of the white-box results on ImageNet?
No. Consistent with the white-box results, when the target is unnormalized but the source is batch-normalized, top-1 accuracy is 10.5% − 16.4% higher, while top-5 accuracy is 5.3% − 7.5% higher, than vice versa. This can be observed in Table 12 by comparing the diagonals from lower left to upper right. When targeting an unnormalized model, we reduce top-1 accuracy by 16.5% − 20.4% using a source that is also unnormalized, compared to a difference of only 2.1% − 4.9% by matching batch-normalized networks. This suggests that the features used by unnormalized networks are more stable than those of batch-normalized networks. Unfortunately, the pre-trained ImageNet models provided by the PyTorch developers do not include hyperparameter settings or other training details. However, we believe that this speaks to the generality of the results, i.e., that they are not sensitive to hyperparameters.

In Figure 6 we show that BN not only limits the maximum trainable depth, but robustness decreases with the batch size for depths that maintain test accuracy, at around 25 or fewer layers (see Figure 6(a)). Both clean accuracy and robustness showed little to no relationship with depth or batch size in unnormalized networks. A few outliers are observed for unnormalized networks at large depths and batch sizes, which could be due to the reduced number of parameter-update steps that results from a higher batch size and a fixed number of epochs. Note that in Figure 6(a) the bottom row, without batch norm, appears lighter than the equivalent plot above it, with batch norm, indicating that unnormalized networks obtain less absolute peak accuracy than the batch-normalized networks. Given that the unnormalized networks take longer to converge, we prolong training for 40 epochs total. When they do converge, we see more configurations that achieve higher clean test accuracy than batch-normalized networks in Figure 6(b). Furthermore, good robustness can be experienced simultaneously with good clean test accuracy in unnormalized networks, whereas the regimes of good clean accuracy and robustness are still mostly non-overlapping in Figure 6(b).

Figure 6: Obtained by training fully-connected models of depth L and constant width (N_l = 384) with ReLU units by SGD, with learning rate η = 10⁻⁵ · B for batch size B, on MNIST. We train for 10 and 40 epochs in (a) and (b), respectively. The BN parameters γ and β were left as default, momentum disabled, and c = 1e-3. Each coordinate is first averaged over three seeds. Diamond-shaped artefacts for the unnormalized case indicate that one of three seeds failed to train; note that we show an equivalent version of (a) with these outliers removed, and additional batch sizes from 5-20, in Figure 2. Best viewed in colour.

Consider a logistic classification model represented by a neural network consisting of a single unit, parameterized by weights w ∈ R^d and bias b ∈ R, with input denoted by x ∈ R^d and true labels y ∈ {±1}. Predictions are defined by s = w⊤x + b, and the model is optimized through empirical risk minimization, i.e., by applying stochastic gradient descent (SGD) to the loss function in equation 2, where ζ(z) = log(1 + e⁻ᶻ):

L(w, b) = (1/n) Σᵢ ζ( yᵢ (w⊤xᵢ + b) ).    (2)

We note that w⊤x + b is a scaled, signed distance between x and the classification boundary defined by our model. If we define d(x) as the signed Euclidean distance between x and the boundary, then we have: w⊤x + b = ‖w‖₂ · d(x).
Hence, minimizing equation 2 is equivalent to minimizing

L(w, b) = (1/n) Σᵢ ζ_{‖w‖₂}( yᵢ · d(xᵢ) ).    (3)

We define the scaled loss as ζ_{‖w‖₂}(z) := ζ(‖w‖₂ × z), and note that adding an ℓ2 regularization term to equation 3, resulting in equation 5, can be understood as a way of controlling the scaling of the loss function:

L(w, b) = (1/n) Σᵢ ζ_{‖w‖₂}( yᵢ · d(xᵢ) ) + λ‖w‖₂².    (5)

To test this theory empirically, we study a model with a single linear layer (the number of units equals the input dimension) and a cross-entropy loss function on variants of MNIST of increasing input dimension, to approximate the toy model described in the "core idea" of the scaling analysis as closely as possible, but with a model capable of learning. Clearly, this model is too simple to obtain competitive test accuracy, but it is a helpful first step that will subsequently be extended to ReLU networks. The model was trained by SGD for 50 epochs with a constant learning rate of 1e-2 and a mini-batch size of 128. In Table 13 we show that increasing the input dimension, by resizing MNIST from 28 × 28 to various resolutions with PIL.Image.NEAREST interpolation, increases adversarial vulnerability in terms of accuracy and loss. Furthermore, the "adversarial damage", defined as the average increase of the loss after attack, which is predicted to grow like √d by Thm. 4 of the scaling analysis, falls in between that obtained empirically for ε = 0.05 and ε = 0.1 for all image widths except 112, which experiences slightly more damage than anticipated.

It has been noted that independence between vulnerability and the input dimension can be recovered through adversarial-example-augmented training by projected gradient descent (PGD), with a small trade-off in terms of standard test accuracy. We find that the same can be achieved through a much simpler approach: ℓ2 weight decay, with the parameter λ chosen dependent on d to correct for the loss scaling. This way we recover input-dimension-invariant vulnerability with little degradation of test accuracy, e.g., see the results for √d = 112 and ε = 0.1 in Table 13: the accuracy ratio is 1.00 ± 0.03 with weight decay regularization, compared to 0.10 ± 0.09 without. Compared to PGD training, weight decay regularization i) does not have an arbitrary ε hyperparameter that ignores inter-sample distances, ii) does not prolong training by a multiplicative factor given by the number of steps in the inner loop, and iii) is less attack-specific. Thus, we do not use adversarially augmented training, because we wish to convey a notion of robustness to unseen attacks and common corruptions. Furthermore, enforcing robustness to ε-perturbations may increase vulnerability to invariance-based examples, where semantic changes are made to the input, thus changing the Oracle label, but not the classifier's prediction. Our models trained with weight decay obtained 12% higher accuracy (86% vs. 74% correct) compared to batch norm on a small sample of 100 ℓ∞ invariance-based MNIST examples. We make primary use of traditional ℓp perturbations, as they are well studied in the literature and straightforward to compute, but solely defending against these is not the end goal. A more detailed comparison between adversarial training and weight decay can be found in the literature.

The loss-scaling mechanism of weight decay is complementary to other mechanisms identified in the literature recently, for instance that it also increases the effective learning rate. Our results are consistent with these works in that weight decay reduces the generalization gap, even in batch-normalized networks where it is presumed to have no effect. Given that batch norm is not typically used on the last layer, the loss-scaling mechanism persists in this setting, albeit to a lesser degree.
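A sketch of the input-dimension scaling setup (our own reconstruction; the exact λ schedule used in the experiments is not spelled out here, so the scaling rule below, proportional to the input norm √d, is an assumption):

from torchvision import datasets, transforms

def resized_mnist(width, train=True):
    # Resize MNIST to width x width with nearest-neighbour interpolation
    tf = transforms.Compose([
        transforms.Resize(width,
                          interpolation=transforms.InterpolationMode.NEAREST),
        transforms.ToTensor(),
    ])
    return datasets.MNIST("./data", train=train, download=True, transform=tf)

def weight_decay_for(width, lambda_28=0.01):
    # lambda_28 is a hypothetical constant tuned at the original 28 x 28 size;
    # scale with the input norm, ||x||_2 ~ sqrt(d) = width
    return lambda_28 * (width / 28.0)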
Figure caption: Mini-batch membership is indicated by marker fill and class membership by colour. Each layer is projected onto its two principal components. In (b) we scale both components by a factor of 100, as the dynamic range decreases with depth under default initialization. We observe in (a) that some samples are already overlapping at Layer 2, and classes are mixed at Layer 14.

For VGG8, the mean generalization gaps due to batch norm for noise were: Gaussian 9.2 ± 1.9%, Impulse 7.5 ± 0.8%, Shot 5.6 ± 1.6%, and Speckle 4.5 ± 1.6%. After the "noise" category, the next most damaging corruptions (by difference in accuracy due to batch norm) were: Contrast 4.4 ± 1.3%, Spatter 2.4 ± 0.7%, JPEG 2.0 ± 0.4%, and Pixelate 1.3 ± 0.5%. Results for the remaining corruptions were a coin toss as to whether batch norm improved or degraded robustness, as the random error was in the same ballpark as the difference being measured. For VGG13, the batch norm accuracy gap enlarged to 26 − 28% for Gaussian noise at severity levels 3, 4, and 5, and to over 17% for Impulse noise at levels 4 and 5. Averaging over all levels, we have gaps for the noise variants of: Gaussian 20.9 ± 1.4%, Impulse 13.6 ± 0.6%, Shot 14.1 ± 1.0%, and Speckle 11.1 ± 0.8%. Robustness to the other corruptions seemed to benefit from the slightly higher clean test accuracy (higher by 1.3 ± 0.1%) of the batch-normalized VGG13. The remaining generalization gaps varied from (negative) 0.2 ± 1.3% for Zoom blur to 2.9 ± 0.6% for Pixelate. For the WRN, the mean generalization gaps for noise were: Gaussian 12.1 ± 2.8%, Impulse 10.7 ± 2.9%, Shot 8.7 ± 2.6%, and Speckle 6.9 ± 2.6%. Note that the large uncertainty in these measurements is due to high variance for the model with batch norm, on average 2.3% versus 0.7% for Fixup. JPEG compression was next at 4.6 ± 0.3%.

The "Adversarial Spheres" dataset contains points sampled uniformly from the surfaces of two concentric n-dimensional spheres with radii R = 1 and R = 1.3, respectively, and the classification task is to attribute a given point to the inner or outer sphere. We consider the case n = 2, that is, datapoints from two concentric circles. This simple problem poses a challenge to the conventional wisdom regarding batch norm: not only does batch norm harm robustness, it also makes training less stable. In Figure 9 we show that, using the same architecture as in the original "spheres" work, the batch-normalized network is highly sensitive to the learning rate η. We use SGD instead of Adam to avoid introducing unnecessary complexity, and especially since SGD has been shown to converge to the maximum-margin solution for linearly separable data. We use a finite dataset of 500 samples from N(0, I) projected onto the circles. The unnormalized network achieves zero training error for η up to 0.1 (not shown), whereas the batch-normalized network is already untrainable at η = 0.01. To evaluate robustness, we sample 10,000 test points from the same distribution for each class (20k total), and apply noise drawn from N(0, 0.005 × I). We evaluate only the models that could be trained to 100% training accuracy with the smaller learning rate of η = 0.001. The model with batch norm classifies 94.83% of these points correctly, while the unnormalized net obtains 96.06%.
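A sketch of the data generation for the 2D "Adversarial Spheres" variant (our own reconstruction of the sampling step described above):

import numpy as np

def concentric_circles(n_per_class, radii=(1.0, 1.3), seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for label, r in enumerate(radii):
        z = rng.standard_normal((n_per_class, 2))             # samples from N(0, I)
        z = r * z / np.linalg.norm(z, axis=1, keepdims=True)  # project onto circle
        xs.append(z)
        ys.append(np.full(n_per_class, label))
    return np.concatenate(xs), np.concatenate(ys)

x_train, y_train = concentric_circles(250)  # 500 training samples in total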
Figure 9: We train the same two-hidden-layer fully connected network of width 1000 units, using ReLU activations and a mini-batch size of 50, on a 2D variant of the "Adversarial Spheres" binary classification problem. Dashed lines denote the model with batch norm. The batch-normalized model fails to train for a learning rate of η = 0.01, which otherwise converges quickly for the unnormalized equivalent. We repeat the experiment over five random seeds; shaded regions indicate a 95% confidence interval.

Figure 11: By increasing the numerical stability constant c to between 1e-2 and 1e-1, we can achieve the same tilting angle θ as obtained by not using tracking (Figure 10(b)), but in a way that works with arbitrary batch sizes at test time. We test BNGD in eval mode (with tracked mean and variance from the training set) and sweep c from the PyTorch default value of 1e-5 to 1. The weights are then visualized after training.

In a concurrent submission, it is suggested that the main reason for BN-induced vulnerability is a mismatch between the mean and variance statistics at training versus test time. We investigate this hypothesis using the well-controlled MNIST 3 vs. 7 testbed from § 3. Figure 10 shows that by eliminating tracking, i.e. going from plot 10(a) to 10(b), the boundary tilting angle θ w.r.t. the nearest-centroid classifier is reduced by 13.7°. Eliminating BN altogether reduces θ by an additional 25.2°, and using per-image normalization (rather than per-pixel as in BN) achieves a further reduction of θ by 31.3°. In terms of FGSM at ε = 1/10, model 10(b) (BN, no tracking) achieves 38.7% accuracy, whereas the unnormalized baseline 10(c) achieves 66.1%. The observation regarding tracking is insightful and contributes to a more refined understanding of BN. However, this experiment is a counterexample to the claim that the tracking aspect of BN is the main source of vulnerability, compared to boundary tilting arising from the per-dimension normalization procedure itself. As acknowledged in the concurrent work, it is worth re-iterating that using BN without tracked statistics at test time is restricted to large batch sizes, which may not be practical. In Figure 11, we show that the reduction in vulnerability achieved by not using tracked statistics at test time can also be achieved by increasing the numerical stability constant c. Here, the value of c can be used to interpolate between the configurations achieved when tracking is used (shown in Figure 10(a)) or not used (shown in Figure 10(b)) for a fixed c. There remains a distinct qualitative difference between the weight matrices visualized in Figures 10(b) (c = 1e-5, no tracking) through 11(f) (c = 1, with tracking), which both resemble the hand-drawn digit "3", compared to the case without BN in Figures 10(c)-(e), which contain both a "3" as dark pixels and a "7" as bright pixels. This is a consequence of the BN procedure not being class-aware: all pixels of a digit "7" are normalized by the variance of both class "3" and class "7", which more closely resembles a "3" for this dataset, as shown in Figure 10(f). To alleviate possible concerns that gradient masking effects may explain the apparent difference in robustness between batch-normalized and unnormalized networks, we provide qualitative evidence that BN degrades the visual appearance of adversarial examples for strong unbounded attacks.
In other words, we show that these unbounded adversarial examples remain "adversarial" for batch-normalized networks, while semantically relevant features are introduced for inputs classified with high confidence by the baselines, such that a human oracle would agree with the predictions. If the gradients were obfuscated in any way, we would be unable to meaningfully increase the accuracy or prediction confidence of the models beyond random chance, and the examples that we initialize from white noise would remain noisy. Note that this is not the sole source of evidence we provide that gradient masking is not a factor; the common corruption benchmarks, and unbounded attacks reaching 100% success, also support this.

For the MNIST experiments, we train a LeNet-5 CNN variant from the AdverTorch library. All models (batch-normalized or not) are trained for 90 epochs of SGD with a constant learning rate of 0.01 and a mini-batch size of 50. We compare three variants of this model:

1. L2: uses an ℓ2 weight decay regularization constant λ = 0.01.
2. BN: the two convolution layers of the LeNet-5 are batch-normalized, with default settings (γ and β enabled, numerical stability constant set to 1e-5, momentum of 0.1).
3. Per-img: subsumes "L2" and additionally uses per-image normalization, i.e., we subtract the global mean of each image to correct the average intensity, then divide by the standard deviation of all features in the image.

The models obtain similar clean test accuracy: "L2" 98.0%, "BN" 99.2%, "Per-img" 98.4%. To construct adversarial examples, two unbounded attack schemes are used: "fooling images" and the Carlini-Wagner ℓ2 (CWL2) method. The fooling-images procedure starts with white noise, then minimizes the cross-entropy loss w.r.t. a target class until it is predicted with high confidence. "Confidence" here means the softmax margin, i.e., the difference between the max softmax output and the second-largest output. Formally, we sample a starting point x₀ ∼ N(0, 0.1) ∈ R^{28×28}, then update x according to x_n = x_{n−1} + δ, using n = 1 to 100 iterations of PGD-ℓ2 w.r.t. each class label. For "L2" and "BN" we clip pixel values to [0, 1], and for "Per-img" to [−1, 1]. We use a PGD-ℓ2 step size of 0.2. Figure 12 shows the fooling images, their predicted class labels, and prediction confidence for the three configurations. The procedure bears similarity to "Activation Maximization" techniques for interpreting neural networks, but these usually require post-hoc regularization of the input images. This was not required here, owing to the competitive robustness of models 12(a) and 12(b), which makes them somewhat interpretable by default. The batch-normalized model classifies images that only faintly resemble natural digits with full confidence, while those of the baselines contain more clearly class-relevant features. In particular, the "Per-img" case in Figure 12(b) makes no fully saturated confidence predictions, and its fooling images resemble matched filters w.r.t. task-relevant edges. The Carlini-Wagner (CWL2) Adam-optimizer-based attack seeks the smallest perturbation, in terms of ℓ2 distance, which achieves either an arbitrary or a targeted misclassification. We use the arbitrary misclassification objective, a confidence parameter k = 4 to encourage reasonably high-confidence predictions, a "learning rate" of 1e-2, nine binary search steps, and an initial value for the constant c of 1e-3, which balances the minimization of the ℓ2-norm of the perturbation δ against prediction confidence.
We allow up to 10,000 attack iterations, and show the resulting examples in Figure 13. For the same confidence threshold, the mean ℓ2-norm of the perturbations required to change the predictions of the batch-normalized model was ‖δ‖₂ = 0.95, while ‖δ‖₂ = 2.89 was required for the unnormalized model. This is a three-fold improvement in terms of the relevant performance metric for this attack, which always obtains 100% success by virtue of being unbounded.

Figure 13: For CWL2 we set the confidence parameter k = 4, and use the "arbitrary misclassification" objective, i.e., the adversary only needs to flip the label, not achieve any particular label. Plot (b) shows the perturbed images (x_adv = x + δ) for the baseline ℓ2-regularized model (‖δ‖₂ = 2.89), while plot (d) shows those of the batch-normalized model. The "adversarial examples" shown in (b) contain semantic features corresponding to the predicted class. An image which was originally correctly classified as the digit "1" is now classified as "7", but the top of a "7" has been added, as well as the commonly used dash through the center. Several instances of "4" are turned into "9", as the top of the loop of the "4" is completed. For the batch-normalized model, whose adversarial examples are shown in (d), the examples are classified with the same or higher confidence, yet the perturbations are not perceptible and their ℓ2-norms are significantly lower (‖δ‖₂ = 0.95).

For SVHN, we visualize the fooling images in Figure 14, and all combinations of source-to-target misclassification, crafted using a basic iterative gradient method (BIM-ℓ2) for a random sample of each class, in Figure 15. We use a simple CNN architecture described in Table 15. The model is trained for 50 epochs of SGD on the SVHN dataset (excluding the "extra" split) with a constant learning rate of 0.01 and a mini-batch size of 128 in both cases. Since SVHN is imbalanced, the loss is weighted by the inverse frequency of each class. As in § J.1, for the no-BN baseline we use an ℓ2 weight decay regularization constant λ = 0.01, and per-image preprocessing. Although both models obtain a similar test accuracy of about 90%, the fooling images for the unnormalized model contain task-relevant edges, like those of Figure 12(b), whereas the batch-normalized model yields difficult-to-interpret images. Similarly, the examples of grid 15(a) resemble the target class in many cases, while those of 15(b), for the batch-normalized model, have a subtle textured effect and preserve the semantic cues of the source image.

It is known that using a high initial learning rate, which is annealed during training, often achieves higher test accuracy than training with a smaller fixed learning rate, even though the latter approach optimizes the training loss more quickly. To explain this phenomenon, Li et al. introduce the concept of learning order, a training-time property that was found to be predictive of generalization ability, in contrast with post-training notions of model complexity that are typically believed to govern generalization. Learning order refers to different examples being learned at different times, according to their intrinsic noise level and complexity. Li et al. find that using a high initial learning rate prevents the model from fitting high-complexity patterns too early, at the expense of easier-to-fit patterns. When the learning rate is annealed, the model effectively moves on to the next stage of a "curriculum" in which it can learn more challenging patterns.
To examine the interplay between the learning rate and robustness, we conduct a similar experiment on CIFAR-10 with "small" and "large" initial learning rates ("lr"). We evaluate several measures of robustness every 10 epochs to better characterize the regimes in which the vulnerability occurs, and its relationship with the annealing of the learning rate. A shallow VGG variant (VGG8) is trained over 150 epochs using two different initial learning rates (details in the caption of Figure 16), and the process is repeated over three random seeds for BN vs. no-BN. We use standard meta-parameters: SGD with momentum 0.9, a mini-batch size of 128, weight decay 5e-4, and standard data augmentation. We use "Un" to denote the unnormalized model, and "BN" for batch-normalized. The final test accuracies are: "Un, small lr": 88.7 ± 0.1%, "Un, large lr": 90.4 ± 0.1% (diff. 1.7 ± 0.1%); "BN, small lr": 90.3 ± 0.1%, "BN, large lr": 91.3 ± 0.1% (diff. 1.0 ± 0.1%).

To evaluate robustness, we evaluate accuracy on several variants of the test set: "Clean", the original test set; "AWGN", with additive white Gaussian noise (σ² = 1/16); "FGSM", with a perturbation crafted by the one-step fast gradient sign method (ℓ∞); and "PGD", a 40-step PGD-ℓ∞ perturbation with a step size of ε/30. We use ε = 8/255 as the ℓ∞-norm budget for both FGSM and PGD perturbations, and clip the pixel values to [±2]. For the "high lr" case shown in Figure 16(a), the unnormalized model achieves higher robustness than the batch-normalized model after around 70-80 epochs, shortly after the learning rate is first annealed. Meanwhile, Figure 16(b) shows that in the "small lr" case, the unnormalized model achieves higher robustness on all tests compared to its batch-normalized equivalent, at all epochs. The "high lr" result of Figure 16(a) begets the question: if we are willing to sacrifice clean test accuracy and do not anneal the large learning rate, can we mitigate the BN vulnerability? We investigate this in a subsequent experiment depicted in Figure 17, which indeed shows that there appears to be little difference in robustness for this case. We conclude with a few remarks:

1. The "large constant learning rate" is not a particularly practical setting, as 10% absolute clean test accuracy has been left on the table.
2. We interpret the "large constant learning rate" result not as "the vulnerability does not occur", but rather that the baseline fails to reach its potential. Indeed, from Figure 16(b) and Table 2, training the same architecture with a small learning rate yields 52.9 ± 0.6% for the same PGD-ℓ∞ perturbation, while the BN variant achieves less than 40% accuracy at all times (this can be seen by inspection of Figures 16 and 17).
3. The accuracy of the unnormalized model on all test sets increases almost monotonically with prolonged training, and appears to be still increasing at the end of the 150 epochs. For fairness, we did not investigate further training to see how long this trend continues. Conversely, the batch-normalized model converges quickly then plateaus; at this point its accuracy tends to oscillate, or even decline monotonically (particularly for PGD in Figure 16(b)), thus careful early stopping is much more relevant to BN.

Figure 16: We train a VGG model on CIFAR-10 for 150 epochs using a "large" (a) and "small" (b) initial learning rate (0.1 and 0.01, respectively), which is dropped by a factor of ten at epochs 60 and 120 (best seen in the training curves of the right column).
Every ten epochs, we evaluate the accuracy on several variants of the test set: "Clean", the original test set; "AWGN", with additive white Gaussian noise; "FGSM", with a one-step fast gradient sign method ℓ∞ perturbation; and "PGD", a 40-step PGD-ℓ∞ perturbation. See the text for additional meta-parameters. The light grey shading indicates where the accuracy of the unnormalized model exceeds that of the batch-normalized equivalent for a given test. Note: the series "Un PGD" achieves higher accuracy than "BN FGSM" in (b), so we use two different shades to distinguish the overlapping FGSM and PGD areas. As expected, the FGSM curve lies above the PGD curve corresponding to the same model at all points. Error bars indicate the standard error over three random seeds.

Figure 17: To isolate the effect of annealing the learning rate for CIFAR-10, we repeat the experiment of Figure 16, only this time using a large learning rate (0.1) that is fixed during training. In (a), we have our first negative result: the accuracy of the BN variant is either comparable to, or slightly higher than, that of the unnormalized variant. We discuss the implications of this in the text. In (b), we double the weight decay penalty to 1e-3. For the sake of completeness, we do this for both the BN and no-BN cases, even though weight decay seems to have a different mechanism when combined with BN, namely to increase the effective learning rate. Thus, increasing the weight decay leads to instability in the BN case, as the learning rate is already large. The relevant comparison is therefore between the unnormalized curves of (b) and the BN curves of (a), in which the higher-weight-decay unnormalized model is competitive with, and by epoch 130 slightly outperforms, the better-performing batch-normalized model in terms of robustness. Error bars indicate the standard error over three random seeds.
Batch normalization reduces robustness at test-time to common corruptions and adversarial examples.
1,451
scitldr
Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. Machine learning is centred on the philosophy that learning patterns automatically from data is generally better than meticulously crafting rules by hand. This data-driven approach has delivered: today, machine learning techniques can be found in a wide range of application areas, both in AI and beyond. Yet, there is one domain that has conspicuously been left untouched by machine learning: the design of tools that power machine learning itself. One of the most widely used tools in machine learning is optimization algorithms. We have grown accustomed to seeing an optimization algorithm as a black box that takes in a model that we design and the data that we collect and outputs the optimal model parameters. The optimization algorithm itself largely stays static: its design is reserved for human experts, who must toil through many rounds of theoretical analysis and empirical validation to devise a better optimization algorithm. Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn. Recently, BID20 and BID0 introduced two different frameworks for learning optimization algorithms. Whereas BID0 focuses on learning an optimization algorithm for training models on a particular task, BID20 sets a more ambitious objective of learning an optimization algorithm for training models that is task-independent. We study the latter paradigm in this paper and develop a method for learning an optimization algorithm for high-dimensional stochastic optimization problems, like the problem of training shallow neural nets. Under the "Learning to Optimize" framework proposed by BID20, the problem of learning an optimization algorithm is formulated as a reinforcement learning problem. We consider the general structure of an unconstrained continuous optimization algorithm, as shown in Algorithm 1. In each iteration, the algorithm takes a step ∆x and uses it to update the current iterate x (i). In hand-engineered optimization algorithms, ∆x is computed using some fixed formula φ that depends on the objective function, the current iterate and past iterates. Often, it is simply a function of the current and past gradients. Different choices of φ yield different optimization algorithms and so each optimization algorithm is essentially characterized by its update formula φ. Hence, by learning φ, we can learn an optimization algorithm. 
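To make the structure of Algorithm 1 concrete, here is a minimal Python sketch (our own illustration; the gradient interface on f is a hypothetical placeholder): every optimizer is the same loop, and algorithms differ only in the update formula φ:

def optimize(f, x0, phi, num_iters=100):
    iterates = [x0]
    for _ in range(num_iters):
        dx = phi(f, iterates)               # step from objective and history
        iterates.append(iterates[-1] + dx)  # x_i = x_{i-1} + dx
    return iterates

def gradient_descent_phi(f, iterates, lr=0.1):
    # Gradient descent as one choice of phi; assumes f exposes a gradient method
    return -lr * f.gradient(iterates[-1])

Learning the update formula φ itself, rather than hand-designing it, is what the framework described below formalizes.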
BID20 observed that an optimization algorithm can be viewed as a Markov decision process (MDP), where the state includes the current iterate, the action is the step vector ∆x, and the policy is the update formula φ. Hence, the problem of learning φ simply reduces to a policy search problem. In this paper, we build on the method proposed in BID20 and develop an extension that is suited to learning optimization algorithms for high-dimensional stochastic problems. We use it to learn an optimization algorithm for training shallow neural nets and show that it outperforms popular hand-engineered optimization algorithms like ADAM BID18, AdaGrad BID10 and RMSprop BID28 and an optimization algorithm learned using the supervised learning method proposed in BID0. Furthermore, we demonstrate that our optimization algorithm learned from the experience of training on MNIST generalizes to training on other datasets that have very dissimilar statistics, like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. The line of work on learning optimization algorithms is fairly recent. BID20 and BID0 were the first to propose learning general optimization algorithms. BID20 explored learning task-independent optimization algorithms and used reinforcement learning to learn the optimization algorithm, while BID0 investigated learning task-dependent optimization algorithms and used supervised learning. In the special case where the objective functions that the optimization algorithm is trained on are loss functions for training other models, these methods can be used for "learning to learn" or "meta-learning". While these terms have appeared from time to time in the literature BID1 BID29 BID7 BID27, they have been used by different authors to refer to disparate methods with different purposes. These methods all share the objective of learning some form of meta-knowledge about learning, but differ in the type of meta-knowledge they aim to learn. We can divide the various methods into the following three categories. Learning what to learn: methods in this category BID27 aim to learn what parameter values of the base-level learner are useful across a family of related tasks. The meta-knowledge captures commonalities shared by tasks in the family, which enables learning on a new task from the family to be performed more quickly. Most early methods fall into this category; this line of work has blossomed into an area that has since become known as transfer learning and multi-task learning. Learning which model to learn: methods in this category BID7 aim to learn which base-level learner achieves the best performance on a task. The meta-knowledge captures correlations between different tasks and the performance of different base-level learners on those tasks. One challenge under this setting is to decide on a parameterization of the space of base-level learners that is both rich enough to be capable of representing disparate base-level learners and compact enough to permit tractable search over this space. BID8 proposes a nonparametric representation and stores examples of different base-level learners in a database, whereas BID23 proposes representing base-level learners as general-purpose programs. The former has limited representation power, while the latter makes search and learning in the space of base-level learners intractable. BID16 views the (online) training procedure of any base-learner as a black box function that maps a sequence of training examples to a sequence of predictions and models it as a recurrent neural net.
Under this formulation, meta-training reduces to training the recurrent net, and the base-level learner is encoded in the memory state of the recurrent net. Hyperparameter optimization can be seen as another example of methods in this category. The space of base-level learners to search over is parameterized by a predefined set of hyperparameters. Unlike the methods above, multiple trials with different hyperparameter settings on the same task are permitted, and so generalization across tasks is not required. The discovered hyperparameters are generally specific to the task at hand and hyperparameter optimization must be rerun for new tasks. Various kinds of methods have been proposed, such as those based on Bayesian optimization BID17 BID5 BID24 BID26 BID11, random search BID4 and gradient-based optimization BID3 BID9 BID21. Learning how to learn: methods in this category aim to learn a good algorithm for training a base-level learner. Unlike methods in the previous categories, the goal is not to learn about the outcome of learning, but rather the process of learning. The meta-knowledge captures commonalities in the behaviours of learning algorithms that achieve good performance. The base-level learner and the task are given by the user, so the learned algorithm must generalize across base-level learners and tasks. Since learning in most cases is equivalent to optimizing some objective function, learning a learning algorithm often reduces to learning an optimization algorithm. This problem was explored in BID20 and BID0. Closely related is BID2, which learns a Hebb-like synaptic learning rule; because the rule does not depend on the objective function, it does not allow for generalization to different objective functions. Various work has explored learning how to adjust the hyperparameters of hand-engineered optimization algorithms, like the step size BID14; BID12 or the damping factor in the Levenberg-Marquardt algorithm BID22. Related to this line of work is stochastic meta-descent BID6, which derives a rule for adjusting the step size analytically. A different line of work BID13 BID25 parameterizes intermediate operands of special-purpose solvers for a class of optimization problems that arise in sparse coding and learns them using supervised learning. 3 LEARNING TO OPTIMIZE 3.1 SETTING In the "Learning to Optimize" framework, we are given a set of training objective functions f_1, …, f_n drawn from some distribution F. An optimization algorithm P takes an objective function f and an initial iterate x^(0) as input and produces a sequence of iterates x^(1), …, x^(T), where x^(T) is the solution found by the optimizer. We are also given a distribution D that generates the initial iterate x^(0) and a meta-loss L, which takes an objective function f and a sequence of iterates x^(1), …, x^(T) produced by an optimization algorithm as input and outputs a scalar that measures the quality of the iterates. The goal is to learn an optimization algorithm P* such that P* = arg min_P E_{f∼F, x^(0)∼D}[L(f, P(f, x^(0)))]. The meta-loss is chosen to penalize optimization algorithms that exhibit behaviours we find undesirable, like slow convergence or excessive oscillations. Assuming we would like to learn an algorithm that minimizes the objective function it is given, a good choice of meta-loss would then simply be L(f, x^(1), …, x^(T)) = Σ_{i=1}^{T} f(x^(i)), which is equivalent to cumulative regret and can be interpreted as the area under the curve of objective values over time.
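A minimal Python sketch of this cumulative-regret meta-loss, reusing the hypothetical run_optimizer loop from the earlier sketch; f and trajectory are illustrative names.

def meta_loss(f, trajectory):
    # Cumulative regret: the sum of objective values at every iterate, i.e.
    # the area under the curve of objective values over time; both slow
    # convergence and oscillation inflate this quantity.
    return sum(f(x) for x in trajectory)

In practice, the outer expectation over F and D would be estimated by averaging this quantity over sampled objective functions and initial iterates.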
The objective functions f_1, …, f_n may correspond to loss functions for training base-level learners, in which case the algorithm that learns the optimization algorithm can be viewed as a meta-learner. In this setting, each objective function is the loss function for training a particular base-learner on a particular task, and so the set of training objective functions can be loss functions for training a base-learner or a family of base-learners on different tasks. At test time, the learned optimization algorithm is evaluated on unseen objective functions, which correspond to loss functions for training base-learners on new tasks, which may be completely unrelated to tasks used for training the optimization algorithm. Therefore, the learned optimization algorithm must not learn anything about the tasks used for training. Instead, the goal is to learn an optimization algorithm that can exploit the geometric structure of the error surface induced by the base-learners. For example, if the base-level model is a neural net with ReLU activation units, the optimization algorithm should hopefully learn to leverage the piecewise linearity of the model. Hence, there is a clear division of responsibilities between the meta-learner and base-learners. The knowledge learned at the meta-level should be pertinent for all tasks, whereas the knowledge learned at the base-level should be task-specific. The meta-learner should therefore generalize across tasks, whereas the base-learner should generalize across instances. The goal of reinforcement learning is to learn to interact with an environment in a way that minimizes cumulative costs that are expected to be incurred over time. The environment is formalized as a partially observable Markov decision process (POMDP), which is defined by the tuple (S, O, A, p_i, p, p_o, c, T), where p_i(s_0) is the probability density over initial states s_0, p(s_{t+1}|s_t, a_t) is the probability density over the subsequent state s_{t+1} given the current state s_t and action a_t, p_o(o_t|s_t) is the probability density over the current observation o_t given the current state s_t, c: S → R is a function that assigns a cost to each state and T is the time horizon. Often, the probability densities p and p_o are unknown and not given to the learning algorithm. A policy π(a_t|o_t, t) is a conditional probability density over actions a_t given the current observation o_t and time step t. When a policy is independent of t, it is known as a stationary policy. The goal of the reinforcement learning algorithm is to learn a policy π* that minimizes the total expected cost over time. More precisely, π* = arg min_π E[Σ_{t=0}^{T} c(s_t)], where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density q(s_0, o_0, a_0, …, s_T, o_T, a_T) = p_i(s_0) Π_{t=0}^{T} p_o(o_t|s_t) π(a_t|o_t, t) p(s_{t+1}|s_t, a_t). To make learning tractable, π is often constrained to lie in a parameterized family. A common assumption is that π(a_t|o_t, t) = N(µ^π(o_t), Σ^π(o_t)), where N(µ, Σ) denotes the density of a Gaussian with mean µ and covariance Σ. The functions µ^π(·) and possibly Σ^π(·) are modelled using function approximators, whose parameters are learned. In our setting, the state s_t consists of the current iterate x^(t) and features Φ(·) that depend on the history of iterates x^(0), …, x^(t), (noisy) gradients ∇f̂(x^(0)), …, ∇f̂(x^(t)) and (noisy) objective values f̂(x^(0)), …, f̂(x^(t)). The action a_t is the step ∆x that will be used to update the iterate.
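A small Python sketch of the conditionally Gaussian policy assumption above; mu_net and sigma stand in for the learned mean function and a fixed covariance, both assumptions for illustration.

import numpy as np

def sample_action(mu_net, sigma, o_t, rng=None):
    # Draw a_t ~ N(mu_pi(o_t), Sigma_pi); here the covariance is a fixed
    # matrix, a simplification of the observation-dependent case.
    rng = rng or np.random.default_rng(0)
    return rng.multivariate_normal(mu_net(o_t), sigma)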
The observation o_t excludes x^(t) and consists of features Ψ(·) that depend on the iterates, gradients and objective values from recent iterations, and the previous memory state of the learned optimization algorithm, which takes the form of a recurrent neural net. This memory state can be viewed as a statistic of the previous observations that is learned jointly with the policy. Under this formulation, the initial probability density p_i captures how the initial iterate, gradient and objective value tend to be distributed. The transition probability density p captures how the gradient and objective value are likely to change given the step that is taken currently; in other words, it encodes the local geometry of the training objective functions. Assuming the goal is to learn an optimization algorithm that minimizes the objective function, the cost c of a state s_t = (x^(t), Φ(·))^T is simply the true objective value f(x^(t)). Any particular policy π(a_t|o_t, t), which generates a_t = ∆x at every time step, corresponds to a particular (noisy) update formula φ, and therefore a particular (noisy) optimization algorithm. Therefore, learning an optimization algorithm simply reduces to searching for the optimal policy. The mean of the policy is modelled as a recurrent neural net fragment that corresponds to a single time step, which takes the observation features Ψ(·) and the previous memory state as input and outputs the step to take. The reinforcement learning method we use is guided policy search (GPS) BID19, which is a policy search method designed for searching over large classes of expressive non-linear policies in continuous state and action spaces. It maintains two policies, ψ and π, where the former lies in a time-varying linear policy class in which the optimal policy can be found in closed form, and the latter lies in a stationary non-linear policy class in which policy optimization is challenging. In each iteration, it performs policy optimization on ψ, and uses the resulting policy as supervision to train π. More precisely, GPS solves the following constrained optimization problem: min_{θ,η} E_ψ[Σ_{t=0}^{T} c(s_t)] s.t. ψ(a_t|s_t, t; η) = π(a_t|s_t; θ) ∀ a_t, s_t, t, where η and θ denote the parameters of ψ and π respectively, E_ρ[·] denotes the expectation taken with respect to the trajectory induced by a policy ρ, and π(a_t|s_t; θ) := ∫ π(a_t|o_t; θ) p_o(o_t|s_t) do_t. Since there are an infinite number of equality constraints, the problem is relaxed by enforcing equality on the mean actions taken by ψ and π at every time step. So, the problem becomes: min_{θ,η} E_ψ[Σ_{t=0}^{T} c(s_t)] s.t. E_ψ[a_t] = E_ψ[E_{π(a_t|s_t; θ)}[a_t]] ∀ t. This problem is solved using Bregman ADMM BID30, which in each iteration alternately updates η, θ and the dual variables λ_t. The algorithm assumes that ψ(a_t|s_t, t; η) = N(K_t s_t + k_t, G_t), where η := (K_t, k_t, G_t)_{t=1}^{T}, and that π(a_t|s_t; θ) = N(µ^π_ω(s_t), Σ^π), where θ := (ω, Σ^π) and µ^π_ω(·) can be an arbitrary function that is typically modelled using a nonlinear function approximator like a neural net. At the start of each iteration, the algorithm constructs a model of the transition probability density p̃(s_{t+1}|s_t, a_t, t; ζ) = N(A_t s_t + B_t a_t + c_t, F_t), where ζ := (A_t, B_t, c_t, F_t)_{t=1}^{T} is fitted to samples of s_t drawn from the trajectory induced by ψ, which essentially amounts to a local linearization of the true transition probability p(s_{t+1}|s_t, a_t, t). We will use E_ψ̃[·] to denote expectation taken with respect to the trajectory induced by ψ under the modelled transition probability p̃.
Additionally, the algorithm fits local quadratic approximations to c(s_t) around samples of s_t drawn from the trajectory induced by ψ, so that c(s_t) ≈ c̃(s_t) := (1/2) s_t^T C_t s_t + d_t^T s_t + h_t for s_t's that are near the samples. With these assumptions, the subproblem that needs to be solved to update η = (K_t, k_t, G_t) becomes minimizing the expected modelled cost subject to Σ_t E_ψ̃[D_KL(ψ(a_t|s_t, t; η) ‖ ψ(a_t|s_t, t; η′))] ≤ ε, where η′ denotes the old η from the previous iteration. Because p̃ and c̃ are only valid locally around the trajectory induced by ψ, the constraint is added to limit the amount by which η is updated. It turns out that the unconstrained problem can be solved in closed form using a dynamic programming algorithm known as the linear-quadratic-Gaussian (LQG) regulator in time linear in the time horizon T and cubic in the dimensionality of the state space D. The constrained problem is solved using dual gradient descent, which uses LQG as a subroutine to solve for the primal variables in each iteration and increments the dual variable on the constraint until it is satisfied. Updating θ is straightforward, since expectations taken with respect to the trajectory induced by π are always conditioned on s_t and all outer expectations over s_t are taken with respect to the trajectory induced by ψ. Therefore, π is essentially decoupled from the transition probability p(s_{t+1}|s_t, a_t, t) and so its parameters can be updated without affecting the distribution of s_t's. The subproblem that needs to be solved to update θ therefore amounts to a standard supervised learning problem. Since ψ(a_t|s_t, t; η) and π(a_t|s_t; θ) are Gaussian, D_t(θ, η) can be computed analytically. More concretely, if we assume Σ^π to be fixed for simplicity, the subproblem that is solved for updating θ = (ω, Σ^π) reduces to regressing the mean actions of π onto those of ψ, where the last term of the objective is the squared Mahalanobis distance between the mean actions of ψ and π at time step t, which is intuitive as we would like to encourage π to match ψ. The problem of learning high-dimensional optimization algorithms presents challenges for reinforcement learning algorithms due to high dimensionality of the state and action spaces. For example, in the case of GPS, because the running time of LQG is cubic in the dimensionality of the state space, performing policy search even in the simple class of linear-Gaussian policies would be prohibitively expensive when the dimensionality of the optimization problem is high. Fortunately, many high-dimensional optimization problems have underlying structure that can be exploited. For example, the parameters of neural nets are equivalent up to permutation among certain coordinates. More concretely, for fully connected neural nets, the dimensions of a hidden layer and the corresponding weights can be permuted arbitrarily without changing the function they compute. Because permuting the dimensions of two adjacent layers can permute the weight matrix arbitrarily, an optimization algorithm should be invariant to permutations of the rows and columns of a weight matrix. A reasonable prior to impose is that the algorithm should behave in the same manner on all coordinates that correspond to entries in the same matrix. That is, if the values of two coordinates in all current and past gradients and iterates are identical, then the step vector produced by the algorithm should have identical values in these two coordinates. We will refer to the set of coordinates on which permutation invariance is enforced as a coordinate group.
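As noted above, the subproblem for updating θ amounts to a standard supervised learning problem; a minimal Python sketch of that regression follows, with illustrative assumptions: a linear policy mean (the paper's π is a recurrent net) and no Mahalanobis weighting.

import numpy as np

def update_pi(states, psi_mean_actions):
    # states: (num_samples, state_dim); psi_mean_actions: (num_samples, action_dim).
    # Fit a linear mean function mu_pi(s) = W @ s by ordinary least squares
    # so that pi's mean actions match those of the supervising policy psi.
    sol, *_ = np.linalg.lstsq(states, psi_mean_actions, rcond=None)
    return sol.T  # W: apply as W @ s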
For the purposes of learning an optimization algorithm for neural nets, a natural choice would be to make each coordinate group correspond to a weight matrix or a bias vector. Hence, the total number of coordinate groups is twice the number of layers, which is usually fairly small. In the case of GPS, we impose this prior on both ψ and π. For the purposes of updating η, we first impose a block-diagonal structure on the parameters A_t, B_t and F_t of the fitted transition probability density p̃(s_{t+1}|s_t, a_t, t; ζ) = N(A_t s_t + B_t a_t + c_t, F_t), so that for each coordinate in the optimization problem, the dimensions of s_{t+1} that correspond to the coordinate only depend on the dimensions of s_t and a_t that correspond to the same coordinate. As a result, p̃(s_{t+1}|s_t, a_t, t; ζ) decomposes into multiple independent probability densities p̃^j(s^j_{t+1}|s^j_t, a^j_t, t; ζ^j), one for each coordinate j. Similarly, we also impose a block-diagonal structure on C_t for fitting c̃(s_t) and on the parameter matrix of the fitted model for π(a_t|s_t; θ). Under these assumptions, K_t and G_t are guaranteed to be block-diagonal as well. Hence, the Bregman divergence penalty term D(η, θ) decomposes into a sum of Bregman divergence terms, one for each coordinate. We then further constrain dual variables λ_t, sub-vectors of parameter vectors and sub-matrices of parameter matrices corresponding to each coordinate group to be identical across the group. Additionally, we replace the weight ν_t on D(η, θ) with an individual weight on each Bregman divergence term for each coordinate group. The problem then decomposes into multiple independent subproblems, one for each coordinate group. Because the dimensionality of the state subspace corresponding to each coordinate is constant, LQG can be executed on each subproblem much more efficiently. Similarly, for π, we choose a µ^π_ω(·) that shares parameters across different coordinates in the same group. We also impose a block-diagonal structure on Σ^π and constrain the appropriate sub-matrices to share their entries. We describe the features Φ(·) and Ψ(·) at time step t, which define the state s_t and observation o_t respectively. Because of the stochasticity of gradients and objective values, the state features Φ(·) are defined in terms of summary statistics of the history of iterates x^(0), …, x^(t), (noisy) gradients ∇f̂(x^(0)), …, ∇f̂(x^(t)) and (noisy) objective values f̂(x^(0)), …, f̂(x^(t)). We define statistics which we will refer to as the average recent iterate, the average recent gradient and the average recent objective value respectively; each is an average of the corresponding quantity over the most recent few iterations. The state features Φ(·) consist of the relative change in the average recent objective value, the average recent gradient normalized by the magnitude of a previous average recent gradient, and a previous change in average recent iterate relative to the current change in average recent iterate. Note that all operations are applied element-wise. Also, whenever a feature becomes undefined (i.e., when the time step index becomes negative), it is replaced with the all-zeros vector. Unlike state features, which are only used when training the optimization algorithm, observation features Ψ(·) are used both during training and at test time. Consequently, we use noisier observation features that can be computed more efficiently and require less memory overhead.
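A Python sketch of features of this kind; the window length, lag and normalizing constants are illustrative assumptions rather than the paper's exact definitions.

import numpy as np

def recent_average(history, i, window=3):
    # Average of the `window` most recent entries up to index i.
    lo = max(i - window + 1, 0)
    return np.mean(np.asarray(history[lo:i + 1], dtype=float), axis=0)

def state_features(values, grads, i, lag=5, eps=1e-8):
    # Relative change in the average recent objective value, and the average
    # recent gradient normalized by a previous one (element-wise); features
    # with negative time indices are replaced with zeros, as in the text.
    if i - lag < 0:
        return 0.0, np.zeros_like(np.asarray(grads[0], dtype=float))
    v_now, v_prev = recent_average(values, i), recent_average(values, i - lag)
    rel_change = (v_now - v_prev) / (abs(v_prev) + eps)
    g_now, g_prev = recent_average(grads, i), recent_average(grads, i - lag)
    return rel_change, g_now / (np.abs(g_prev) + eps)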
The observation features Ψ(·) consist of analogous quantities computed from the iterates, gradients and objective values over recent iterations. For clarity, we will refer to training of the optimization algorithm as "meta-training" to differentiate it from base-level training, which will simply be referred to as "training". We meta-trained an optimization algorithm on a single objective function, which corresponds to the problem of training a two-layer neural net with 48 input units, 48 hidden units and 10 output units on a randomly projected and normalized version of the MNIST training set with dimensionality 48 and unit variance in each dimension. We modelled the optimization algorithm using a recurrent neural net with a single layer of 128 LSTM BID15 cells. We used a time horizon of 400 iterations and a mini-batch size of 64 for computing stochastic gradients and objective values. We evaluate the optimization algorithm on its ability to generalize to unseen objective functions, which correspond to the problems of training neural nets on different tasks/datasets. We evaluate the learned optimization algorithm on three datasets, the Toronto Faces Dataset (TFD), CIFAR-10 and CIFAR-100. These datasets are chosen for their very different characteristics from MNIST and each other: TFD contains 3300 grayscale images that have relatively little variation and has seven different categories, whereas CIFAR-100 contains 50,000 colour images that have varied appearance and has 100 different categories. All algorithms are tuned on the training objective function. For hand-engineered algorithms, this entails choosing the best hyperparameters; for learned algorithms, this entails meta-training on the objective function. We compare to seven hand-engineered algorithms: stochastic gradient descent, momentum, conjugate gradient, L-BFGS, ADAM, AdaGrad and RMSprop. In addition, we compare to an optimization algorithm meta-trained using the method described in BID0 on the same training objective function (training a two-layer neural net on randomly projected and normalized MNIST) under the same setting (a time horizon of 400 iterations and a mini-batch size of 64). First, we examine the performance of various optimization algorithms on similar objective functions. The optimization problems under consideration are those for training neural nets that have the same number of input and hidden units (48 and 48) as those used during meta-training. The number of output units varies with the number of categories in each dataset. We use the same mini-batch size as that used during meta-training. As shown in FIG1, the optimization algorithm meta-trained using our method (which we will refer to as Predicted Step Descent) consistently descends to the optimum the fastest across all datasets. On the other hand, other algorithms are not as consistent and the relative ranking of other algorithms varies by dataset. This suggests that Predicted Step Descent has learned to be robust to variations in the data distributions, despite being trained on only one objective function, which is associated with a very specific data distribution that characterizes MNIST. It is also interesting to note that while the algorithm meta-trained using the method of BID0 (which we will refer to as L2LBGDBGD) performs well on CIFAR, it is unable to reach the optimum on TFD. Next, we change the architecture of the neural nets and see if Predicted Step Descent generalizes to the new architecture.
We increase the number of input units to 100 and the number of hidden units to 200, so that the number of parameters is roughly increased by a factor of 8. As shown in FIG2, Predicted Step Descent consistently outperforms other algorithms on each dataset, despite having not been trained to optimize neural nets of this architecture. Interestingly, while it exhibited a bit of oscillation initially on TFD and CIFAR-10, it quickly recovered and overtook other algorithms, which is reminiscent of the phenomenon reported in BID20 for low-dimensional optimization problems. This suggests that it has learned to detect when it is performing poorly and knows how to change tack accordingly. L2LBGDBGD experienced difficulties on TFD and CIFAR-10 as well, and slowly diverged. Next, we examine how robust Predicted Step Descent is to stochasticity of the gradients. To this end, we take a look at its performance when we reduce the mini-batch size from 64 to 10 on both the original architecture with 48 input and hidden units and the enlarged architecture with 100 input units and 200 hidden units. As shown in Figure 3, on the original architecture, Predicted Step Descent still outperforms all other algorithms and is able to handle the increased stochasticity fairly well. In contrast, conjugate gradient and L2LBGDBGD had some difficulty handling the increased stochasticity on TFD and, to a lesser extent, on CIFAR-10. In the former case, both diverged; in the latter case, both were progressing slowly towards the optimum. On the enlarged architecture, Predicted Step Descent experienced some significant oscillations on TFD and CIFAR-10, but still managed to achieve a much better objective value than all the other algorithms. Many hand-engineered algorithms also experienced much greater oscillations than previously, suggesting that these optimization problems are inherently harder. L2LBGDBGD diverged fairly quickly on these two datasets. Finally, we try doubling the number of iterations. As shown in FIG4, despite being trained over a time horizon of 400 iterations, Predicted Step Descent behaves reasonably beyond the number of iterations it is trained for. In this paper, we presented a new method for learning optimization algorithms for high-dimensional stochastic problems. We applied the method to learning an optimization algorithm for training shallow neural nets. We showed that the algorithm learned using our method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on unrelated tasks/datasets like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. We also demonstrated that the learned optimization algorithm is robust to changes in the stochasticity of gradients and the neural net architecture.
We learn an optimization algorithm that generalizes to unseen tasks
1,452
scitldr
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data. With the success and heightened adoption of neural networks for real world tasks, some questions remain poorly answered. For a given task and model architecture, how much data would one require to reach a prescribed performance level? How big a model would be needed? Addressing such questions is made especially difficult by the mounting evidence that large, deep neural networks trained on large-scale data outperform their smaller counterparts, rendering the training of high performance models prohibitively costly. Indeed, in the absence of practical answers to the above questions, surrogate approaches have proven useful. One such common approach is model scaling, where one designs and compares small-scale models, and applies the obtained architectural principles at a larger scale. Despite these heuristics being widely used to various degrees of success, the relation between the performance of a model in the small- and large-scale settings is not well understood. Hence, exploring the limitations or improving the efficiency of such methods remains subject to trial and error. In this work we circle back to the fundamental question: what is the (functional) relation between generalization error and model and dataset sizes? Critically, we capitalize on the concept of model scaling in its strictest form: we consider the case where there is some given scaling policy that completely defines how to scale up a model from small to large scales. We include in this context all model parameters, such that traversing from one scale (in which all parameters are known) to another requires no additional resources for specifying the model (e.g., architecture search/design). We empirically explore the behavior of the generalization error over a wide range of datasets and models in vision and language tasks. While the error landscape seems fairly complex at first glance, we observe the emergence of several key characteristics shared across benchmarks and domains. Chief among these characteristics is the emergence of regions where power-law behavior approximates the error well both with respect to data size, when holding model size fixed, and vice versa. Motivated by these observations, we establish criteria which a function approximating the error landscape should meet. We propose an intuitive candidate for such a function and evaluate its quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen) to large scale (unseen) errors. Critically, our functional approximation of the error depends on both model and data sizes.
We find that this function leads to a high quality fit and extrapolation. For instance, the mean and standard deviation of the relative errors are under 2% when fitting across all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet and WikiText-103 datasets, with similar results for other datasets. To the best of our knowledge, this is the first work that provides simultaneously: • A joint functional form of the generalization error landscape-as dependent on both data and model size-with few, interpretable degrees of freedom (section 5). • Direct and complete specification (via the scaling policy) of the model configuration attaining said generalization error across model and dataset sizes. • Highly accurate approximation of error measurements across model and data scales via the functional form, evaluated on different models, datasets, and tasks (section 6). • Highly accurate error prediction from small to large model and data (section 7). We conclude with a discussion of some implications of our findings as a practical and principled tool for understanding network design at small scale and for efficient computation and trade-off design in general. We hope this work also provides a useful empirical leg to stand on and an invitation to search for a theory of generalization error which accounts for our findings. Model scaling: A number of studies have explored the effect of model scaling on performance. For instance, image classification networks can be scaled by depth (number of layers) or width (number of channels). More recently, it was demonstrated that scaling width, depth, and input resolution in combination has positive effects larger than scaling each factor in isolation. However, this relationship has yet to be quantified in a predictive form: by how much will error change with model scaling? In this work, we focus on finding a constructive functional form for determining the model given a specified performance. Data scaling: It has long been recognized that more data improves performance, and various studies report such trends in both computer vision and language processing tasks. A number of prior studies observed power-law relations between the generalization error and training data size. Most relevant to our work, Hestness et al. explored the effect of data size on the generalization error in vision, language, and speech tasks, and observed a strikingly consistent power-law behavior in a large set of experiments. However, while these studies point to the empirical existence of a power law in terms of data, they do not offer tools for predicting the performance given a specified model. Nor do they offer low-cost methods to specify the model configuration which would attain the power law with data dependency. Indeed, Hestness et al. had to search over models and their configurations at large scale to exhibit their findings, incurring prohibitive computational costs. In contrast, we demonstrate a constructive recipe, where we directly predict the test performance at large scale and specify the full model configuration which attains it (with no need for large-scale search), given performance at small scale. Table 1: (a) Training data size (number of words) and model size (number of parameters excluding word embeddings) for language modeling tasks. (b) Training data size (number of images) and model size (number of parameters) for image classification tasks.
Theoretical error bounds: Much attention has been given to theoretical explanations of the generalization capabilities of deep neural networks. While fully engaging with this literature is beyond our scope, we note that recent studies have derived bounds involving power-law dependencies in both model and data size. We leave it as an open question for future work to find theoretical explanations for the empirical behavior and the functional form we investigate in this work. Notation: Let D_n = {(x_i, y_i)}_{i=1}^{n} denote a labeled (training) dataset with n samples or datapoints. Let f_m denote a neural network whose size is the number of parameters m, such that ŷ = f_m(x) is the predicted label. Let ε(n, m) be the generalization error as a function of n and m, measured by a performance metric (e.g., top-1 accuracy or cross-entropy loss) on a held-out test set. We refer to this error function as the error landscape. Dataset scaling: We wish to scale datasets while preserving the original distribution. For image classification, we uniformly subsample all classes by a constant ratio, thus preserving the relative sample size per class. We limit the maximal sub-sampling to avoid eradicating any class. For language modeling, where the number of classes (vocabulary items) has a very long tail distribution, we randomly sample sentences such that the total number of sampled words will be a certain fraction of the original dataset. Table 1 reports the data scales we use. In all tasks the held-out test set remains untouched for evaluating the error. Model scaling: We are critically interested in a method where moving across scales is defined by some scaling function, such that no additional significant computation would be incurred. We thus consider the case where the model architecture is given and the model size determines how to scale it. For instance, one may scale width (number of channels in convolutional networks, hidden state size in recurrent networks), depth (number of layers), do compound scaling, or more generally define a function tying the model degrees of freedom and size. We focus primarily on width scaling in our experiments; the model scales are reported in Table 1. We also perform selected depth scaling to demonstrate flexibility with respect to the scaling method. Hyper-parameters: For similar reasons we wish to avoid hyper-parameter search at large scales, and thus avoid the temptation to tune hyper-parameters accordingly (learning rate, regularization, etc.). Therefore, we hold all hyper-parameters fixed. This enables us to construct a functional form that fits the error landscape and can be used to predict the error across scales while completely defining the model attaining it. We consider pros and cons of this approach in the discussion (section 8). We experiment with both vision and language tasks. We use 6 benchmark datasets for image classification and 3 for language modeling. For image classification, we train ResNet and WRN models with stochastic gradient descent (SGD). In section 6.2 we explore the effect of varying architectures and optimizers for a fixed task (CIFAR100), adding VGG16 and DenseNet. These two observations together can be summarized as the following pair of relations: ε(m, n) ≈ b m^{−β} + c_m (equation 1), where b, β, c_m may depend on the data size n, s.t. as m grows, ε → c_m; and ε(m, n) ≈ a n^{−α} + c_n (equation 2), where a, α, c_n may depend on the model size m, s.t. as n grows, ε → c_n. Example fits to this form (allowing a, α, c_n to be fit per m) are seen in figure 2a (left) and figure 2b (left).
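As an illustration, such a single-variable power-law-plus-constant fit for one fixed model size can be done with SciPy; the arrays below are hypothetical placeholders, not measurements from the paper.

import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    # eps(n) ~ a * n**(-alpha) + c at a fixed model size m.
    return a * np.power(n, -alpha) + c

n_sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])       # dataset sizes (assumed)
errors = np.array([0.60, 0.45, 0.33, 0.26, 0.22])   # measured errors (assumed)
(a, alpha, c), _ = curve_fit(power_law, n_sizes, errors, p0=[1.0, 0.5, 0.1], maxfev=10000)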
O3 Joint properties: The behavior of the error when scaling model size while holding data size fixed, and vice versa, extends to the entire error landscape in a well-behaved manner, such that the manifold ε(m, n) is smooth everywhere as a function of both model and data scales. 5.1 CRITERIA Motivated by the above observations, we now consider a functional approximation for the error landscape. In particular, let us consider function families meeting the following criteria which augment and restrict our observations: C1 As either model or dataset size goes to zero, the expected performance is equivalent to a random-guess error level ε_0. C2 For a given dataset size, scaling up the model will result in an initial increase in performance, which will then saturate, taking the form in equation 1. C3 For a given model size, scaling up the dataset will result in an initial increase in performance, which will then saturate, taking the form in equation 2. C4 There exists an irreducible error ε_∞, intrinsic to the dataset. C5 The function must be smooth everywhere and monotonic non-increasing in terms of model and data size (observation O3). While there are many possible function families meeting the above criteria, below we propose a simple function family for our evaluation. We do not claim that this is in fact the true underlying dependency, but rather that it serves as a good approximation of the error landscape-consistent with these criteria. As a first insightful step, consider the implications of satisfying C2 and C3 simultaneously. By examining the limiting behavior as m or n grow, we have: as m grows large, ε(m → ∞, n) ≈ a n^{−α} + c; as n grows large, ε(m, n → ∞) ≈ b m^{−β} + c. Thus, a consistent form satisfying C2 and C3 simultaneously is ε̃(m, n) = a(m) n^{−α(m)} + b(n) m^{−β(n)} + c_∞ (equation 3), where c_∞ is a constant not dependent on either m or n. Let us now examine the simplified case where a, b, α, β are constant: ε̃(m, n) = a n^{−α} + b m^{−β} + c_∞ (equation 4), where α ≥ 0 and β ≥ 0 control the global rate at which error decreases with data and model size, respectively, a > 0 and b > 0 are a form of unit conversion between data and model sizes and error, and c_∞ > 0 is the asymptotic lower value attainable. This function is a special case of equation 3 and meets criteria C2 and C3 by construction. Importantly C4 and C5 are also met. However, by giving up the dependence of a, b, α, β on m, n, this function does not meet criterion C1. We thus need to model the transition from the initial random-guess level to the power-law region. We propose to parameterize the transition using the following envelope (complex) function: ε̂(m, n) = ε_0 · |ε̃(m, n)| / |ε̃(m, n) − i η| (equation 5), where i = √−1. Here the simple pole at η controls the transition point from the initial random-guess level ε_0 as (m, n) increase. As (m, n) grow, ε̃ → c_∞ and the final irreducible error is approached. The random-guess error, ε_0, is a known parameter determined by dataset statistics (e.g., (N_classes − 1)/N_classes for a balanced dataset). Note that due to our choice of rational envelope, we can divide the form in equation 4 by a constant. Without loss of generality, let us choose a = 1. Note that while the forms in equations 3 and 4 are well motivated, the approach taken for modeling the transition is solely a convenience one. In fact, the transition(s) as function of m and n may be captured in the functional forms of a, b, α, β or another envelope mechanism. We leave a more refined investigation of the nature of the transitions to future work. We wish to empirically estimate the quality of the proposed functional parameterization as a fit to the true error landscape.
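A small Python sketch of evaluating this candidate form, following the reconstruction above with a = 1; the parameters are free quantities to be fit, not values from the paper.

import numpy as np

def eps_hat(m, n, b, alpha, beta, c_inf, eta, eps0):
    # Power-law core of equation 4 (with a = 1).
    core = n ** (-alpha) + b * m ** (-beta) + c_inf
    # Rational envelope with a pole at i*eta: interpolates between the
    # random-guess level eps0 (small m, n) and the power-law region.
    return eps0 * np.abs(core) / np.abs(core - 1j * eta)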
Let ε̂(n, m; θ) be the parametric function family (equation 5) approximating the error landscape ε(n, m), where θ = {α, β, b, c_∞, η}. Define the divergence δ(n, m; θ) as the relative difference between the estimated error ε̂(m, n; θ) and the true error ε(m, n): δ(n, m; θ) := (ε̂(m, n; θ) − ε(m, n)) / ε(m, n). We fit a least squares regression model to find the best parameters minimizing the divergence. In this section, we fit the function using 10-fold cross-validation across all model/data configurations m, n (see Table 1) and evaluate the fit quality. (In the next section, we perform extrapolation experiments, from seen to unseen points.) We perform the fit separately for each dataset and evaluate its quality by the mean µ and standard deviation σ of the divergence δ over all points (m, n). See Appendix B.1 for experimental details. As figure 3 shows, estimated test accuracy is highly correlated with actual test accuracy for various datasets, with worst-case values µ < 1% and σ < 5%. Note that the number of free parameters is small (|θ| ≤ 6) compared to the number of points (42-49 model-data configurations), demonstrating the appropriateness of the proposed function for modeling the complex error landscape. Here we verify that our results extend to another canonical scaling policy, namely depth scaling. Figure 4a shows the error landscape with depth scaling on CIFAR10, exhibiting the same characteristics as width scaling. Figures 4b and 4c show error landscape estimation for both cases of width and depth scaling, exhibiting small and comparable fit errors (confidence intervals < 3%). Since the difference in approximation quality is effectively indistinguishable when scaling depth or width orthogonally, we expect compound scaling to adhere to the same functional form. Indeed, we verified this on the publicly available results (model scaling only) for EfficientNet. However, the model/optimizer settings differ in multiple aspects across the different tasks, rendering the comparison of, say, different optimizers, challenging. In this section we verify that the functional form holds when varying the optimizer and/or the architecture on the same task, namely image classification on CIFAR100. In addition to the previously examined setting of WRN with SGD, we add four more settings: two well known architectures (VGG and DenseNet), each trained with both SGD and Adam optimizers. See Appendix A for experimental details. Figure 5 exhibits consistent, accurate fit values across all architecture/optimizer settings, with mean divergence of µ < 1% (std: σ < 6%; confidence intervals < 4%). In this section, we evaluate the ability of our functional approximation to extrapolate beyond seen model/data configurations. The primary question we ask is: can we predict the error of a large model/data configuration from the errors of smaller-scale model/data configurations? To do this, we fit the least squares regression on a subset of the configurations and predict the error on larger, unseen configurations. More formally, let (m_i, n_j) denote a given model/data configuration. We first estimate parameters θ_ij by fitting the function in equation 5 on all points of at most that size (m ≤ m_i, n ≤ n_j). Then we predict the error ε(m, n) at all points corresponding to larger configurations (m > m_i, n > n_j) using the estimated θ_ij. Finally, we measure the divergence δ(m, n) between the estimated error and the actual error at all larger configurations. This process is illustrated in figure 6a. Figure 6b shows the results of one such extrapolation experiment, on ImageNet.
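Continuing the sketch, the least squares fit of θ on the divergence might look as follows, assuming the eps_hat function above; M, N and E are hypothetical flattened arrays of model sizes, data sizes and measured errors over the configuration grid.

import numpy as np
from scipy.optimize import least_squares

def fit_landscape(M, N, E, eps0):
    def residuals(theta):
        b, alpha, beta, c_inf, eta = theta
        est = eps_hat(M, N, b, alpha, beta, c_inf, eta, eps0)
        return (est - E) / E          # the divergence delta(n, m; theta)
    theta0 = np.array([1.0, 0.5, 0.5, 0.1, 1.0])   # heuristic initialization
    return least_squares(residuals, theta0).x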
In this case, we have fit the functional form on all configurations of model size m ≤ m_i = M/16 and data size n ≤ n_j = N/8, and predicted the error on all larger configurations. As the figure shows, the extrapolation is highly accurate, with a mean divergence of µ = 4.5% (std: σ = 4.7%). Figure 6c reports a similar experiment on WikiText-103. Here, again, we see very good extrapolation, with a mean divergence of µ = 0.5% (std: σ = 1.7%). Note that each extrapolation is run 10 times with different random initializations of θ_ij in the least squares, with negligible effect on the prediction. In practice, we may be interested in extrapolation quality with different subsets of configurations. Appendix D provides detailed extrapolation results on multiple subsets of configurations, for both vision and language datasets. Generally, the extrapolation performs well once not ill-posed, which may be caused by lack of signal in the region of the initial "random-guess" level, or in degenerate cases like having fewer measurements than the number of free parameters in θ. In this work, through insights gained by the joint examination of the dependencies of generalization error on both model and data size, we arrive at criteria for functions consistent with the form of the generalization error under a given scaling policy. We consider one such function and find it to be in very good agreement with the actual behavior of the error landscape. Indeed, the agreement is strong enough that extrapolation from small to large scale becomes feasible: the function predicts the behavior of the generalization error in practice for the practical case of scaling models and data. We discuss several example implications of knowing such a functional form. Small-scale network development: At the core of small fidelity searches is the notion of performance rank comparison between models. However, small scale and large scale ranks are not assured to be consistent. If indeed a functional form such as the one empirically found in this work holds very generally, then, in contrast, one can safely assess scaling rank between models at small scale, with the assurance that it remains consistent. This suggests that one would be well served by searching over scaling policies, and there are pertinent examples of such successes. The functional form also explains the limitation of small-scale search: once reaching the random-guess error level, where the sensitivity to scaling vanishes, the informativeness of ranking diminishes. Finally, the functional form allows direct usage of differentiable methods for NAS. Principled design: Knowing the error landscape function facilitates reasoning about the choice of (m, n) attaining a specified error level. In other words, for any given error level, one can solve Eq. 5 for m, n based on small-scale measurements. Thus, one can quantitatively answer design questions regarding the expected (in particular, large-scale) relations between m, n, and ε. In fact, Eq. 5 provides direct answers to questions such as "how much data would one require to reach a prescribed performance level?" or "how big a model would be needed?" Imposing constraints is also straightforward. For instance, consider the following question: "What is the maximal model size possibly needed (useful), when the data is limited in size, n = n_lim (for a given model architecture and scaling policy)?"
For a fixed dataset size, model scaling eventually contributes marginally to error reduction and becomes negligible when b m^{−β} ≪ n_lim^{−α}, i.e., for m ≫ (b n_lim^{α})^{1/β}. Similarly, the maximal useful amount of data for a limited-size model m_lim is reached once n^{−α} ≪ b m_lim^{−β}, i.e., for n ≫ (m_lim^{β}/b)^{1/α}. Moreover, Eq. 5 allows for complex design trade-offs. Generally, given some design-trade-off cost function C(m, n, ε), one can minimize such cost s.t. Eq. 5. For example, consider the case of optimizing for efficient computation, which has both practical and environmental importance. Since the number of FLOPs during training is ∝ m · n (for constant epoch budget), the trade-off cost function may be formulated as C(FLOPs, ε) = C(m·n, ε). Further, since a constant error contour is very well approximated by c = n^{−α} + b m^{−β} (Eq. 5), dataset and models may be scaled with optimal resource efficiency with no effect on performance by solving: min_{m,n} m·n s.t. n^{−α} + b m^{−β} = c. The solution gives us the optimal-computational-efficiency ratio of model to data size: b m^{−β} / n^{−α} = α/β. Limitations: We have made a few simplifying assumptions in our choice of approximating function, in particular in how to model the transition from the initial random-guess error level and the union of the random-guess level of the two scenarios (small model with large data and large model with small data). We leave a more detailed examination of the behavior of the transitions from random-guess error levels and refinements of the functional form to future work. Critically, the restrictive nature of our scaling framework (all parameters and hyperparameters described by a policy) is both a blessing and a challenge. The blessing comes in fulfilling the goal of finding simultaneously both the form of the generalization error and the full specification of the model and hyperparameters that attain it across scales. The challenge is that we have demonstrated in this work only the case of constant hyper-parameters. We conjecture that the relation between model configuration and hyperparameter choice may entail the potential to formulate hyperparameter-scaling policies similar in nature to the model-scaling policies, and that these too fall under the scope of the form we find in this work. This too will be the subject of future work. We hope that this work will bring the actual functional form of the generalization error in this practical case of scaling to the fore, both in practice and as an empirical leg to stand on in the quest for its theoretical origins. Scaling the models' width is performed by multiplying the number of channels in each convolutional layer and the width of the hidden linear layers by a constant factor and rounding to the nearest integer. The ranges of width scales (and data scales) for the main experiments are detailed in Table 1b. In section 6.2, we perform width scaling for two additional architectures, VGG16bn and DenseNet (L=40, k=32). The VGG and DenseNet models were likewise modified for width scaling from their reference implementations. The model scales in this case are 4^{−k}, 0 ≤ k ≤ 5, for both VGG and DenseNet. Depth-scaling, in the CIFAR10 case (section 6.1), is performed by appending extra layers within each block. In the main experiments, training is done via SGD with a momentum of 0.9, weight decay of 1e-4 and initial learning rate of 0.1. For ImageNet we train for 90 epochs, decreasing the learning rate by a multiplicative factor of 0.1 after 30 and after 60 epochs. We use a batch size of 16. For all other vision datasets we use a batch-size of 128.
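A small Python sketch of the design quantities derived above, under the fitted contour form c = n^(−α) + b m^(−β); the closed forms follow the reconstructed expressions, and the slack threshold is an assumption.

def max_useful_model_size(n_lim, b, alpha, beta, slack=0.01):
    # Model size beyond which b * m**(-beta) < slack * n_lim**(-alpha),
    # i.e. further model scaling is negligible for data limited to n_lim.
    return (b * n_lim ** alpha / slack) ** (1.0 / beta)

def compute_optimal_ratio(m, n, b, alpha, beta):
    # At the compute-optimal point (minimizing m * n on an error contour),
    # this ratio equals alpha / beta.
    return (b * m ** (-beta)) / (n ** (-alpha))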
We begin training with a learning rate of 0.1, run for 200 epochs, and reduce by a multiplicative factor of 0.1 after 80, 120, and 160 epochs. For the VGG and DenseNet experiments on CIFAR100 in section 6.2, we train with both SGD and Adam optimizers. We train VGG for 170 epochs and DenseNet for 300 epochs. Adam hyperparameters are default, with an initial learning rate of 1e-3. When training with SGD, we retain the initial learning rate, batch size, momentum, and weight-decay as in the main experiment (at 0.1, 128, 0.9, and 1e-4 respectively) and follow standard stepped learning rate schedules: for VGG, a learning rate multiplicative factor of 0.1 after 80, 120, and 160 epochs; for DenseNet, a learning rate multiplicative factor of 0.1 after 150 and 225 epochs. We evaluate on several datasets commonly used for (word-level) language modeling: Penn Treebank, WikiText-2, and WikiText-103. The PTB is a relatively small language modeling dataset of news texts, with a vocabulary of 10K unique words and about 900K/70K/80K training/validation/test words. WikiText-2 is drawn from Wikipedia articles and it is both larger and richer, with a vocabulary of 33K words and 2M/210K/240K training/validation/test words. WikiText-103 is also based on Wikipedia, but larger still, with a vocabulary of 270K words and 100M training words (and the same validation and test sets as WikiText-2). We experiment with two standard models for language modeling: Transformer-XL and AWD-LSTM. Transformer-XL is a recent language modeling architecture that is based on transformer self-attention, but modified to better learn dependencies beyond a fixed length by adding a segment-level recurrence mechanism. It has achieved state-of-the-art results on multiple benchmarks. We use the official PyTorch implementation with their base configuration: 16 layers, embedding size of 410, inner dimension of 2100 in the fully-connected layers, and 10 attention heads. Training is done with Adam. See the implementation for other details. For scaling experiments, we decimate the inner dimension. We use Transformer-XL for WikiText-103. AWD-LSTM is a long short-term memory language model with adaptive weight averaging. We use the official implementation with the recommended configuration: 3 layers, embedding size of 400, and hidden state size of 1150. Training is done with SGD. We use AWD-LSTM for PTB and WikiText-2 and follow the recommended settings for these two datasets. For scaling experiments, we decimate the hidden state size. In the experiment described in section 6, we fit a least squares regression model to find the best parameters minimizing the divergence δ(m, n), evaluated at configurations m, n as in Table 1: θ* = arg min_θ Σ_{m,n} δ(m, n; θ)^2. We quantify the quality of the fit by the mean µ and standard deviation σ of the fitted divergence by performing standard 10-fold cross validation over all points (m, n), with confidence intervals reported as ±1 std over the folds. In this appendix, we provide error landscape measurements and estimations for all datasets, corresponding to the experiment in section 6. The results are shown in 3D graphs similar to figure 1. In each such graph, the z-axis is the logarithm of the generalization error as a function of two independent variables: the model size m and the data size n. The 3D graph is deliberately portrayed in log-log-log scale, as we cover a very large range of data scales and model scales and a correspondingly wide range of errors.
This view is a useful one when one wishes to evaluate both large dynamic ranges (simultaneously both very large and very small values) and is especially vivid in portraying power-law-like dependencies; a power law naturally forms a straight line in a log-log view. In each figure, subfigure (a) shows the measured error landscape in log-log-log scale, where each point (blue dot) is the error resulting from training with a model/data configuration m, n. Subfigure (b) shows the best-fit estimated error landscape. The surface is a linear interpolation between the points, which is then projected on the model-error (m, ε), data-error (n, ε), and model-data (m, n) planes. The contour plots on each one of these planes are the projections of the error landscape surface, and are useful in considering the behavior of the surface when holding one dimension constant. We call to attention several interesting observations on the datasets explored: • As quantified rigorously in section 6, the fits perform well across error ranges. In these surfaces, one also gets a qualitative sense of the fit adequacy across the wide ranges of the dataset and model scales directly. While perhaps slightly difficult to assess the surface directly, a helpful view is to consider the similarity between the projections of the actual and projected surfaces. • With increasing model size, indeed typically the error does remain saturated. However, in one of our tested datasets (figure 12) there was a renewed slight increase. We verify that this is indeed over-fitting, in the sense that there is no corresponding increase in the training error. We note that the functional form we find can actually be used to veer clear of the m, n regions where such over-fitting may occur. • The simplifying approach taken by considering the random guess levels (and associated transitions) for small models or small data as identical seems to work fairly well, with some deviation apparent by examining figure 15. Indeed the simplification can hold well for balanced datasets, but need not for imbalanced ones such as in the task of language modeling. Thus, a relaxation of this simplification is expected to be important conceptually and practically. Here we provide detailed extrapolation results for all datasets. All figures are structured in a similar way. Each subplot shows estimated (y-axis) vs. actual error (x-axis) (0 to 1 scale on both axes). Each subplot is located at the coordinate of the maximal data and model size given for the task of performing the fit to the functional form in equation 5. This is the point at the top-right corner of the green dots in the illustration in figure 6a. The target is to find the error-landscape values for unseen, larger scales of both model and data (red points in the same illustration). Going from left to right in each figure indicates observed measurements of the error from models of an increasing fraction w.r.t. the full size. Going from bottom to top indicates observed measurements of the error from dataset sizes of an increasingly large fraction of the full dataset. In each subplot, every point shows the estimated vs. actual error on a model-data configuration. Points that were given for fitting the function are colored in green, while unseen points that were not used are in red. The red points show the estimation error vs. actual error when extrapolating to all larger models and data sizes. In each subplot, the mean and standard deviation over all divergences δ at target points are given in text.
In each experiment, the fit of the parameters was repeated 100 times, with different random initializations of θ. The shaded bands show one standard deviation across these runs. The quality of the extrapolation is critically dependent on the signal provided in the (green) fitted points. Two limiting factors are evident by examining the figures below, which both play a role in the well-posedness of the solution:

• The proximity to the initial random guess level. Only upon transitioning from the initial error plateau does meaningful signal about the scaling rates become available. Indeed, for scales still in, or close to, the initial error region, one sees poor extrapolation results; see figures 18, 19, and 21, and the vivid origin of this phenomenon by examining figures 11, 10, and 12.

• A second source of ill-posedness is tied to the number of configurations used for the estimation of θ. Clearly, when this is small, one cannot expect the extrapolation to be stable. In fact, at least two measurements in each scaling dimension (model/data) are needed, and no less than the number of parameters in θ in total. Indeed, for all the plots in this appendix, the smallest scale of m, n is omitted from the graph such that the lowermost row and leftmost column span exactly two model and data scales correspondingly. Of course, there is nothing directly tying the number of points to the scale of the configurations measured, and one can decouple these two factors by taking more closely spaced samples at small scale.

• When both the above factors are not limiting the measurement, one readily sees that for divergences of no more than a few percent, it is sufficient to measure model/data configurations which are far-ranged from the configurations which one wishes to extrapolate to.
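The repeated-fit stability check described above is a small loop over random initializations; a minimal sketch, again reusing fit() and predicted_error() from the earlier snippets:

```python
# Restart the fit from 100 random initializations of theta; the spread
# across runs gives the shaded one-standard-deviation bands.
def repeated_fits(m, n, err, repeats=100):
    thetas = [fit(m, n, err, seed=s) for s in range(repeats)]
    preds = np.stack([predicted_error(t, m, n) for t in thetas])
    return preds.mean(axis=0), preds.std(axis=0)   # band = mean +/- 1 std
```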
We predict the generalization error and specify the model which attains it across model/data scales.
1,453
scitldr
Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research. We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions. Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state. This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations. We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.

Currently, the best performing methods on many reinforcement learning benchmark problems combine model-free reinforcement learning methods with policies represented using deep neural networks BID18 BID8. Despite reaching or surpassing human-level performance on many challenging tasks, deep model-free reinforcement learning methods that learn purely from the reward signal learn in a way that differs greatly from the manner in which humans learn. In the case of learning to play a video game, a human player not only acquires a strategy for achieving a high score, but also gains a degree of mastery of the environment in the process. Notably, a human player quickly learns which aspects of the environment are under their control as well as how to control them, as evidenced by their ability to rapidly adapt to novel reward functions BID22.

Focusing learning on mastery of the environment instead of optimizing a single scalar reward function has many potential benefits. One benefit is that learning is possible even in the absence of an extrinsic reward signal or with an extrinsic reward signal that is very sparse. Another benefit is that an agent that has fully mastered its environment should be able to reach arbitrary achievable goals, which would allow it to generalize to tasks on which it wasn't explicitly trained. Building reinforcement learning agents that aim for environment mastery instead of, or in addition to, learning about a scalar reward signal is currently an open challenge.

One way to represent such knowledge about an environment is using an environment model. Model-based reinforcement learning methods aim to learn accurate environment models and use them either for planning or for training a policy. While learning accurate environment models of some visually rich environments is now possible BID33 BID7 BID15, using learned models in model-based reinforcement learning has proved to be challenging and model-free approaches still dominate common benchmarks.

We present a new model-free agent architecture called Discriminative Embedding Reward Networks, or DISCERN for short. DISCERN learns to control an environment in an unsupervised way by learning purely from the stream of observations and actions. The aim of our agent is to learn a goal-conditioned policy π θ (a|s; s g) BID19 BID37 which can reach any goal state s g that is reachable from the current state s. We show how to learn a goal achievement reward function r(s; s g) that measures how similar state s is to state s g using a mutual information objective at the same time as learning π θ (a|s; s g).

In the standard reinforcement learning setup an agent interacts with an environment over discrete time steps. At each time step t the agent observes the current state s t and selects an action a t according to a policy π(a t |s t). The agent then receives a reward r t = r(s t, a t) and transitions to the next state s t+1. The aim of learning is to maximize the expected discounted return R = Σ_{t=0}^∞ γ^t r_t of policy π, where γ ∈ [0, 1] is a discount factor. In this work we focus on learning only from the stream of actions and observations in order to forego the need for an extrinsic reward function.
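The two learned components introduced above, a goal-conditioned policy π_θ(a|s; s_g) and a goal achievement reward r(s; s_g), can be given a simple interface sketch. The module structure, dimensions, and hidden sizes below are assumptions for illustration, not the paper's architecture.

```python
# Illustrative interface for a goal-conditioned policy: the state and the
# goal are concatenated and mapped to action logits.
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, goal_dim, n_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, s, s_g):
        # logits over actions, conditioned on both state and goal
        return self.net(torch.cat([s, s_g], dim=-1))
```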
The resulting learned reward function r(s; s g) measures similarity in the space of controllable aspects of the environment instead of in the space of raw observations. Crucially, the DISCERN architecture is able to deal with goal states that are not perfectly reachable, for example, due to the presence of distractor objects that are not under the agent's control. In such cases the goal-conditioned policy learned by DISCERN tends to seek states where the controllable elements match those in the goal state as closely as possible. We demonstrate the effectiveness of our approach on three domains -- Atari games, continuous control tasks from the DeepMind Control Suite, and DeepMind Lab. We show that our agent learns to successfully achieve a wide variety of visually-specified goals, discovering underlying degrees of controllability of an environment in a purely unsupervised manner and without access to an extrinsic reward signal.

Motivated by the idea that an agent capable of reaching any reachable goal state s g from the current state s has complete mastery of its environment, we pose the problem of learning in the absence of rewards as one of learning a goal-conditioned policy π θ (a|s; s g) with parameters θ. More specifically, we assume that the agent interacts with an environment defined by a transition distribution p(s t+1 |s t, a t). We define a goal-reaching problem as follows. At the beginning of each episode, the agent receives a goal s g sampled from a distribution over possible goals p goal. For example, p goal could be the uniform distribution over all previously visited states. The agent then acts for T steps according to the goal-conditioned policy π θ (a|s; s g), receiving a reward of 0 for each of the first T − 1 actions and a reward of r(s T ; s g) after the last action, where r(s; s g) ∈ [0, 1] for all s and s g. The goal achievement reward function r(s; s g) measures the degree to which being in state s achieves goal s g. The episode terminates upon the agent receiving the reward r(s T ; s g) and a new episode begins.

It is straightforward to train π θ (a|s; s g) in a tabular environment using the indicator reward r(s; s g) = 1{s = s g}. We are, however, interested in environments with continuous high-dimensional observation spaces. While there is extensive prior work on learning goal-conditioned policies BID19 BID37 BID0 BID16 BID34, the reward function is often hand-crafted, limiting the generality of the approaches. In the few cases where the reward is learned, the learning objective is typically tied to a pre-specified notion of visual similarity. Learning to achieve goals based purely on visual similarity is unlikely to work in complex, real world environments due to the possible variations in appearance of objects, or goal-irrelevant perceptual context. We now turn to the problem of learning a goal achievement reward function r φ (s; s g) with parameters φ for high-dimensional state spaces.
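A minimal sketch of one goal-reaching episode as defined above: a goal is drawn from p_goal, the agent acts for T steps, the first T − 1 rewards are 0, and the terminal reward is r(s_T; s_g). Here env, sample_goal, policy, and reward_fn are assumed stand-ins, not names from the paper.

```python
# One goal-reaching episode under the fixed-length protocol above.
def goal_episode(env, sample_goal, policy, reward_fn, T):
    s = env.reset()
    s_g = sample_goal()                  # e.g. uniform over visited states
    trajectory = []
    for t in range(T):
        a = policy(s, s_g)
        s_next = env.step(a)
        r = reward_fn(s_next, s_g) if t == T - 1 else 0.0
        trajectory.append((s, a, r))
        s = s_next
    return trajectory, s_g
```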
We aim to simultaneously learn a goal-conditioned policy π θ and a goal achievement reward function r φ by maximizing the mutual information between the goal state s g and the achieved state s T, as shown in (1):

I(s_g; s_T) = H(s_g) − H(s_g | s_T) = H(s_g) + E[log p(s_g | s_T)].    (1)

Note that we are slightly overloading notation by treating s g as a random variable distributed according to p goal. Similarly, s T is a random variable distributed according to the state distribution induced by running π θ for T steps for goal states sampled from p goal.

The prior work of BID13 showed how to learn a set of abstract options by optimizing a similar objective, namely the mutual information between an abstract option and the achieved state. Following their approach, we simplify (1) in two ways. First, we rewrite the expectation in terms of the goal distribution p goal and the goal-conditioned policy π θ. Second, we lower bound the expectation term by replacing p(s g |s T) with a variational distribution q φ (s g |s T) with parameters φ following BID3, leading to

I(s_g; s_T) ≥ H(s_g) + E_{s_g ∼ p_goal, s_T ∼ π_θ(·|s_g)}[log q_φ(s_g | s_T)].    (2)

Finally, we discard the entropy term H(s g) from (2) because it does not depend on either the policy parameters θ or the variational distribution parameters φ, giving our overall objective

max_{θ, φ} E_{s_g ∼ p_goal, s_T ∼ π_θ(·|s_g)}[log q_φ(s_g | s_T)].    (3)

This objective may seem difficult to work with because the variational distribution q φ is a distribution over possible goals s g, which in our case are high-dimensional observations, such as images. We sidestep the difficulty of directly modelling the density of high-dimensional observations by restricting the set of possible goals to be a finite subset of previously encountered states that evolves over time BID25. Restricting the support of q φ to a finite set of goals turns the problem of learning q φ into a problem of modelling the conditional distribution of possible intended goals given an achieved state, which obviates the requirement of modelling arbitrary statistical dependencies in the observations.

Optimization: The expectation in the DISCERN objective (3) is with respect to the distribution of trajectories generated by the goal-conditioned policy π θ acting in the environment against goals drawn from the goal distribution p goal. We can therefore optimize this objective with respect to policy parameters θ by repeatedly generating trajectories and performing reinforcement learning updates on π θ with a reward of log q φ (s g |s T) given at time T and 0 for other time steps. Optimizing the objective with respect to the variational distribution parameters φ is also straightforward since it is equivalent to a maximum likelihood classification objective. As will be discussed in the next section, we found that using a reward that is a non-linear transformation mapping log q φ (s g |s T) to [0, 1] worked better in practice. Nevertheless, since the reward for the goal-conditioned policy is a function of log q φ (s g |s T), training the variational distribution q φ amounts to learning a reward function.

Communication Game Interpretation: Dual optimization of the DISCERN objective has an appealing interpretation as a cooperative communication game between two players -- an imitator that corresponds to the goal-conditioned policy and a teacher that corresponds to the variational distribution. At the beginning of each round or episode of the game the imitator is provided with a goal state. The aim of the imitator is to communicate the goal state to the teacher by taking T actions in the environment.
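A minimal sketch of the two updates implied by objective (3): the policy is reinforced with log q_φ(s_g | s_T) as a terminal reward, and q_φ itself is trained by maximum likelihood (cross-entropy) to recover the goal from the achieved state. The q_phi callable, which scores a finite candidate goal set given s_T, is an assumed interface.

```python
import torch
import torch.nn.functional as F

def terminal_reward(q_phi, s_T, goal_index, candidate_goals):
    # Reward for the policy: log-probability the teacher assigns to the
    # true goal, given only the achieved state s_T.
    logits = q_phi(s_T, candidate_goals)            # [num_candidates]
    return F.log_softmax(logits, dim=-1)[goal_index]

def teacher_loss(q_phi, s_T, goal_index, candidate_goals):
    # Maximum likelihood for q_phi: cross-entropy on the true goal index
    # (goal_index is a 0-dim long tensor).
    logits = q_phi(s_T, candidate_goals)
    return F.cross_entropy(logits[None], goal_index[None])
```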
After the imitator takes T actions, the teacher has to guess which state from a set of possible goals was given to the imitator purely from observing the final state s T reached by the imitator. The teacher does this by assigning a probability to each candidate goal state that it was the goal given to the imitator at the start of the episode, i.e. it produces a distribution p(s g |s T). The objective of both players is for the teacher to guess the goal given to the imitator correctly, as measured by the log probability assigned by the teacher to the correct goal.

We now describe the DISCERN algorithm -- a practical instantiation of the approach for jointly learning π θ (a|s; s g) and r(s; s g) outlined in the previous section.

Goal distribution: We adopt a non-parametric approach to the problem of proposing goals, whereby we maintain a fixed size buffer G of past observations from which we sample goals during training. We update G by replacing the contents of an existing buffer slot with an observation from the agent's recent experience according to some substitution strategy; in this work we considered two such strategies, detailed in Appendix A3. This means that the space of goals available for training drifts as a function of the agent's experience, and states which may not have been reachable under a poorly trained policy become reachable and available for substitution into the goal buffer, leading to a naturally induced curriculum. In this work, we sample training goals for our agent uniformly at random from the goal buffer, leaving the incorporation of more explicitly instantiated curricula to future work.

We train a goal achievement reward function r(s; s g), used to compute rewards for the goal-conditioned policy, based on a learned measure of state similarity. We parameterize r(s; s g) as the positive part of the cosine similarity between s and s g in a learned embedding space, although shaping functions other than rectification could be explored. The state embedding in which we measure cosine similarity is the composition of a feature transformation h(·) and a learned L 2 -normalized mapping ξ φ (·). In our implementation, where states and goals are represented as 2-D RGB images, we take h(·) to be the final layer features of the convolutional network learned by the policy in order to avoid learning a second convolutional network. We find this works well provided that, while training r, we treat h(·) as fixed and do not adapt the convolutional network's parameters with respect to the reward learner's loss. This has the effect of regularizing the reward learner by limiting its adaptive capacity while avoiding the need to introduce a hyperparameter weighing the two losses against one another.

We train ξ φ (·) according to a goal-discrimination objective suggested by the objective in (3). However, rather than using the set of all goals in the buffer G as the set of possible classes in the goal discriminator, we sample a small subset for each trajectory. Specifically, the set of possible classes includes the goal g for the trajectory and K decoy goals d_1, . . ., d_K sampled from the goal buffer G. Writing e(s) = ξ_φ(h(s)) for the learned embedding, we maximize the log likelihood given by

log q̂(s_g = g | s_T; d_1, . . ., d_K) = log [ exp(β e(s_T)·e(g)) / ( exp(β e(s_T)·e(g)) + Σ_{k=1}^{K} exp(β e(s_T)·e(d_k)) ) ],    (4)

where β is an inverse temperature hyperparameter which we fix to K + 1 in all experiments. Note that (4) is a maximum log likelihood training objective for a softmax nearest neighbour classifier in a learned embedding space, making it similar to a matching network BID47.
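A sketch of the reward and the decoy-based discrimination objective as reconstructed in (4): states are embedded with an L2-normalized learned map, the reward is the rectified cosine similarity, and the embedding is trained to pick out the true goal among K decoys. The tensor shapes and the xi module interface are assumptions.

```python
import torch
import torch.nn.functional as F

def embed(xi, h_s):
    return F.normalize(xi(h_s), dim=-1)      # L2-normalized embedding e(s)

def achievement_reward(xi, h_sT, h_g):
    cos = (embed(xi, h_sT) * embed(xi, h_g)).sum(-1)
    return torch.clamp(cos, min=0.0)         # positive part of cosine sim

def discrimination_loss(xi, h_sT, h_g, h_decoys, beta):
    # h_decoys: [K, d_h]; logits are inverse-temperature-scaled similarities
    e_T = embed(xi, h_sT)                             # [d]
    cands = torch.stack([h_g, *h_decoys])             # true goal first
    logits = beta * (embed(xi, cands) @ e_T)          # [K + 1]
    target = torch.tensor(0)                          # index of the true goal
    return F.cross_entropy(logits[None], target[None])
```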
Intuitively, updating the embedding ξ φ using the objective in (4) aims to increase the cosine similarity between e(s T) and e(g) and to decrease the cosine similarity between e(s T) and the decoy embeddings e(d 1), . . ., e(d K). Subsampling the set of possible classes as we do is a known method for approximate maximum likelihood training of a softmax classifier with many classes BID6.

We use max(0, e(s T)·e(g)) as the reward for reaching state s T when given goal g. We found that this reward function is better behaved than the reward log q̂(s g = g|s T ; d 1, . . ., d K) suggested by the DISCERN objective in Section 3, since it is scaled to lie in [0, 1]. The reward we use is also less noisy since, unlike log q̂, it does not depend on the decoy states.

Goal-conditioned policy: The goal-conditioned policy π θ (a|s; s g) is trained to optimize the goal achievement reward r(s; s g). In this paper, π θ (a|s; s g) is an ε-greedy policy of a goal-conditioned action-value function Q with parameters θ. Q is trained using Q-learning and minibatch experience replay; specifically, we use the variant of Q(λ) due to Peng (see Chapter 7, BID40). We use a form of goal relabelling BID19 or hindsight experience replay BID0 BID32 as a source of successfully achieved goals as well as to regularize the embedding e(·). Specifically, for the purposes of parameter updates (in both the policy and the reward learner) we substitute, with probability p HER, the goal with an observation selected from the final H steps of the trajectory, and consider the agent to have received a reward of 1. The motivation, in the case of the policy, is similar to that of previous work, i.e. that being in state s t should correspond to having achieved the goal of reaching s t. When employed in the reward learner, it amounts to encouraging temporally consistent state embeddings BID29 BID38, i.e. encouraging observations which are nearby in time to have similar embeddings.

Pseudocode for the DISCERN algorithm, decomposed into an experience-gathering (possibly distributed) actor process and a centralized learner process, is given in Algorithm 1:

procedure ACTOR
  repeat
    ... /* See Appendix A3 */
    with probability p HER: sample s HER uniformly from {s T −H, . . ., s T} and set g ← s HER, r T ← 1
    otherwise: compute e(g) and set r T ← max(0, e(s T)·e(g))
    send (s 1:T, a 1:T, r 1:T, g) to the learner
    poll the learner periodically for updated values of θ, φ
    reset the environment if the episode has terminated
  until termination
procedure LEARNER
  Input: batch size B, number of decoys K, initial policy parameters θ, initial goal embedding parameters φ
  repeat
    assemble a batch of experience B = {(s 1:T, a 1:T, r 1:T, g)} of size B
    use an off-policy reinforcement learning algorithm to update θ based on B
    update φ to maximize (4)

The problem of reinforcement learning in the context of multiple goals dates at least to BID19, where the problem was examined in the context of grid worlds where the state space is small and enumerable.
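The hindsight substitution step in the actor can be sketched directly; the default values below (25% substitution, last 3 frames) match the hyperparameters reported later in the appendix, and the None return marks "compute the learned reward instead".

```python
import random

def maybe_relabel(trajectory_states, goal, p_her=0.25, H=3):
    # With probability p_her, replace the goal by a recent observation and
    # declare it achieved (reward 1); otherwise keep the sampled goal.
    if random.random() < p_her:
        goal = random.choice(trajectory_states[-H:])
        return goal, 1.0
    return goal, None   # terminal reward computed by the learned r(s; s_g)
```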
While the above works assume a goal achievement reward to be available a priori, our work includes an approach to learning a reward function for goal achievement jointly with the policy. Several recent works have examined reward learning for goal achievement in the context of the Generative Adversarial Networks (GAN) paradigm BID12. The SPIRAL BID11 algorithm trains a goal conditioned policy with a reward function parameterized by a Wasserstein GAN BID1 discriminator. Similarly, AGILE BID2 learns an instruction-conditional policy where goals in a grid-world are specified in terms of predicates which should be satisfied, and a reward function is learned using a discriminator trained to distinguish states achieved by the policy from a dataset of instruction, goal state pairs. Reward learning has also been used in the context of imitation. BID17 derives an adversarial network algorithm for imitation, while time-contrastive networks BID38 leverage pre-trained ImageNet classifier representations to learn a reward function for robotics skills from video demonstrations, including robotic imitation of human poses. Universal Planning Networks (UPNs) BID39 ) learn a state representation by training a differentiable planner to imitate expert trajectories. Experiments showed that once a UPN is trained the state representation it learned can be used to construct a reward function for visually specified goals. Bridging goal-conditioned policy learning and imitation learning, BID34 learns a goal-conditioned policy and a dynamics model with supervised learning without expert trajectories, and present zero-shot imitation of trajectories from a sequence of images of a desired task. A closely related body of work to that of goal-conditioned reinforcement learning is that of unsupervised option or skill discovery. BID26 proposes a method based on an eigendecomposition of differences in features between successive states, further explored and extended in BID27. Variational Intrinsic Control (VIC) BID13 leverages the same lower bound on the mutual information as the present work in an unsupervised control setting, in the space of abstract options rather than explicit perceptual goals. VIC aims to jointly maximize the entropy of the set of options while making the options maximally distinguishable from their final states according to a parametric predictor. Recently, BID9 showed that a special case of the VIC objective can scale to significantly more complex tasks and provide a useful basis for low-level control in a hierarchical reinforcement learning context. Other work has explored learning policies in tandem with a task policy, where the task or environment rewards are assumed to be sparse. propose a framework in which low-level skills are discovered in a pre-training phase of a hierarchial system based on simple-to-design proxy rewards, while BID36 explore a suite of auxiliary tasks through simultaneous off-policy learning. Several authors have explored a pre-training stage, sometimes paired with fine-tuning, based on unsupervised representation learning. BID35 and BID23 employ a two-stage framework wherein unsupervised representation learning is used to learn a model of the observations from which to sample goals for control in simple simulated environments. BID31 propose a similar approach in the context of model-free Q-learning applied to 3-dimensional simulations and robots. Goals for training the policy are sampled from the model's prior, and a reward function is derived from the latent codes. 
This contrasts with our non-parametric approach to selecting goals, as well as our method for learning the goal space online and jointly with the policy. An important component of our method is a form of goal relabelling, introduced to the reinforcement learning literature as hindsight experience replay by BID0, based on the intuition that any trajectory constitutes a valid trajectory which achieves the goal specified by its own terminal observation. Earlier, BID32 employed a related scheme in the context of supervised learning of motor programs, where a program encoder is trained on pairs of trajectory realizations and programs obtained by expanding outwards from a pre-specified prototypical motor program through the addition of noise. BID45 expands upon hindsight replay and the all-goal update strategy proposed by BID19, generalizing the latter to non-tabular environments and exploring related strategies for skill discovery, unsupervised pre-training and auxiliary tasks. BID24 propose a hierarchical Q-learning system which employs hindsight replay both conventionally in the lower-level controller and at higher levels in the hierarchy. BID31 also employ a generalized goal relabeling scheme whereby the policy is trained based on a trajectory's achievement not just of its own terminal observation, but of a variety of retrospectively considered possible goals.

We evaluate, both qualitatively and quantitatively, the ability of DISCERN to achieve visually-specified goals in three diverse domains -- the Arcade Learning Environment BID5, continuous control tasks in the DeepMind Control Suite BID42, and DeepMind Lab, a 3D first-person environment BID4. Experimental details including architecture details, details of distributed training, and hyperparameters can be found in the Appendix. We compared DISCERN to several baseline methods for learning goal-conditioned policies:

Conditioned Autoencoder (AE): In order to specifically interrogate the role of the discriminative reward learning criterion, we replace the discriminative criterion for embedding learning with an L 2 reconstruction loss on h t; that is, in addition to ξ φ (·), we learn an inverse mapping ξ −1 φ (·) with a separate set of parameters, and train both with the criterion ‖h t − ξ −1 φ (ξ φ (h t))‖².

Conditioned WGAN Discriminator: We compare to an adversarial reward on the domains considered according to the protocol of BID11, who successfully used a WGAN discriminator as a reward for training agents to perform inverse graphics tasks. The discriminator takes two pairs of images: a real pair of goal images (s g, s g) and a fake pair consisting of the terminal state of the agent and the goal frame (s t, s g). The output of the discriminator is used as the reward function for the policy. Unlike our DISCERN implementation and the conditioned autoencoder baseline, we train the WGAN discriminator as a separate convolutional network directly from pixels, as in previous work.

Pixel distance reward (L2): Finally, we directly compare to a reward based on L 2 distance in pixel space, equal to exp(−‖s t − s g‖ 2 /σ pixel), where σ pixel is a hyperparameter which we tuned on a per-environment basis.

All the baselines use the same goal-conditioned policy architecture as DISCERN. The baselines also used hindsight experience replay in the same way as DISCERN. They can therefore be seen as ablations of DISCERN's goal-achievement reward learning mechanism.
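The pixel-distance baseline reward is a one-liner; here is a sketch assuming the frames are float torch tensors of identical shape.

```python
def l2_pixel_reward(s_t, s_g, sigma_pixel):
    # exp(-||s_t - s_g||_2 / sigma_pixel), computed over all pixels
    dist = ((s_t - s_g) ** 2).sum().sqrt()
    return (-dist / sigma_pixel).exp()
```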
The suite of 57 Atari games provided by the Arcade Learning Environment is a widely used benchmark in the deep reinforcement learning literature. We compare DISCERN to other methods on the task of achieving visually specified goals on the games of Seaquest and Montezuma's Revenge. The relative simplicity of these domains makes it possible to handcraft a detector in order to localize the controllable aspects of the environment, namely the submarine in Seaquest and Panama Joe, the character controlled by the player, in Montezuma's Revenge. We evaluated the methods by running the learned goal policies on a fixed set of goals and measured the percentage of goals each was able to reach successfully. We evaluated both DISCERN and the baselines with two different goal buffer substitution strategies, uniform and diverse, which are described in the Appendix. A goal was deemed to be successfully achieved if the position of the avatar in the last frame was within 10% of the playable area of the position of the avatar in the goal, for each controllable dimension. The controllable dimensions in Atari were considered to be the x- and y-coordinates of the avatar. The results are displayed in FIG0. DISCERN learned to achieve a large fraction of goals in both Seaquest and Montezuma's Revenge, while none of the baselines learned to reliably achieve goals in either game. We hypothesize that the baselines failed to learn to control the avatars because their objectives are too closely tied to visual similarity. FIG0 shows examples of goal achievement on Seaquest and Montezuma's Revenge. In Seaquest, DISCERN learned to match the position of the submarine in the goal image while ignoring the position of the fish, since the fish are not directly controllable. We have provided videos of the goal-conditioned policies learned by DISCERN on Seaquest and Montezuma's Revenge at the following anonymous URL https://sites.google.com/view/discern-anonymous/home.

The DeepMind Control Suite BID42 is a suite of continuous control tasks built on the MuJoCo physics engine BID44. While most frequently used to evaluate agents which receive the underlying state variables as observations, we train our agents on pixel renderings of the scene using the default environment-specified camera, and do not directly observe the state variables. Agents acting greedily with respect to a state-action value function require the ability to easily maximize Q over the candidate actions. For ease of implementation, as well as comparison to other considered environments, we discretize the space of continuous actions to no more than 11 unique actions per environment (see Appendix A4.1).

The availability of an underlying representation of the physical state, while not used by the learner, provides a useful basis for comparison of achieved states to goals. We mask out state variables relating to entities in the scene not under the control of the agent; for example, the position of the target in the reacher or manipulator domains. DISCERN is compared to the baselines on a fixed set of 100 goals with 20 trials for each goal. The goals are generated by acting randomly for 25 environment steps after initialization.

[Figure 2: Average achieved frames for point mass (task easy), reacher (task hard), manipulator (task bring ball), pendulum (task swingup), finger (task spin) and ball in cup (task catch) environments. The goal is shown in the top row and the achieved frame is shown in the bottom row.]

In the case of cartpole, we draw the goals from a random policy acting in the environment set to the balance task, where the pole is initialized upwards, in order to generate a more diverse set of goals against which to measure. FIG1 compares learning progress of 5 independent seeds for the "uniform" goal replacement strategy (see Appendix A5 for results with "diverse" goal replacement) for 6 domains. We adopt the same definition of achievement as in Section 6.1. Figure 2 summarizes averaged goal achievement frames on these domains, except for the cartpole domain, for policies learned by DISCERN. Performance on cartpole is discussed in more detail in FIG4 of the Appendix.

The results show that in aggregate, DISCERN outperforms baselines in terms of goal achievement on several, but not all, of the considered Control Suite domains. In order to obtain a more nuanced understanding of DISCERN's behaviour when compared with the baselines, we also examined achievement in terms of the individual dimensions of the controllable state. Figure 4 shows goal achievement separately for each dimension of the underlying state on four domains.

[Figure 4: Per-dimension quantitative evaluation of goal achievement on continuous control domains using the "uniform" goal substitution scheme (Appendix A3). Each subplot corresponds to a domain, with each group of colored rows representing a method. Each individual row represents a dimension of the controllable state (such as a joint angle). The color of each cell indicates the fraction of goal states for which the method was able to match the ground truth value for that dimension to within 10% of the possible range. The position along the x-axis indicates the point in training in millions of frames. For example, on the reacher domain DISCERN learns to match both dimensions of the controllable state, but on the cartpole domain it learns to match the first dimension (cart position) but not the second dimension (pole angle).]

The per-dimension results show that on difficult goal-achievement tasks such as those posed in cartpole (where most proposed goal states are unstable due to the effect of gravity) and finger (where a free-spinning piece is only indirectly controllable), DISCERN learns to reliably match the major dimensions of controllability, such as the cart position and finger pose, while ignoring the other dimensions, whereas none of the baselines learned to reliably match any of the controllable state dimensions on the difficult tasks cartpole and finger. We omitted the manipulator domain from these figures as none of the methods under consideration achieved non-negligible goal achievement performance on this domain; however, a video showing the policy learned by DISCERN on this domain can be found at https://sites.google.com/view/discern-anonymous/home. The policy learned on the manipulator domain shows that DISCERN was able to discover several major dimensions of controllability even on such a challenging task, as further evidenced by the per-dimension analysis on the manipulator domain in Figure 8 in the Appendix. DeepMind Lab BID4 is a platform for 3D first-person reinforcement learning environments. We trained DISCERN on the watermaze level and found that it learned to approximately achieve the same wall and horizon position as in the goal image.
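The achievement criterion used throughout these evaluations can be written down directly; the representation of states as per-dimension values and ranges is an assumption of the sketch.

```python
def goal_achieved(final_dims, goal_dims, ranges, tol=0.10):
    # Achieved if every controllable dimension of the final state is within
    # 10% of the possible range of its value in the goal state.
    return all(abs(f - g) <= tol * r
               for f, g, r in zip(final_dims, goal_dims, ranges))
```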
While the agent did not learn to achieve the position and viewpoint shown in a goal image as one may have expected, it is encouraging that our approach learns a reasonable space of goals on a first-person 3D domain in addition to domains with third-person viewpoints like Atari and the DM Control Suite.

We have presented a system that can learn to achieve goals, specified in the form of observations from the environment, in a purely unsupervised fashion, i.e. without any extrinsic rewards or expert demonstrations. Integral to this system is a powerful and principled discriminative reward learning objective, which we have demonstrated can recover the dominant underlying degrees of controllability in a variety of visual domains.

In this work, we have adopted a fixed episode length of T in the interest of simplicity and computational efficiency. This implicitly assumes not only that all sampled goals are approximately achievable in T steps, but that the policy need not be concerned with finishing in less than the allotted number of steps. Both of these limitations could be addressed by considering schemes for early termination based on the embedding, though care must be taken not to deleteriously impact training by terminating episodes too early based on a poorly trained reward embedding. Relatedly, our goal selection strategy is agnostic to both the state of the environment at the commencement of the goal episode and the current skill profile of the policy, utilizing at most the content of the goal itself to drive the evolution of the goal buffer G. We view it as highly encouraging that learning proceeds using such a naive goal selection strategy; however, more sophisticated strategies, such as tracking and sampling from the frontier of currently achievable goals BID16, may yield substantial improvements.

DISCERN's ability to automatically discover controllable aspects of the observation space is a highly desirable property in the pursuit of robust low-level control. A natural next step is the incorporation of DISCERN into a deep hierarchical reinforcement learning setup BID46 BID24 BID30 where a meta-policy for proposing goals is learned after or in tandem with a low-level controller, i.e. by optimizing an extrinsic reward signal.

We employ a distributed reinforcement learning architecture inspired by the IMPALA reinforcement learning architecture BID8, with a centralized GPU learner batching parameter updates on experience collected by a large number of CPU-based parallel actors. While BID8 learns a stochastic policy through the use of an actor-critic architecture, we instead learn a goal-conditioned state-action value function with Q-learning. Each actor acts ε-greedily with respect to a local copy of the Q network, and sends observations s t, actions a t, rewards r t and discounts γ t for a trajectory to the learner. Following BID18, we use a different value of ε for each actor, as this has been shown to improve exploration. The learner batches re-evaluation of the convolutional network and LSTM according to the action trajectories supplied and performs parameter updates, periodically broadcasting updated model parameters to the actors. As Q-learning is an off-policy algorithm, the experience traces sent to the learner can be used in the usual n-step Q-learning update without the need for an off-policy correction as in BID8.
We also maintain actor-local replay buffers of previous actor trajectories and use them to perform both standard experience replay BID25 and our variant of hindsight experience replay BID0. Our network architectures closely resemble those in BID8, with policy and value heads replaced with a Q-function. We apply the same convolutional network to both s t and s g and concatenate the final layer outputs. Note that the convolutional network outputs for s g need only be computed once per episode. We include a periodic representation (sin(2πt/T), cos(2πt/T)) of the current time step, with period equal to the goal achievement period T, as an extra input to the network. The periodic representation is processed by a single hidden layer of rectified linear units and is concatenated with the visual representations fed to the LSTM. While not strictly necessary, we find that this allows the agent to become better at achieving goal states which may be unmaintainable due to their instability in the environment dynamics.

The output of the LSTM is the input to a dueling action-value output network BID48. In all of our experiments, both branches of the dueling network are linear mappings. That is, given LSTM outputs ψ t, we compute the action values for the current time step t as

Q(a | ψ t) = w_v·ψ t + w_a·ψ t − (1/|A|) Σ_{a′∈A} w_{a′}·ψ t,

where w_v parameterizes the linear value branch and w_a the linear advantage branch for action a.

We experimented with two strategies for updating the goal buffer. In the first strategy, which we call uniform, the current observation replaces a uniformly selected entry in the goal buffer with probability p replace. The second strategy, which we refer to as diverse goal sampling, attempts to maintain a goal buffer that more closely approximates the uniform distribution over all observations. In the diverse goal strategy, we consider the current observation for addition to the goal buffer with probability p replace at each step during acting. If the current observation s is considered for addition to the goal buffer, then we select a random removal candidate s r by sampling uniformly from the goal buffer and replace it with s if s r is closer to the rest of the goal buffer than s. If s is closer to the rest of the goal buffer than s r, then we still replace s r with s with probability p add−non−diverse. We used L 2 distance in pixel space for the diverse sampling strategy and found it to greatly increase the coverage of states in the goal buffer, especially early during training. This bears some relationship to Determinantal Point Processes BID20, and goal-selection strategies with a more explicit theoretical foundation are a promising future direction.

The following hyper-parameters were used in all of the experiments described in Section 6. All weight matrices are initialized using a standard truncated normal initializer, with the standard deviation inversely proportional to the square root of the fan-in. We maintain a goal buffer of size 1024 and use p replace = 10 −3. We also use p add−non−diverse = 10 −3. For the teacher, we choose ξ φ (·) to be an L 2 -normalized single layer of 32 tanh units, trained in all experiments with 4 decoys (and thus, according to our heuristic, β equal to 5). For hindsight experience replay, a hindsight goal is substituted 25% of the time. These goals are chosen uniformly at random from the last 3 frames of the trajectory. Trajectories were set to be 50 steps long for Atari and DeepMind Lab and 100 for the DeepMind Control Suite.
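The dueling head reconstructed above amounts to two linear branches with the mean advantage subtracted; a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, lstm_dim, n_actions):
        super().__init__()
        self.value = nn.Linear(lstm_dim, 1)          # linear value branch
        self.advantage = nn.Linear(lstm_dim, n_actions)

    def forward(self, psi_t):
        v = self.value(psi_t)                        # [B, 1]
        a = self.advantage(psi_t)                    # [B, |A|]
        return v + a - a.mean(dim=-1, keepdim=True)  # Q-values per action
```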
It is important to note that the environment was not reset after each trajectory; rather, each new trajectory begins where the previous one ended. We train the agent and teacher jointly with RMSProp BID43 with a learning rate of 10 −4. We follow the preprocessing protocol of BID28, resizing to 84 × 84 pixels and scaling 8-bit pixel values to lie in the range [0, 1]. While originally designed for Atari, we apply this preprocessing pipeline across all environments used in this paper. In the point mass domain we use a control step equal to 5 times the task-specified default, i.e. the agent acts on every fifth environment step BID28. In all other Control Suite domains, we use the default. We use the "easy" version of the task where actuator semantics are fixed across environment episodes.

Discrete action spaces admit function approximators which simultaneously compute the action values for all possible actions, as popularized in BID28. The action with maximal Q-value can thus be identified in time proportional to the cardinality of the action space. An enumeration of possible actions is no longer possible in the continuous setting. While approaches exist to enable continuous maximization in closed form BID14, they come at the cost of greatly restricting the functional form of Q. For ease of implementation, as well as comparison to other considered environments, we instead discretize the space of continuous actions. For all Control Suite environments considered except manipulator, we discretize an A-dimensional continuous action space into 3^A discrete actions, consisting of the Cartesian product over action dimensions with values in {−1, 0, 1}. In the case of manipulator, we adopt a "diagonal" discretization where each action consists of setting one actuator to ±1 and all other actuators to 0, with an additional action consisting of every actuator being set to 0. This is a reasonable choice for manipulator because any position can be achieved by a concatenation of actuator actions, which may not be true of more complex Control Suite environments such as humanoid, where the agent's body is subject to gravity and successful trajectories may require multi-joint actuation in a single control time step. The subset of the Control Suite considered in this work was chosen primarily such that the discretized action space would be of a reasonable size. We leave extensions to continuous domains to future work.

We ran two additional baselines on Seaquest and Montezuma's Revenge, ablating our use of hindsight experience replay in opposite ways. One involved training the goal-conditioned policy only in hindsight, without any learned goal achievement reward, i.e. p HER = 1. This approach achieved 12% of goals on Seaquest and 11.4% of goals on Montezuma's Revenge, making it comparable to a uniform random policy. This underscores the importance of learning a goal achievement reward. The second baseline consisted of DISCERN learning a goal achievement reward without hindsight experience replay, i.e. p HER = 0. This also performed poorly, achieving 11.4% of goals on Seaquest and 8% of goals on Montezuma's Revenge. Taken together, these preliminary results suggest that the combination of hindsight experience replay and a learned goal achievement reward is important.

For the sake of completeness, FIG3 reports goal achievement curves on Control Suite domains using the "diverse" goal selection scheme. See Figure 4 for a description of the visualization.
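Both discretization schemes described above are easy to enumerate; a sketch:

```python
# The full Cartesian-product grid yields 3**A actions; the "diagonal"
# scheme used for manipulator yields 2 * A + 1 actions.
from itertools import product

def grid_actions(A):
    return list(product((-1.0, 0.0, 1.0), repeat=A))     # 3**A tuples

def diagonal_actions(A):
    actions = [tuple(0.0 for _ in range(A))]             # all-zero action
    for i in range(A):
        for v in (-1.0, 1.0):
            a = [0.0] * A
            a[i] = v                                     # one actuator at +/-1
            actions.append(tuple(a))
    return actions                                       # 2 * A + 1 tuples
```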
DISCERN learns to reliably control more dimensions of the underlying state than any of the baselines.
Unsupervised reinforcement learning method for learning a policy to robustly achieve perceptually specified goals.
1,454
scitldr
State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers. The size of modern neural sequence models (; ;) can amount to billions of parameters . For example, the winning entry of the WMT'19 news machine translation task in English-German used an ensemble totaling two billion parameters. While large models are required to do better on hard examples, small models are likely to perform as well on easy ones, e.g., the aforementioned ensemble is probably not required to translate a short phrase such as "Thank you". However, current models apply the same amount of computation regardless of whether the input is easy or hard. In this paper, we propose Transformers which adapt the number of layers to each input in order to achieve a good speed-accuracy trade off at inference time. We extend Graves (2016; ACT) who introduced dynamic computation to recurrent neural networks in several ways: we apply different layers at each stage, we investigate a range of designs and training targets for the halting module and we explicitly supervise through simple oracles to achieve good performance on large-scale tasks. Universal Transformers (UT) rely on ACT for dynamic computation and repeatedly apply the same layer . Our work considers a variety of mechanisms to estimate the network depth and applies a different layer at each step. fix the number of steps for large-scale machine translation whereas we vary the number of steps to demonstrate substantial improvements in speed at no loss in accuracy. UT uses a layer which contains as many weights as an entire standard Transformer and this layer is applied several times which impacts speed. Our approach does not increase the size of individual layers. We also extend the resource efficient object classification work of to structured prediction where dynamic computation decisions impact future computation. Related work from computer vision includes; and who explored the idea of dynamic routing either by exiting early or by skipping layers. We encode the input sequence using a standard Transformer encoder to generate the output sequence with a varying amount of computation in the decoder network. Dynamic computation poses a challenge for self-attention because omitted layers in prior time-steps may be required in the future. We experiment with two approaches to address this and show that a simple approach works well (§2). Next, we investigate different mechanisms to control the amount of computation in the decoder network, either for the entire sequence or on a per-token basis. This includes multinomial and binomial classifiers supervised by the model likelihood or whether the argmax is already correct as well as simply thresholding the model score (§3). 
[Figure 1: Training regimes for decoder networks able to emit outputs at any layer. Aligned training optimizes all output classifiers C n simultaneously, assuming all previous hidden states for the current layer are available. Mixed training samples M paths of random exits at which the model is assumed to have exited; missing previous hidden states are copied from below.]

Experiments on IWSLT14 German-English translation as well as WMT'14 English-French translation show that we can match the performance of well tuned baseline models at up to 76% less computation (§4).

We first present a model that can make predictions at different layers. This is known as anytime prediction for computer vision models and we extend it to structured prediction. We base our approach on the Transformer sequence-to-sequence model. Both encoder and decoder networks contain N stacked blocks, where each has several sub-blocks surrounded by residual skip-connections. The first sub-block is a multi-head dot-product self-attention and the second a position-wise fully connected feed-forward network. For the decoder, there is an additional sub-block after the self-attention to add source context via another multi-head attention.

Given a pair of source-target sequences (x, y), x is processed with the encoder to give representations s = (s 1, . . ., s |x|). Next, the decoder generates y step-by-step. For every new token y t input to the decoder at time t, the N decoder blocks process it to yield hidden states (h 1 t, . . ., h N t):

h n t = block n (h n−1 t, s),  with h 0 t = embed(y t),

where block n is the mapping associated with the n th block and embed is a lookup table. The output distribution for predicting the next token is computed by feeding the activations of the last decoder layer h N t into a softmax normalized output classifier W:

p(y t+1 | h N t) = softmax(W h N t).

Standard Transformers have a single output classifier attached to the top of the decoder network. However, for dynamic computation we need to be able to make predictions at different stages of the network. To achieve this, we attach output classifiers C n parameterized by W n to the output h n t of each of the N decoder blocks: ∀n, p(y t+1 |h n t) = softmax(W n h n t). The classifiers can be parameterized independently or we can share the weights across the N blocks.

Dynamic computation enables the model to use any of the N exit classifiers instead of just the final one. Some of our models can choose a different output classifier at each time-step, which results in an exponential number of possible output classifier combinations in the sequence length. We consider two possible ways to train the decoder network (Figure 1). Aligned training optimizes all classifiers simultaneously and assumes all previous hidden states required by the self-attention are available. However, at test time this is often not the case when we choose a different exit for every token, which leads to misaligned states. Instead, mixed training samples several sequences of exits for a given sentence and exposes the model to hidden states from different layers.

Generally, for a given output sequence y, we have a sequence of chosen exits (n 1, . . ., n |y|) and we denote the block at which we exit at time t as n t. Aligned training assumes all hidden states h n−1 1, . . ., h n−1 t are available in order to compute self-attention, and it optimizes N loss terms, one for each exit (Figure 1a):

L n (x, y) = − Σ_t log p(y t+1 | h n t),  L dec (x, y) = Σ_{n=1}^{N} ω n L n (x, y).

The compound loss L dec (x, y) is a weighted average of the N terms with respect to weights (ω 1, . . ., ω N). We found that uniform weights achieve better BLEU compared to other weighing schemes (c.f. Appendix A).
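A sketch of the per-block classifiers and the aligned compound loss reconstructed above; tensor shapes and the share flag are assumptions of the sketch, not the paper's implementation.

```python
# One linear output classifier C_n per decoder block, optionally sharing
# weights across blocks, plus the weighted sum of per-exit cross-entropies.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitClassifiers(nn.Module):
    def __init__(self, n_blocks, d_model, vocab_size, share=False):
        super().__init__()
        if share:
            proj = nn.Linear(d_model, vocab_size)
            self.heads = nn.ModuleList([proj] * n_blocks)   # shared weights
        else:
            self.heads = nn.ModuleList(
                nn.Linear(d_model, vocab_size) for _ in range(n_blocks))

    def forward(self, hidden_states):
        # hidden_states: list of N tensors [B, T, d_model], one per block
        return [head(h) for head, h in zip(self.heads, hidden_states)]

def aligned_loss(all_logits, targets, weights):
    # all_logits: list of N tensors [B, T, V]; targets: [B, T]
    losses = [F.cross_entropy(logits.transpose(1, 2), targets)
              for logits in all_logits]
    return sum(w * l for w, l in zip(weights, losses))
```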
At inference time, not all time-steps will have hidden states for the current layer since the model exited early. In this case, we simply copy the last computed state to all upper layers, similar to mixed training (§2.2.2). However, we do apply layer-specific key and value projections to the copied state.

Aligned training assumes that all hidden states of the previous time-steps are available, but this assumption is unrealistic since an early exit may have been chosen previously. This creates a mismatch between training and testing. Mixed training reduces the mismatch by training the model to use hidden states from different blocks of previous time-steps for self-attention. We sample M different exit sequences (n 1, . . ., n |y|), one per sample m = 1, . . ., M, and for each one we evaluate the following loss:

L dec (x, y) = (1/M) Σ_{m=1}^{M} ( − Σ_t log p(y t+1 | h n t t) ),

where n t denotes the exit sampled for time-step t in sample m. When n t < N, we copy the last evaluated hidden state h n t to the subsequent layers so that the self-attention of future time steps can function as usual (see Figure 1b).

We present a variety of mechanisms to predict the decoder block at which the model will stop and output the next token, or when it should exit to achieve a good speed-accuracy trade-off. We consider two approaches: sequence-specific depth decodes all output tokens using the same block (§3.1), while token-specific depth determines a separate exit for each individual token (§3.2). We model the distribution of exiting at time-step t with a parametric distribution q t, where q t (n) is the probability of computing block 1, . . ., block n and then emitting a prediction with C n. The parameters of q t are optimized to match an oracle distribution q * t with cross-entropy:

L exit (x, y) = − Σ_t Σ_{n=1}^{N} q * t (n) log q t (n).

The exit loss L exit is back-propagated to the encoder-decoder parameters. We simultaneously optimize the decoding loss L dec and the exit loss L exit, balanced by a hyper-parameter α to ensure that the model maintains good generation accuracy. The final loss takes the form:

L(x, y) = L dec (x, y) + α L exit (x, y).

In the following we describe for each approach how the exit distribution q t is modeled (illustrated in Figure 2) and how the oracle distribution q * t is inferred.

[Figure 2: Variants of the adaptive depth prediction classifiers. Sequence-specific depth uses a multinomial classifier to choose an exit for the entire output sequence based on the encoder output s (2a). It then outputs a token at this depth with classifier C n. The token-specific multinomial classifier determines the exit after the first block and proceeds up to the predicted depth before outputting the next token (2b). The token geometric-like classifier (2c) makes a binary decision after every block to dictate whether to continue (C) to the next block or to stop (S) and emit an output distribution.]

For sequence-specific depth, the exit distribution q and the oracle distribution q * are independent of the time-step, so we drop the subscript t. We condition the exit on the source sequence by feeding the average s̄ of the encoder outputs to a multinomial classifier:

q(n | x) = softmax(W h s̄ + b h),

where W h and b h are the weights and biases of the halting mechanism. We consider two oracles to determine which of the N blocks should be chosen. The first is based on the sequence likelihood and the second looks at an aggregate of the correctly predicted tokens at each block.

Likelihood-based: This oracle is based on the likelihood of the entire sequence after each block, and we optimize it with the Dirac delta centered around the exit with the highest sequence likelihood.
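The combined objective reconstructed above is a decoding loss plus α times the exit cross-entropy; a minimal sketch, assuming per-time-step exit distributions as [T, N] tensors:

```python
import torch

def exit_loss(q_log_probs, q_star):
    # Cross-entropy between the oracle q* and the predicted exit
    # distribution q, summed over time-steps and blocks.
    return -(q_star * q_log_probs).sum()

def total_loss(dec_loss, q_log_probs, q_star, alpha):
    return dec_loss + alpha * exit_loss(q_log_probs, q_star)
```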
We add a regularization term to encourage lower exits that achieve good likelihood:

q * = δ(n *),  n * = arg max_n ( log p(y | x, n) − λ n ),

where log p(y | x, n) is the sequence log-likelihood under exit classifier C n and λ controls the speed-accuracy trade-off.

Correctness-based: Likelihood ignores whether the model already assigns the highest score to the correct target. Instead, this oracle chooses the lowest block that assigns the largest score to the correct prediction. For each block, we count the number of correctly predicted tokens over the sequence and choose the block with the most correct tokens. A regularization term controls the trade-off between speed and accuracy. Oracles based on test metrics such as BLEU are feasible but expensive to compute since we would need to decode every training sentence N times. We leave this for future work.

The token-specific approach can choose a different exit at every time-step. We consider two options for the exit distribution q t at time-step t: a multinomial with a classifier conditioned on the first decoder hidden state h 1 t, and a geometric-like distribution built from per-block binary halting decisions.

Multinomial q t: q t (n | x, y <t) = softmax(W h h 1 t + b h). The most probable exit arg max q t (n|x, y <t) is selected at inference.

Geometric-like q t: q t (n | x, y <t) = χ n t ∏_{n′<n} (1 − χ n′ t), with halting signal χ n t = sigmoid(w h·h n t + b h), where d is the dimension of the decoder states, W h ∈ R N ×d and w h ∈ R d are the weights of the halting mechanisms, and b h their biases. During inference the decoder exits when the halting signal χ n t exceeds a threshold τ n, which we tune on the valid set to achieve a better accuracy-speed trade-off. If the thresholds (τ n) 1≤n<N have not been exceeded, then we default to exiting at block N.

The two classifiers are trained to minimize the cross-entropy with respect to either one of the following oracle distributions:

Likelihood-based: At each time-step t, we choose the block whose exit classifier has the highest likelihood, plus a regularization term weighted by λ to encourage lower exits. This oracle ignores the impact of the current decision on future time-steps, and we therefore consider smoothing the likelihoods with an RBF kernel κ(t, t′) = exp(−|t − t′|²/σ), where we control the size of the surrounding context with the kernel width σ. We refer to this oracle as LL(σ, λ), including the case where we only look at the likelihood of the current token with σ → 0.

Correctness-based: Similar to the likelihood-based oracle, we can look at the correctness of the prediction at time-step t as well as surrounding positions, and define the target q * t as the Dirac delta centered on the lowest block that is correct within the kernel-weighted context; we refer to this oracle as C(σ, λ).

Confidence thresholding: Finally, we consider thresholding the model predictions (§2), i.e., exit when the maximum score of the current output classifier p(y t+1 |h n t) exceeds a hyper-parameter threshold τ n. This does not require training, and the thresholds τ = (τ 1, . . ., τ N −1) are simply tuned on the valid set to maximize BLEU. Concretely, for 10k iterations, we sample a sequence of thresholds τ ∼ U(0, 1)^{N −1}, decode the valid set with the sampled thresholds and then evaluate the BLEU score and computational cost achieved with this choice of τ. After 10k evaluations we pick the best performing thresholds, that is, the τ with the highest BLEU in each cost segment.

We evaluate on several benchmarks and measure tokenized BLEU:

IWSLT'14 German to English (De-En). We follow the standard setup and train on 160K sentence pairs. We use N = 6 blocks and a feed-forward network (ffn) of intermediate dimension

[Table 1: BLEU for uniformly sampled exits and fixed exits n = 1, . . ., 6, with averages.]

WMT'14 English to French (En-Fr).
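Inference with the geometric-like classifier is a simple halt-or-continue loop; the sketch below is schematic for a single token (real decoder blocks also attend over the source and past states), and the blocks, heads, and halting module names are assumptions.

```python
import torch

def adaptive_decode_step(blocks, heads, halting, h, thresholds):
    # After each block n, compute chi = sigmoid(halting score) and exit
    # once it exceeds the tuned threshold tau_n; default to the last block.
    for n, block in enumerate(blocks):
        h = block(h)
        if n < len(blocks) - 1:
            chi = torch.sigmoid(halting[n](h))
            if chi.item() > thresholds[n]:
                return heads[n](h), n          # early exit at block n
    return heads[-1](h), len(blocks) - 1       # exit at block N
```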
We use a Transformer big architecture and tie the embeddings of the encoder, the decoder and the output classifiers ((W_n)_{1≤n≤6}; §2.1). We average the last ten checkpoints and use a beam of width 4. Models are implemented in fairseq and are trained with Adam. We train for 50k updates on 128 GPUs with a batch size of 460k tokens for WMT'14 En-Fr and on 2 GPUs with 8k tokens per batch for IWSLT'14 De-En. To stabilize training, we re-normalize the gradients if the norm exceeds g_clip = 3. For models with adaptive exits, we first train without exit prediction (α = 0) using the aligned mode (cf. §2.2.1) for 50k updates and then continue training with α > 0 until convergence. The exit prediction classifiers are parameterized by a single linear layer with the same input dimension as the embedding dimension, e.g., 1024 for a big Transformer; the output dimension is N for a multinomial classifier or one for geometric-like. We exit when χ_t^n > 0.5 for geometric-like classifiers. We first compare the two training regimes for our model (§2.2). Aligned training performs self-attention on aligned states (§2.2.1) and mixed training exposes self-attention to hidden states from different blocks (§2.2.2). We compare the two training modes when choosing either a uniformly sampled exit or a fixed exit n = 1, . . ., 6 at inference time for every time-step. The sampled exit experiment tests the robustness to mixed hidden states and the fixed exit setup simulates an ideal setting where all previous states are available. As baselines we show six separate standard Transformers with N ∈ [1..6] decoder blocks. All models are trained with an equal number of updates, and mixed training with M = 6 paths is most comparable to aligned training since the number of losses per sample is identical. Table 1 shows that aligned training outperforms mixed training both for fixed exits as well as for randomly sampled exits. The latter is surprising since aligned training never exposes the self-attention mechanism to hidden states from other blocks. We suspect that this is due to the residual connections which copy features from lower blocks to subsequent layers and which are ubiquitous in Transformer models (§2). Aligned training also performs very competitively to the individual baseline models. Aligned training is conceptually simple and fast. We can process a training example with N exits in a single forward/backward pass while M passes are needed for mixed training. In the remainder of the paper, we use the aligned mode to train our models. Appendix A reports experiments with weighting the various output classifiers differently, but we found that a uniform weighting scheme worked well. On our largest setup, WMT'14 English-French, the training time of an aligned model with six output classifiers increases only marginally, by about 1%, compared to a baseline with a single output classifier, keeping everything else equal. Next, we train models with aligned states and compare adaptive depth classifiers in terms of BLEU as well as computational effort. We measure the latter as the average exit per output token (AE). As baselines we use again six separate standard Transformers with N ∈ [1..6] blocks and a single output classifier. We also measure the performance of the aligned-mode trained model for fixed exits n ∈ [1..6].
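The two-phase schedule and the gradient re-normalization can be sketched as follows (an assumed training loop, not the authors' fairseq code; the value of alpha after warm-up is an assumption):

import torch

def train_step(model, batch, optimizer, step, warmup_updates=50_000, alpha=0.5):
    dec_loss, exit_loss = model(batch)       # aligned decoding loss over all N exits
    use_exit = step >= warmup_updates        # phase 1 trains with alpha = 0
    loss = dec_loss + (alpha * exit_loss if use_exit else 0.0)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)  # g_clip = 3
    optimizer.step()
    return loss.item()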
For the adaptive depth token-specific models (Tok), we train four combinations: likelihood-based oracle (LL) + geometric-like, likelihood-based oracle (LL) + multinomial, correctness-based oracle (C) + geometric-like, and correctness-based oracle (C) + multinomial. Sequence-specific models (Seq) are trained with the correctness oracle (C) and the likelihood oracle (LL) with different values for the regularization weight λ. All parameters are tuned on the valid set and we report on the test set for a range of average exits. Figure 3 shows that the aligned model (blue line) can match the accuracy of a standard 6-block Transformer (black line) at half the number of layers (n = 3) by always exiting at the third block. The aligned model outperforms the baseline for n = 2, . . ., 6. For token-specific halting mechanisms (Figure 3a), the geometric-like classifiers achieve a better speed-accuracy trade-off than the multinomial classifiers (filled vs. empty triangles). For geometric-like classifiers, the correctness oracle outperforms the likelihood oracle (Tok-C geometric-like vs. Tok-LL geometric-like) but the trend is less clear for multinomial classifiers. At the sequence level, likelihood is the better oracle (Figure 3b). The rightmost Tok-C geometric-like point (σ = 0, λ = 0.1) achieves 34.73 BLEU at AE = 1.42, which corresponds to similar accuracy as the N = 6 baseline at 76% fewer decoding blocks (Figure 3). The best accuracy of the aligned model is 34.95 BLEU at exit 5 and the best comparable Tok-C geometric-like configuration achieves 34.99 BLEU at AE = 1.97, or 61% fewer decoding blocks. When fixing the budget to two decoder blocks, Tok-C geometric-like with AE = 1.97 achieves BLEU 35, a 0.64 BLEU improvement over the baseline (N = 2) and aligned, which both achieve BLEU 34.35. Confidence thresholding (Figure 3c) performs very well but cannot outperform Tok-C geometric-like. In this section, we look at the effect of the two main hyperparameters on IWSLT'14 De-En: λ, the regularization scale, and σ, the RBF kernel width used to smooth the scores. We train Tok-LL geometric-like models and evaluate them with their default thresholds (exit if χ_t^n > 0.5). Figure 4a shows that higher values of λ lead to lower exits. Figure 4b shows the effect of σ for two values of λ. In both curves, we see that wider kernels favor higher exits. Finally, we take the best performing models from the IWSLT benchmark and test them on the large WMT'14 English-French benchmark. Results on the test set (Figure 5a) show that adaptive depth still yields improvements, but that they are diminished in this very large-scale setup. Confidence thresholding works very well and sequence-specific depth approaches improve only marginally over the baseline. Tok-LL geometric-like can match the best baseline of BLEU 43.4 (N = 6) by using only AE = 2.96, which corresponds to less than half the decoder blocks; the best aligned result of BLEU 43.6 can be outmatched at (AE = 3.51, BLEU = 43.71). In this setup, Tok-LL geometric-like slightly outperforms the Tok-C counterpart. Confidence thresholding matches the accuracy of the N = 6 baseline with AE = 2.5, or 59% fewer decoding blocks. However, confidence thresholding requires computing the output classifier at each block to determine whether to halt or continue. This is a large overhead since output classifiers predict 44k types for this benchmark (§4.1). To better account for this, we measure the average number of FLOPs per output token (details in Appendix B).
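The RBF smoothing of the per-block oracle scores can be sketched as follows (a hedged reading of the LL(σ, λ) oracle; array shapes are assumptions): each block's log-likelihood is replaced by a kernel-weighted combination over neighboring blocks, the depth penalty λ·n still favors lower exits, and σ → 0 recovers the unsmoothed oracle.

import numpy as np

def smoothed_ll_oracle(logliks, sigma=1.0, lam=0.05):
    # logliks: [N] log-likelihood of the gold token after each block
    N = len(logliks)
    n = np.arange(N)
    K = np.exp(-((n[:, None] - n[None, :]) ** 2) / (2.0 * sigma ** 2))  # RBF kernel
    smoothed = K @ np.asarray(logliks, dtype=float)
    return int(np.argmax(smoothed - lam * (n + 1)))  # 0-based oracle exit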
Figure 5b shows that the Tok-LL geometric-like approach provides a better trade-off when the overhead of the output classifiers is considered. Figures 7 and 6 show outputs for examples of the IWSLT'14 De-En and the WMT'14 En-Fr test sets, respectively, together with the exit and model probability for each token. Less computation is used at the end of the sentence since periods and end-of-sentence markers (</s>) are easy to predict. The amount of computation increases when the model is less confident, e.g., in Figure 6a, predicting 'présent' (meaning 'present') is hard. A straightforward translation is 'était là' but the model chooses 'présent', which is also appropriate. In Figure 6b, the model uses more computation to predict the definite article 'les' since the source has omitted the article for 'passengers'. We extended anytime prediction to the structured prediction setting and introduced simple but effective methods to equip sequence models to make predictions at different points in the network. We compared a number of different mechanisms to predict the required network depth and find that a simple correctness-based geometric-like classifier obtains the best trade-off between speed and accuracy. Results show that the number of decoder layers can be reduced by more than three quarters at no loss in accuracy compared to a well tuned Transformer baseline. In this section we experiment with different weights for scaling the output classifier losses. Instead of uniform weighting, we bias towards specific output classifiers by assigning higher weights to their losses. Table 2 shows that weighting the classifiers equally provides good results. Table 2: Aligned training with different weights (ω_n) on IWSLT De-En. For each model we report BLEU on the dev set evaluated with a uniformly sampled exit n ∼ U([1..6]) for each token and a fixed exit n ∈ [1..6] throughout the sequence. The average corresponds to the average BLEU over the fixed exits. Gradient scaling: Adding intermediate supervision at different levels of the decoder results in richer gradients for lower blocks compared to upper blocks. This is because earlier layers affect more loss terms in the compound loss. To balance the gradients of each block in the decoder, we scale up the gradients of each loss term (−LL_n) when it is updating the parameters of its associated block (block n with parameters θ_n) and revert it back to its normal scale before back-propagating it to the previous blocks. Figure 8 and Algorithm 1 illustrate this gradient scaling procedure. The θ_n are updated with γ_n-amplified gradients from the block's own supervision and (N − n) gradients from the subsequent blocks. We choose γ_n = γ(N − n) to control the ratio γ:1 of the block's own supervision to the subsequent blocks' supervision. Table 3 shows that gradient scaling can benefit the lowest layer at the expense of higher layers. However, using no scaling generally works very well. Algorithm 1 Pseudo-code for gradient scaling (illustrated for a single step t): function SCALE_GRADIENT(Tensor x, scale γ): return γ · x + (1 − γ) · STOP_GRADIENT(x); end function, where STOP_GRADIENT is implemented in PyTorch with x.detach(). Table 3: Aligned training with different gradient scaling ratios γ:1 on IWSLT'14 De-En.
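A runnable PyTorch rendering of the scale_gradient identity from Algorithm 1 (a hedged reconstruction: the function leaves the forward value unchanged while multiplying the gradient flowing into x by gamma; the quick check at the end verifies both properties):

import torch

def scale_gradient(x, gamma):
    # forward: gamma*x + (1-gamma)*x == x ; backward: only the gamma*x branch
    # carries gradient, so dL/dx is scaled by gamma
    return gamma * x + (1.0 - gamma) * x.detach()

x = torch.tensor([2.0], requires_grad=True)
y = (scale_gradient(x, 0.25) * 3.0).sum()
y.backward()
print(y.item(), x.grad.item())  # 6.0 and 0.75 (= 0.25 * 3.0)

Wired as in Algorithm 1, each loss term −LL_n is computed through a γ_n-scaled view of block n's output so that θ_n receives amplified supervision, with the scaling inverted before the gradient continues to earlier blocks.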
For each model we report the BLEU4 score evaluated with a uniformly sampled exit n ∼ U([1..6]) for each token and a fixed exit n ∈ [1..6]. The average corresponds to the average BLEU4 over all fixed exits. This section details the computation of the FLOPS we report. The per-token FLOPS are for the decoder network only, since we use an encoder of the same size for all models. We break down the FLOPS of every operation in Algorithm 2 (blue font in the algorithmic statements). We omit non-linearities, normalizations and residual connections. The main operations we account for are dot-products and, by extension, matrix-vector products, since those represent the vast majority of FLOPS (we assume batch size one to simplify the calculation). Table 4: FLOPS of basic operations, key parameters and variables for the FLOPS estimation. With this breakdown, the total computational cost at time-step t of a decoder block that we actually go through, denoted with FC, is: where the cost of mapping the source's keys and values is incurred the first time the block is called (flagged with FirstCall). This occurs at t = 1 for the baseline model but it is input-dependent with depth-adaptive estimation and may never occur if all tokens exit early. If skipped, a block still has to compute the keys and values of its self-attention so the self-attention of future time-steps can function. We denote this cost with FS; with the two d × d key/value projections this gives FS = 4d². Confidence thresholding: FP(t, q(t)) = 2·q(t)·V·d. For a set of source sequences {x^(i)}_{i∈I} and generated hypotheses {y^(i)}_{i∈I}, the average FLOPS per token is: Baseline (N blocks): N·FC(x^(i), t) + 2·V·d. Adaptive depth: q(t)·FC(x^(i), t) + (N − q(t))·FS + FP(t, q(t)) + 2·V·d. In the case of confidence thresholding the final output prediction cost (2·V·d) is already accounted for in the exit prediction cost FP.
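The bookkeeping above can be sketched as a small estimator (hedged: the per-block cost FC depends on constants from Table 4, which did not survive extraction, so it is supplied by the caller; the FP term shown is the confidence-thresholding variant, where every visited block runs the size-V output classifier):

def avg_flops_per_token(exits, fc_cost, N, d, V, thresholding=True):
    # exits: chosen exit q(t) per output token (1-based)
    total = 0.0
    for t, q in enumerate(exits):
        fc = sum(fc_cost(t, n) for n in range(1, q + 1))   # blocks actually computed
        fs = (N - q) * 4 * d * d                           # skipped blocks: keys/values only
        fp = 2 * q * V * d if thresholding else 0.0        # exit-prediction overhead
        total += fc + fs + fp
        if not thresholding:
            total += 2 * V * d   # final output classifier (counted inside fp otherwise)
    return total / len(exits)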
Sequence model that dynamically adjusts the amount of computation for each input.
1,455
scitldr
Variational Auto-encoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point. Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g., van den Oord et al.); (2) the aggregate of the learned latent codes does not match the prior p(z). This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z). In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions. Our analysis builds on two observations: (1) the generative model is unidentifiable – there exist many generative models that explain the data equally well, each with different (and potentially unwanted) properties – and (2) bias in the VAE objective – the VAE objective may prefer generative models that explain the data poorly but have posteriors that are easy to approximate. We present a novel inference method, LiBI, mitigating the problems identified in our analysis. On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so. Background: A VAE is comprised of a generative model and an inference model. Under the generative model, we posit that the observed data and the latent codes are jointly distributed as p_θ(x, z) = p_θ(x|z)·p(z).
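As a reference point, sampling from the generative component can be sketched in PyTorch (a generic illustration assuming a decoder network f_theta and Gaussian output noise, as defined next; not the paper's code):

import torch

def sample_from_generative_model(f_theta, n, latent_dim, noise_std=0.1):
    z = torch.randn(n, latent_dim)        # latent codes from the prior p(z) = N(0, I)
    x_mean = f_theta(z)                   # transform p(z) toward the data space
    return x_mean + noise_std * torch.randn_like(x_mean)  # x = f_theta(z) + eps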
The likelihood p_θ(x|z) is defined by a neural network f with parameters θ and an output noise model ε ∼ p_ε such that x|z = f_θ(z) + ε. Direct maximization of the expected observed data log-likelihood E_{p(x)} log ∫_Z p_θ(x, z)dz over θ is intractable. Instead, we maximize the variational lower bound (ELBO), where q_{η(x)} ∈ Q is a variational distribution with parameters η(x). Since the bound is tight when q_{η(x)}(z) = p_θ(z|x), we aim to infer p_θ(z|x). To speed up finding the variational parameters η for some new input x, we train a neural inference model g with parameters φ so that η(x) ≈ g_φ(x). We demonstrate two general ways wherein global optima of the ELBO correspond to undesirable models. In the following, we fix our variational family to be mean-field Gaussian. Case 1: Learning the Inference Model Compromises the Quality of the Generative Model. Suppose that the variational family does not contain the posteriors of the data-generating model. Then, often, inference must trade off between learning a generative model that explains the data well and one that has posteriors that are easy for the inference network to approximate. Thus, the global minima of the VAE objective can specify models that both fail to capture the data distribution and whose aggregated posterior fails to match the prior. As demonstration, consider the following model (described fully in Appendix C.2): with σ² = 0.01, B = [[0.006, 0], [0, 0.006]] and A = [[0.75, 0.25], [1.5, −1.0]] as the data-generating model. Here, we fix B (which also fixes the covariance of the observation noise) and learn the parameter θ = A. In this example, the ground-truth posteriors are non-diagonal Gaussians. Here, the VAE objective can achieve a lower loss by compromising the MLE objective in order to better satisfy the posterior matching (PM) objective – i.e., the VAE objective will prefer a model that fails to capture the data distribution but has a diagonal Gaussian posterior over the ground-truth model. Figure 1C shows that the data distribution of the ground-truth model θ_GT, φ_GT (with L(θ_GT, φ_GT) = 0.532) differs from the distribution of the learned model θ*, φ* in Figure 1D (with L(θ*, φ*) = 0.196). Moreover, since the learned model fails to capture the data distribution, its aggregated posterior fails to match the prior (see Figures 1E vs. 1F). Even when we restrict the class of generative models to ones that fit the data well, the posterior matching objective will still select a model with a simple posterior. Unfortunately, the selected generative model may have undesirable properties like uninformative latent codes. As demonstration, consider the model from Equation 3 with the data-generating model specified by: σ² = 0.01, A = [[0.75, 0.25], [1.5, −1.0]], and B some diagonal matrix with values in [0, σ²]. In this case, we fix A and learn the parameter θ = B. Since the observation noise covariance I·σ² − B changes with B, the data marginal is fixed at p_θ(x) = N(0, AAᵀ + I·σ²) for every B. Thus, for every θ, the MLE objective is 0. However, although every choice of θ explains the data equally well, the posterior matching objective (and hence the VAE objective) is minimized by θ's whose posteriors have the least amount of correlation. Figure 1A shows that L(θ, φ) prefers a high value in the first diagonal entry of B and a low value in the second. Figure 1B shows the informativeness of the latent codes for the corresponding θ. We see that the data-to-latent-code mutual information I(X; Z) corresponding to the θ selected by L(θ, φ) is not optimal.
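The non-diagonal posterior claim is easy to verify numerically: for the linear-Gaussian model z ∼ N(0, I), x|z = Az + ε with ε ∼ N(0, σ²I), the exact posterior covariance is Σ = (I + AᵀA/σ²)⁻¹ (standard Gaussian conditioning, an identity assumed here rather than quoted from the paper).

import numpy as np

A = np.array([[0.75, 0.25], [1.5, -1.0]])
sigma2 = 0.01
Sigma = np.linalg.inv(np.eye(2) + A.T @ A / sigma2)  # true posterior covariance
print(Sigma)                                         # clearly non-diagonal
# best mean-field fit (cf. Appendix C.3): one over the diagonal of Sigma^{-1}
print(1.0 / np.diag(np.linalg.inv(Sigma)))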
That is, even if the true data-generating model produces highly informative latent codes, the VAE objective may select a model that produces uninformative latent codes. Discussion: The principles of our analysis extend to non-linear VAEs and complex variational families. In the VAE objective, the posterior matching objective acts like a regularizing term, biasing the learned generative models towards simple models with posteriors that are easy to approximate (with respect to the choice of variational family). Thus, joint training of the inference and generative models introduces unintended and undesirable optima, which would not appear when these models are learned separately. Case 2: Learning the Inference Model Selects an Undesirable Generative Model. Even if the variational family is rich, the inference for the posterior can nonetheless bias the learning of the generative model. It is well known that the generative model is non-identifiable under the MLE objective – there are many models that minimize the MLE objective. To focus on the effects of non-identifiability, let us assume that the variational family is expressive enough that it contains the posteriors of multiple models that could have generated the data. Then the posterior matching objective is 0, since we can find parameters φ such that q_φ(z|x) = p_θ(z|x) for any such θ. Consequently, L(θ, φ) has multiple global minima corresponding to the multiple generative models that maximize the data likelihood. Some of these models may not satisfy our desiderata; e.g., the latent codes have low mutual information with the data. As demonstration, consider the following model (fully described in Appendix C.1): In this case, the mean-field variational family includes the posterior p_θ(z|x) for all θ, i.e., the posterior matching objective can be fully minimized. Furthermore, every θ ∈ [0, σ] yields the same data marginal, p_θ(x) = N(0, σ²), and thus minimizes the MLE objective. However, not all choices of θ are equivalent. Given θ, the mutual information between the learned latent codes and the data is I_θ(X; Z) = Const − (1/2)·log(σ² − θ²). Thus, the set of global minima of L(θ, φ) contains many models that produce uninformative latent codes. Discussion: We've shown that posterior collapse can happen at global optima of the VAE objective and that, in these cases, collapse cannot always be mitigated by improving the inference model or by limiting the capacity of the generative model. In Section 1, we showed that common problems with traditional VAE training stem from the non-identifiability of the likelihood and the bias of the VAE objective towards models with simple posteriors, even if such models cannot capture the data distribution. We propose a novel inference method to specifically target these problems. To avoid the biasing effect of the PM objective on learning the generative model, we decouple the training of the generative and inference models – first we learn a generative model, then we learn an inference model while fixing the learned generative model (note that amortization allows for efficient posterior inference). To avoid undesirable global optima of the likelihood, we learn a generative model constrained by task-specific modeling desiderata. For instance, if informative latent codes are necessary for the task, the likelihood can be constrained so that the mutual information between the data and latent codes under θ is at least δ.
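A quick numeric check of this non-identifiability, assuming the generative process of Appendix C.1 is z ∼ N(0, 1), x = θ·z + ε with ε ∼ N(0, σ² − θ²): every θ gives the same marginal N(0, σ²), while I(X; Z) = (1/2)·log(σ²/(σ² − θ²)) sweeps from 0 toward ∞.

import numpy as np

sigma2 = 0.01
for theta in [0.0, 0.05, 0.09, 0.099]:
    noise_var = sigma2 - theta ** 2
    mi = 0.5 * np.log(sigma2 / noise_var)
    print(f"theta={theta:.3f}  marginal_var={theta ** 2 + noise_var:.4f}  I(X;Z)={mi:.3f}")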
While there are a number of works in the literature that incorporate task-specific constraints into VAE training (e.g., Zhao et al., 2017; 2018), adding these constraints to the VAE objective directly affects both the generative and the inference models and, consequently, may introduce additional undesirable global optima. In our approach, added constraints only directly affect the generative model – i.e., the quality of inference cannot be compromised by the added constraints. We call our training framework Likelihood Before Inference (LiBI), and propose one possible instantiation of this framework here. Step 1: Learning the Generative Model. We compute a tractable approximation to the MLE objective, constrained so that the likelihood satisfies task-specific modeling desiderata (such as high I(X; Z)) as needed: where each c_i is a constraint applied to the likelihood. We do this by computing joint maximum likelihood estimates for θ and z_n while additionally constraining the z_n's to have come from our assumed model (see Appendix D for a formal derivation of this approximation): where HZ(·) is the Henze-Zirkler test statistic for Gaussianity, µ(·), Σ(·) represent the empirical mean and covariance, and the z_n's are amortized using a neural network z_n = h(x_n; ϕ) parametrized by ϕ. These constraints encourage the generative model to capture p(x) given p(z), i.e., the aggregated posterior under this model will match the prior p(z). Step 2: Learning the Inference Model. Given the θ learned in Step 1, we learn φ to compute approximate posteriors q_φ(z|x). We note that φ, too, will satisfy our modeling assumptions, since with a fixed θ, the model non-identifiability we describe in Section 1 is no longer present. Step 3: Reinitialize Inference for the Generative Model. We repeat the process, initializing h(x_n; ϕ) = µ(x_n; φ), where µ(x_n; φ) is the mean of q_φ(z_n|x_n). This step provides an intelligent initialization, allowing Step 1 to learn a better-quality model. In theory, if the generative and inference models are learned perfectly in Steps 1 and 2, then Step 3 is obviated. In practice, we find that Step 3 improves the quality of the generative model and only a very small number of iterations is actually needed. Discussion: Using LiBI, we can now evaluate the quality of the generative model and the inference model independently. This is in contrast to traditional VAE inference, in which the ELBO entangles issues of modeling and issues of inference. On 4 synthetic datasets for which we know the data-generating model, we compare LiBI with existing inference methods: the VAE, β-VAE, β-VAE with annealing, and lagging inference networks. Across all datasets, LiBI learns generative models that better capture p(x) (as quantified by log-likelihood and the Smooth k-NN test statistic) and for which the aggregated posterior better matches the prior (see Appendix B). Conclusion: In this paper, we show that commonly noted issues with VAE training are attributable to the fact that the global optima of the VAE training objective often include undesirable solutions. Based on our analysis, we propose a novel training procedure, LiBI, that avoids these undesirable optima while retaining the tractability of traditional VAE inference. On synthetic datasets, we show that LiBI is able to learn generative models that capture the data distribution and inference models whose aggregated posterior matches the prior while traditional methods struggle to do so.
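Step 1 can be sketched as a penalized objective (a hedged reading: the Henze-Zirkler term is left as a placeholder since a differentiable implementation is not specified here, and the penalty weights lam_mom and lam_hz are assumptions):

import torch

def hz_statistic(z):
    return torch.tensor(0.0)  # placeholder for a Henze-Zirkler Gaussianity statistic

def libi_step1_loss(x, f_theta, h_phi, noise_var=0.01, lam_mom=1.0, lam_hz=1.0):
    z = h_phi(x)                                     # point estimates z_n = h(x_n; phi)
    nll = ((x - f_theta(z)) ** 2).sum(-1).mean() / (2 * noise_var)  # -log p(x|z) up to const
    mu = z.mean(0)                                   # empirical mean of the codes
    cov = (z - mu).T @ (z - mu) / (z.shape[0] - 1)   # empirical covariance of the codes
    moment_pen = mu.pow(2).sum() + (cov - torch.eye(z.shape[1])).pow(2).sum()
    return nll + lam_mom * moment_pen + lam_hz * hz_statistic(z)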
Two common issues noted in the VAE literature are posterior collapse and the mismatch between the aggregated posterior and the prior. Posterior collapse occurs when the posterior under the generative model and the approximate posterior learned by the inference model are both equal to the prior p(z). Surprisingly, under posterior collapse, the model is still able to generate samples from p_data(x). This is often attributed to the fact that the generative model is very powerful and is therefore able to maximize the log data marginal likelihood without the help of the auxiliary latent codes (van den Oord et al.). Existing literature focuses on mitigating model collapse in one of three ways: 1. modifying the optimization procedure to bias training away from collapse; 2. choosing variational families that make collapse less likely to occur; 3. modifying the generative and inference model architectures to encourage more information sharing between the x's and the z's. Although much of the existing literature describes the issue of posterior collapse and proposes methods to avoid it, less attention has been given to explaining why it occurs. Some conjecture that it occurs as a result of the joint training: since the likelihood changes over the course of training, it is incentivized to ignore the output of the inference network, whose output in the early stages of training is not yet meaningful. Mismatch between the aggregated posterior and the prior refers to the case when q_φ(z) ≠ p(z), where q_φ(z) = E_{p(x)}[q_φ(z|x)] is the aggregated posterior. One might expect the two distributions to match because, for any given likelihood θ, one should be able to recover the prior from the true posterior p(z|x) as follows: p(z) = ∫ p_θ(z|x)·p_θ(x)dx. An x produced by the generative model from a z that is likely under the prior but unlikely under the aggregate posterior may have "poor sample quality", since the generative model is unlikely to have encountered such a z during training. Existing literature mitigates this issue by either increasing the flexibility of the prior to better fit the aggregate posterior or developing a method to sample more robustly from the latent space. Examples of the latter include training a second VAE to be able to generate z from an auxiliary variable u and then sampling from p_θ(x, z) using a Gibbs sampler. In this work, we provide a unifying analysis of both posterior collapse and mismatch, showing that both can occur as global optima of the VAE objective. Through our analysis, we also show that at these optima, neither issue can be reliably resolved by existing methods. In Figures 2 and 3, we compare the posteriors learned by traditional VAE inference and by LiBI, respectively, on the synthetic dataset LinearJTEx. Here we demonstrate that traditional inference learns a generative model θ under which it is easy to approximate the corresponding posteriors. However, this comes at the cost of θ being unable to capture the data distribution. Assume the following generative process for the data: For this generative process, p_θ(x) = N(0, σ²) for any value of θ such that 0 ≤ θ ≤ σ. Additionally, θ directly controls I(X; Z) – when θ = 0, we have that I(X; Z) = 0; when θ = σ, we have that I(X; Z) = ∞. To see this, we will compute I_θ(X; Z) directly (by computing p_θ(x, z) and p(x)p(z)): As such, we can compute the mutual information between x and z as follows: For this model, the posterior p_θ(z|x) is: Since this example is univariate, the mean-field Gaussian variational family will include the true posterior for any θ.
Assume the following generative process for the data: where B is a diagonal matrix with diagonal elements between 0 and σ². For this generative process, p_B(x) = N(0, AAᵀ + I·σ²) for all valid values of B. For this model, the complete data likelihood and marginals are: Therefore, I_B(X; Z) can be computed as follows: Lastly, the posterior for this model, p_B(z|x), is a Gaussian with mean and covariance: For our choice of A, the mean-field Gaussian will not include the true posterior for this model. The best-fitting mean-field approximation to the true posterior can be computed as in Appendix C.3. Let B be a diagonal matrix and let Σ be a full covariance matrix: where each element in the above sum is independent and is minimized when B_ii = 1/Σ⁻¹_ii, where Σ⁻¹_ii is the ith diagonal entry of Σ⁻¹. The LiBI Framework: The LiBI framework is composed of two steps: (1) learning a high-quality likelihood capable of generating the observed data distribution, and (2) fixing the likelihood learned in Step 1 and performing inference to learn the latent codes given the data. We emphasize that our framework is general, so one can use various existing methods for either step. For example, one can use a GAN for Step 1, and MCMC sampling for Step 2. In this section, we derive a tractable approximation to Step 1 that can be easily enhanced to include constraints for task-specific desiderata, and that is amenable to gradient-based optimization methods, where in Equation 31 we approximate E_{p(z)}[p_θ(x_n|z)] with a single sample, z_n, that makes its corresponding x_n most likely (this is analogous to the Empirical Bayes (EB) MAP Type II estimates often used to tune prior hyper-parameters). This step, however, has a problem: it is biased towards learning z_n's close to 0. We will now demonstrate that this issue exists and is a result of non-identifiability in the MLE estimate with respect to θ, {z_n}_{n=1}^N. We then provide a solution to this problem. Characterization of Non-Identifiability in the Tractable Approximation: Consider the following: let Z = {z_n}_{n=1}^N be the true z's and θ the parameters used to generate the observed data X = {x_n}_{n=1}^N in the following generative process: Now consider an alternative Ẑ = {ẑ_n = z_n/c}_{n=1}^N and θ̂ chosen such that log p_θ̂(x_n|ẑ_n) = log p_θ(x_n|z_n), yielding the following alternative generative process: Under these generative processes, both the data marginals and the likelihoods are equal: However, since in our model we assumed the prior is fixed, p(z) = N(0, I), the alternate parameters Ẑ, θ̂ are preferred by the joint log-likelihood when c > 1, since log p_θ̂(x_n|ẑ_n) = log p_θ(x_n|z_n) by construction and log N(ẑ_n|0, I) > log N(z_n|0, I), as the ẑ_n's are closer to 0 when c > 1. This will cause our approximation from Equation 31 to prefer the model θ̂, which generates a different data distribution than the true data distribution: Identifying the Tractable Approximation using the Henze-Zirkler Test Statistic: Returning to our approximation of the MLE objective in Equation 31, we can avoid this issue by constraining the z_n's to have come from the prior: We do this by constraining the z_n's to be Gaussian using the Henze-Zirkler test for Gaussianity and by constraining the empirical mean and covariance of the z_n's to be those of the standard normal: We hypothesize that if the likelihood function f_θ is "smooth" and well-behaved (that is, it maps nearby z's to nearby x's), our approximation of the likelihood will come close to the true one.
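The Appendix C.3 result can be verified numerically (a sketch; we minimize KL(N(0, B) || N(0, Σ)) over diagonal B and compare with the closed form B_ii = 1/Σ⁻¹_ii):

import numpy as np
from scipy.optimize import minimize

Sigma = np.array([[0.02, 0.011], [0.011, 0.03]])
Sinv = np.linalg.inv(Sigma)

def kl_diag_to_full(log_b):
    # KL(N(0,B) || N(0,Sigma)) = 0.5*(tr(Sinv B) - d + log det Sigma - log det B)
    B = np.diag(np.exp(log_b))
    return 0.5 * (np.trace(Sinv @ B) - 2 + np.log(np.linalg.det(Sigma)) - log_b.sum())

res = minimize(kl_diag_to_full, np.zeros(2))
print(np.exp(res.x))         # numerical optimum
print(1.0 / np.diag(Sinv))   # closed form: B_ii = 1 / (Sigma^{-1})_ii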
Using this framework, we first recover a high-quality likelihood (a likelihood that, unlike in the traditional VAE objective, is not compromised to match the approximate posterior). Our framework therefore naturally encourages this likelihood to satisfy modeling assumptions; that is, if we find a θ for which the x's are reconstructed accurately given Gaussian z's, the aggregated posterior under θ, p_θ(z), will match the prior p(z). Given this likelihood, we can then learn a posterior that accurately approximates p_θ(z|x). We note that φ, too, will satisfy our modeling assumptions, since with a fixed θ, the model non-identifiability we describe is no longer present. The LiBI Inference Method: We incorporate the constraints in Equation 43 as smooth penalties into the Lagrangian in Equation 44. We additionally define h(x_n; ϕ) to be a neural network parameterized by ϕ that, given x_n, returns the specific z_n that generated it. ϕ allows us to amortize Equation 44. We repeat the following steps R times: 1. Step 1: θ_t, ϕ_t = argmin_{θ,ϕ} −(1/N)·Σ_n log p_θ(x_n|h(x_n; ϕ)) + exp(HZ({h(x_n; ϕ)}_{n=1}^N)) + exp(‖Σ({h(x_n; ϕ)}_{n=1}^N) − I‖). 2. Step 2: φ_t = argmin_φ (1/N)·Σ_n −ELBO(θ_t, φ).
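Put together, the alternating procedure can be sketched as follows (hedged: optimizers, iteration counts and the Step-3 re-fitting of h to the posterior means are illustrative assumptions; step1_loss and neg_elbo stand for the two objectives above):

import torch

def libi(x, f_theta, h_phi, q_net, step1_loss, neg_elbo, R=3, inner=1000, lr=1e-3):
    for _ in range(R):
        opt1 = torch.optim.Adam(list(f_theta.parameters()) + list(h_phi.parameters()), lr=lr)
        for _ in range(inner):                     # Step 1: constrained joint MLE
            opt1.zero_grad(); step1_loss(x, f_theta, h_phi).backward(); opt1.step()
        opt2 = torch.optim.Adam(q_net.parameters(), lr=lr)
        for _ in range(inner):                     # Step 2: inference with theta fixed
            opt2.zero_grad(); neg_elbo(x, f_theta, q_net).backward(); opt2.step()
        mu = q_net(x)[0].detach()                  # Step 3: re-initialize h(x) to mu(x; phi)
        opt3 = torch.optim.Adam(h_phi.parameters(), lr=lr)
        for _ in range(200):
            opt3.zero_grad(); ((h_phi(x) - mu) ** 2).mean().backward(); opt3.step()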
We characterize problematic global optima of the VAE objective and present a novel inference method to avoid such optima.
1,456
scitldr
Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces distributional inclusion vector embedding (DIVE), a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts. In experimental evaluations more comprehensive than any in the previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results. In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach to word sense disambiguation as another promising application of DIVE. Numerous applications benefit from compactly representing context distributions, which assign meaning to objects under the rubric of distributional semantics. In natural language processing, distributional semantics has long been used to assign meanings to words (that is, to lexemes in the dictionary, not individual instances of word tokens). The meaning of a word in the distributional sense is often taken to be the set of textual contexts (nearby tokens) in which that word appears, represented as a large sparse bag of words (SBOW). Without any supervision, word2vec BID22, among other approaches based on matrix factorization BID20, successfully compresses the SBOW into a much lower-dimensional embedding space, increasing the scalability and applicability of the embeddings while preserving (or even improving) the correlation of geometric embedding similarities with human word similarity judgments. While embedding models have achieved impressive results, context distributions capture more semantic features than just word similarity. The distributional inclusion hypothesis (DIH) BID49 BID11 BID6 posits that the context set of a word tends to be a subset of the contexts of its hypernyms. For a concrete example, most adjectives that can be applied to poodle can also be applied to dog, because dog is a hypernym of poodle. For instance, both can be obedient. However, the converse is not necessarily true – a dog can be straight-haired but a poodle cannot. Therefore, dog tends to have a broader context set than poodle. Many asymmetric scoring functions comparing SBOW based on the DIH have been developed for automatic hypernymy detection BID49 BID11 BID38. Hypernymy detection plays a key role in many challenging NLP tasks, such as textual entailment BID34, coreference BID32, relation extraction BID8 and question answering BID13. Leveraging the variety of contexts and inclusion properties in context distributions can greatly increase the ability to discover taxonomic structure among words BID38. The inability to preserve these features limits the semantic representation power and downstream applicability of some popular existing unsupervised learning approaches such as word2vec.
Several recently proposed methods aim to encode hypernym relations between words in dense embeddings, such as Gaussian embedding BID45 BID0, order embedding BID44, the H-feature detector BID33, HyperScore, dual tensor BID12, Poincaré embedding BID28, and LEAR BID46. However, these methods focus on supervised or semi-supervised settings BID44 BID33 BID27 BID12 BID46, do not learn from raw text BID28, or lack comprehensive experiments on the hypernym detection task BID45 BID0. Recent studies BID21 BID38 have underscored the difficulty of generalizing supervised hypernymy annotations to unseen pairs – classifiers often effectively memorize prototypical hypernyms ('general' words) and ignore relations between words. These findings motivate us to develop more accurate and scalable unsupervised embeddings to detect hypernymy, and to propose several scoring functions to analyze the embeddings from different perspectives. • A novel unsupervised low-dimensional embedding method to model inclusion relations among word contexts via performing non-negative matrix factorization (NMF) on a weighted PMI matrix, which can be efficiently optimized using modified skip-grams. • Several new asymmetric comparison functions to measure inclusion and generality properties and to evaluate different aspects of unsupervised embeddings. • Extensive experiments on 11 datasets demonstrate that the learned embeddings and comparison functions achieve state-of-the-art performances on unsupervised hypernym detection while requiring much less memory and compute than approaches based on the full SBOW. • A qualitative experiment illustrates that DIVE can be used for word sense disambiguation, especially when efficiently modeling word senses at multiple granularities is desirable. The distributional inclusion hypothesis (DIH) suggests that the context set of a hypernym tends to contain the context set of its hyponyms. That is, when representing a word as the counts of contextual co-occurrences, the count in every dimension of hypernym y tends to be larger than or equal to the corresponding count of its hyponym x: DISPLAYFORM0 where x ⪯ y means y is a hypernym of x, V is the set of vocabulary, and #(x, c) indicates the number of times that word x and its context word c co-occur in a small window of size |W| in the corpus D. Our goal is to produce lower-dimensional embeddings that preserve the inclusion property, i.e., that the embedding of hypernym y is larger than or equal to the embedding of its hyponym x in every dimension. Formally, the desirable property can be written as DISPLAYFORM1 where d_0 is the number of dimensions in the embedding space. We add additional non-negativity constraints, i.e., x[i] ≥ 0, y[i] ≥ 0, ∀i, in order to increase the interpretability of the embeddings (the reason will be explained later in this section). This is a challenging task. In reality, there is a lot of noise and there are systematic biases which cause violations of the DIH in Equation FORMULA0 (i.e., #(x, c) > #(y, c) for some neighboring word c), but the general trend can be discovered by processing several thousand neighboring words in SBOW together. After the compression, the same trend has to be estimated in a much smaller embedding space which discards most of the information in SBOW, so it is not surprising that most unsupervised hypernymy detection studies use SBOW BID38 and that existing unsupervised embeddings like Gaussian embedding have degraded accuracy BID47. Popular methods of unsupervised word embedding are usually based on matrix factorization BID20.
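The DIH premise can be checked directly on SBOW counts; a small self-contained sketch (the toy corpus handling and the window size are illustrative assumptions):

from collections import Counter, defaultdict

def cooccurrence(tokens, window=5):
    counts = defaultdict(Counter)  # counts[w][c] = #(w, c)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[w][tokens[j]] += 1
    return counts

def dih_fraction(counts, hypo, hyper):
    # fraction of context words c with #(hypo, c) <= #(hyper, c)
    ctx = set(counts[hypo]) | set(counts[hyper])
    ok = sum(counts[hypo][c] <= counts[hyper][c] for c in ctx)
    return ok / max(1, len(ctx))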
The approaches first compute a co-occurrence statistic between the wth word and the cth context word as the (w, c)th element of the matrix, M[w, c]. Next, the matrix M is factorized such that M[w, c] ≈ wᵀc, where w is the low-dimensional embedding of the wth word and c is the cth context embedding. The statistic in M[w, c] is usually related to pointwise mutual information: PMI(w, c) = log(P(w, c)/(P(w)·P(c))), where P(w, c) = #(w, c)/|D|, |D| = Σ_{w∈V} Σ_{c∈V} #(w, c) is the number of co-occurring word pairs in the corpus, P(w) = #(w)/|D|, #(w) = Σ_{c∈V} #(w, c) is the frequency of the word w times the window size |W|, and similarly for P(c). For example, M[w, c] could be set as positive PMI (PPMI), max(PMI(w, c), 0), or shifted PMI, PMI(w, c) − log(k), like skip-grams with negative sampling (SGNS) BID20. Intuitively, since M[w, c] ≈ wᵀc, larger embedding values of w in every dimension seem to imply a larger wᵀc, larger M[w, c], larger PMI(w, c), and thus a larger co-occurrence count #(w, c). However, the derivation has two flaws: (1) c could be negative, and (2) a lower #(w, c) could still lead to a larger PMI(w, c) as long as #(w) is small enough. To preserve the DIH, we propose a novel word embedding method, distributional inclusion vector embedding (DIVE), which fixes the two flaws by performing non-negative matrix factorization (NMF) on the matrix M, where DISPLAYFORM0 where k is a constant which shifts the PMI value like SGNS, Z = |D|/|V| is the average word frequency, and |V| is the vocabulary size. The design encourages the inclusion property in DIVE (i.e., Equation FORMULA1) to be satisfied, because the property implies that Equation FORMULA0 (the DIH) holds if the matrix is reconstructed perfectly. The derivation is simple: since the context vector c is non-negative, if the embedding of hypernym y is greater than or equal to the embedding of its hyponym x in every dimension, DISPLAYFORM1, and only #(w, c) changes with w. Due to its appealing scalability properties during training time BID20, we optimize our embedding based on skip-grams with negative sampling (SGNS) BID22. The objective function of SGNS is DISPLAYFORM0 where w, c, and c_N are (unconstrained) embedding vectors, and k is a constant hyper-parameter indicating the ratio between positive and negative samples. BID18 prove SGNS is equivalent to factorizing a shifted PMI matrix M. After making the shift depend on #(w) and applying non-negativity constraints to the embeddings, DIVE can be optimized using a similar objective function: DISPLAYFORM1 DISPLAYFORM2 where w ≥ 0, c ≥ 0, c_N ≥ 0, σ is the logistic sigmoid function, and k is a constant hyper-parameter. P_D is the distribution of negative samples, which we set to be the corpus word frequency distribution in this paper. Equation FORMULA6 is optimized by ADAM BID15, a variant of stochastic gradient descent (SGD). The non-negativity constraint is implemented by projection (i.e., clipping any embedding which crosses the zero boundary after an update). The optimization process provides an alternative angle to explain how DIVE preserves the DIH. The gradient for the word embedding w is DISPLAYFORM3. Assume hyponym x and hypernym y satisfy the DIH in Equation FORMULA0 and the embeddings x and y are the same at some point during the gradient ascent. In this case, the gradients coming from negative sampling (the second term) decrease the embedding values of both x and y by the same amount because k is a constant hyper-parameter. However, the embedding of hypernym y would get higher or equal positive gradients from the first term than x in every dimension because #(x, c) ≤ #(y, c).
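Reading the factorization explicitly, DIVE's matrix can be built and factorized offline; the sketch below assumes the shift is log(k·Z/#(w)) applied to PMI and clipped at zero (our reading of the description above), and uses scikit-learn's NMF in place of the modified skip-gram training:

import numpy as np
from sklearn.decomposition import NMF

def dive_matrix(C, k=1.0):
    # C: co-occurrence count matrix, C[w, c] = #(w, c)
    D = C.sum()                                # |D|
    w_cnt = C.sum(axis=1, keepdims=True)       # #(w)
    c_cnt = C.sum(axis=0, keepdims=True)       # #(c)
    Z = D / C.shape[0]                         # average word frequency |D| / |V|
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * D / (w_cnt * c_cnt))
    M = pmi - np.log(k * Z / w_cnt)            # shift depends on the target word w
    return np.nan_to_num(np.maximum(M, 0.0), neginf=0.0)

counts = np.random.poisson(1.0, (500, 500)).astype(float)  # toy stand-in for #(w, c)
W = NMF(n_components=100, init="nndsvda", max_iter=300).fit_transform(dive_matrix(counts))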
This means that optimizing Equation FORMULA6 tends to imply Equation FORMULA1. Combining this with the analysis from the matrix factorization viewpoint, the DIH in Equation FORMULA0 is approximately equivalent to the inclusion property in DIVE (i.e., Equation FORMULA1). For a frequent target word, there must be many neighboring words that incidentally appear near the target word without being semantically meaningful, especially when a large context window size is used. The unrelated context words cause noise in both the word vector and the context vector of DIVE. We address this issue by filtering out context words c for each target word w when the PMI of the co-occurring words is too small, i.e., when log(P(w, c)/(P(w)·P(c))) < log(k_f). That is, we set #(w, c) = 0 in the objective function. This preprocessing step is similar to computing PPMI in SBOW BID5, where low-PMI co-occurrences are removed from the count-based representation. After applying the non-negativity constraint, we observe that each dimension roughly corresponds to a topic, as previous findings suggest BID29 BID24. This gives rise to a natural and intuitive interpretation of our word embeddings: the word embeddings can be seen as unnormalized probability distributions over topics. By removing the normalization of the target word frequency in the shifted PMI matrix, specific words have values in few dimensions (topics), while general words appear in more topics and correspondingly have high values in more dimensions, so the concreteness level of two words can be easily compared using the magnitudes of their embeddings. In other words, general words have more diverse context distributions, so we need more dimensions to store the information in order to compress SBOW well BID25. In FIG0, we present three mentions of the word core and its surrounding contexts. These various context words increase the embedding values in different dimensions. Each dimension of the learned embeddings roughly corresponds to a topic, and the more general or representative words for each topic tend to have higher values in the corresponding dimension (e.g., words in the second column of the table). The embedding is able to capture the common contexts where the word core appears. For example, the context of the first mention is related to the atom topic (dimension id 1) and the electron topic (id 9), while the second and third mentions occur in the computer architecture topic (id 2) and the education topic (id 11), respectively. We describe four experiments in Sections 4-7. The first 3 experiments compare DIVE with other unsupervised embeddings and SBOW using different hypernymy scoring functions. In these experiments, unsupervised approaches refer to methods that only train on a plain-text corpus without using any hypernymy or lexicon annotation. The last experiment presents qualitative results on word sense disambiguation. The SBOW and embeddings are tested on 11 datasets. The first 4 datasets come from the recent review of BID38: BLESS BID1, EVALution BID36, Lenci/Benotto, and Weeds BID50. The next 4 datasets are downloaded from the code repository of the H-feature detector BID33: Medical (i.e., Levy 2014), LEDS (also referred to as ENTAILMENT or Baroni 2012) BID3, TM14 (i.e., Turney 2014) BID43, and Kotlerman 2010 BID16. In addition, the performance on the test set of HyperNet BID40 (using the random train/test split), the test set of WordNet BID44, and all pairs in HyperLex BID47 are also evaluated.
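The PMI filtering step described above is a short operation over the count matrix (a sketch consistent with the dive_matrix layout in the previous example):

import numpy as np

def pmi_filter(C, k_f=1.0):
    # zero out #(w, c) whenever PMI(w, c) < log(k_f)
    D = C.sum()
    w_cnt = C.sum(axis=1, keepdims=True)
    c_cnt = C.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * D / (w_cnt * c_cnt))
    filtered = C.copy()
    filtered[pmi < np.log(k_f)] = 0.0
    return filtered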
The F1 and accuracy measurements are sometimes very similar even though the quality of the predictions varies, so average precision (AP@all) is adopted as the main evaluation metric. The HyperLex dataset has a continuous score on each candidate word pair, so we adopt the Spearman rank correlation coefficient ρ, as suggested by the review study of BID47. Any OOV (out-of-vocabulary) word encountered in the testing data is pushed to the bottom of the prediction list (effectively assuming the word pair does not have a hypernym relation). We use the WaCkypedia corpus BID2, a 2009 Wikipedia dump, to compute SBOW and train the embedding. For the datasets without Part-of-Speech (POS) information (i.e., Medical, LEDS, TM14, Kotlerman 2010, and HyperNet), the training data of SBOW and embeddings are raw text. For the other datasets, we concatenate each token with its Part-of-Speech (POS) tag before training the models, except when we need to match the training setup of another paper. All words are lower-cased. Stop words and rare words (occurring fewer than 10 times) are removed during our preprocessing step. The number of embedding dimensions in DIVE, d_0, is set to 100. Other hyper-parameters used in the experiments are listed in the supplementary materials. The hyper-parameters of DIVE were decided based on the performance on the HyperNet training set. To train embeddings more efficiently, we chunk the corpus into subsets/lines of 100 tokens instead of using sentence segmentation. Preliminary experiments show that this implementation simplification does not hurt the performance. In the following experiments, we train both SBOW and DIVE on only the first 512,000 lines (51.2 million tokens) because we find this training setting provides better performance (for both SBOW and DIVE) than training on the whole WaCkypedia or training on 512,000 randomly sampled lines. We suspect this is due to the corpus being sorted by Wikipedia page titles, which makes some categorical words such as animal and mammal occur 3-4 times more frequently in the first 51.2 million tokens than in the rest. The performance of training SBOW PPMI on the whole WaCkypedia is also provided for reference in TAB6. If a pair of words has the hypernym relation, the words tend to be similar and the hypernym should be more general than the hyponym. As in HyperScore, we score the hypernym candidates by multiplying two factors corresponding to these properties. The C·∆S (i.e., cosine similarity multiplied by the difference of summations) scoring function is defined as DISPLAYFORM0 where w_p is the embedding of the hypernym and w_q is the embedding of the hyponym. As far as we know, Gaussian embedding (GE) is the only unsupervised embedding method which can capture the asymmetric relations between a hypernym and its hyponyms. Using the same training and testing setup, we use the code implemented by BID0 to train Gaussian embeddings on the first 51.2 million tokens and test the embeddings on the 11 datasets. Its hyper-parameters are determined in the same way as for DIVE (i.e., maximizing the AP on the HyperNet training set). We compare DIVE with GE in TAB1; the performances of random scores and of only measuring word similarity using skip-grams are also presented for reference. As we can see, DIVE is usually significantly better than the other baselines. In Experiment 1, we show that there exists a scoring function (C·∆S) which detects hypernymy accurately using the embedding space of DIVE.
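The C·∆S score is tiny to implement (a sketch; w_p and w_q are any non-negative embedding or SBOW vectors):

import numpy as np

def c_delta_s(w_p, w_q):
    # cosine similarity (word similarity) times difference of summations (generality)
    cos = w_p @ w_q / (np.linalg.norm(w_p) * np.linalg.norm(w_q) + 1e-12)
    return cos * (w_p.sum() - w_q.sum())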
Nevertheless, different scoring functions measure different signals in SBOW or the embeddings. Since there are so many scoring functions and datasets available in the domain, we first introduce and test the performances of various scoring functions so as to select representative ones for a more comprehensive evaluation of DIVE on the hypernymy detection tasks. We denote the embedding/context vector of the hypernym candidate and the hyponym candidate as w_p and w_q, respectively. The SBOW model which represents a word by the frequencies of its neighboring words is denoted as SBOW Freq, while the SBOW which uses the PPMI of its neighboring words as features BID5 is denoted as SBOW PPMI. A hypernym tends to be similar to its hyponym, so we measure the cosine similarity between word vectors of the SBOW features BID21 or DIVE. We refer to this symmetric scoring function as Cosine, or C for short, in the following tables. We also train the original skip-grams with 100 dimensions and measure the cosine similarity between the resulting word2vec embeddings. This scoring function is referred to as Word2vec, or W. The distributional informativeness hypothesis BID35 observes that in many corpora, semantically 'general' words tend to appear more frequently and in more varied contexts. Thus, BID35 advocate using the entropy of context distributions to capture the diversity of contexts. We adopt the two variations of the approach proposed by BID38: the SLQS Row and SLQS Sub functions. We also refer to SLQS Row as ∆E because it measures the entropy difference of context distributions. For SLQS Sub, the number of top context words is fixed at 100. Although effective at measuring diversity, the entropy totally ignores the frequency signal from the corpus. To leverage this information, we measure the generality of a word by its L1 norm (|w_p|_1) and L2 norm (||w_p||_2). Recall that Equation FORMULA1 indicates that the embedding of a hypernym y should have a larger value in every dimension than the embedding of the hyponym x. When the inclusion property holds, |y|_1 = Σ_i y[i] ≥ Σ_i x[i] = |x|_1, and similarly ||y||_2 ≥ ||x||_2. Thus, we propose two scoring functions: the difference of vector summations (|w_p|_1 − |w_q|_1) and the difference of vector 2-norms (||w_p||_2 − ||w_q||_2). Notice that when applying the difference of vector summations (denoted as ∆S) to SBOW Freq, it is equivalent to computing the word frequency difference of the hypernym candidate pair. The combination of the 2 similarity functions (Cosine and Word2vec) and the 3 generality functions (difference of entropy, summation, and 2-norm of vectors) leads to six different scoring functions as shown in TAB2, and C·∆S is the same scoring function we used in Experiment 1. It should be noted that if we use skip-grams with negative sampling (word2vec) as the similarity measurement (i.e., W·∆{E,S,Q}), the scores are determined by two embedding/feature spaces together (word2vec and DIVE/SBOW). Several scoring functions have been proposed to measure the inclusion properties of SBOW based on the DIH. Weeds Precision BID49 and CDE BID7 both measure the magnitude of the intersection between feature vectors (|w_p ∩ w_q|). For example, w_p ∩ w_q is defined by the elementwise minimum in CDE. Then, both scoring functions divide the intersection by the magnitude of the potential hyponym vector (|w_q|). invCL BID17 (a variant of CDE) is also tested. We choose these 3 functions because they have been shown to detect hypernymy well in a recent study BID38.
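Several of the scoring functions above are equally compact (a sketch; q is the hyponym candidate vector w_q and p the hypernym candidate w_p):

import numpy as np

def delta_s(p, q):   # difference of vector summations
    return p.sum() - q.sum()

def delta_q(p, q):   # difference of vector 2-norms
    return np.linalg.norm(p) - np.linalg.norm(q)

def weeds_prec(q, p):
    # mass of the hyponym's features that also occur for the hypernym
    return q[p > 0].sum() / (q.sum() + 1e-12)

def cde(q, p):
    # intersection = elementwise minimum, normalized by the hyponym magnitude
    return np.minimum(q, p).sum() / (q.sum() + 1e-12)

def inv_cl(q, p):
    # invCL: inclusion of q in p combined with non-inclusion of p in q
    return np.sqrt(cde(q, p) * (1.0 - cde(p, q)))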
However, it is hard to confirm that their good performances come from the inclusion property between context distributions – it is also possible that the context vectors of more general words have a higher chance of overlapping with all other words due to their high frequency. For instance, considering a one-dimensional feature which stores only the frequency of words, this naive embedding could still have reasonable performance on the CDE function but clearly would not model inclusion between context distributions. To remove the frequency signal, we also test an asymmetric distance, AL_1: DISPLAYFORM0 where w_0 is a constant which emphasizes the inclusion penalty. If w_0 = 1 and a = 1, AL_1 is equivalent to the L1 distance. A lower AL_1 distance implies a higher chance of observing the hypernym relation. We tried w_0 = 5 and w_0 = 20. w_0 = 20 produces a worse micro-average AP@all on SBOW Freq, SBOW PPMI and DIVE, so we fix w_0 to 5 in all experiments. An efficient way to solve the optimization in AL_1 is presented in the supplementary materials. We show the micro-average AP@all on 10 datasets using different hypernymy scoring functions in TAB2. We can see that combination functions such as C·∆S and W·∆S perform the best overall. Among the unnormalized inclusion-based scoring functions, CDE works the best. AL_1 performs well compared with other functions which remove the frequency signal, such as Word2vec, Cosine, and SLQS Row. The summation is the most robust generality measurement. In TAB4, DIVE with two of the best scoring functions (C·∆S and W·∆S) is compared with the previous unsupervised state-of-the-art approaches based on SBOW on different datasets. There are several reasons which might cause the large performance gaps in some datasets. In addition to the effectiveness of DIVE, some improvements come from our proposed scoring functions. The fact that every paper uses a different training corpus also affects the performances. Furthermore, BID38 select the scoring functions and feature space for the first 4 datasets based on AP@100, which we believe is too sensitive to the hyper-parameter settings of different methods. To isolate the impact of each factor, we perform a more comprehensive comparison next. In this experiment, we examine whether DIVE successfully preserves the signals for hypernymy detection tasks, which are measured by the same scoring functions designed for SBOW. Summation difference (∆S) and CDE perform the best among the generality and inclusion functions in TAB2. invCL BID17, APSyn BID37, and CDE BID7 are selected because they have the best AP@100 on the first 4 datasets BID38; cosine similarity BID21, balAPinc BID16 (in 3 datasets BID43), SLQS BID35 (in the HyperNet dataset BID40), and frequency ratio (FR) BID47 were used by the previous state-of-the-art on the respective datasets. AL_1 can be used to examine the inclusion properties after removing the frequency signal. Therefore, we will present the results using these 3 scoring functions, along with W·∆S and C·∆S. In addition to classic representations such as SBOW Freq and SBOW PPMI, we compare distributional inclusion vector embedding (DIVE) with 4 additional baselines in TAB6. • SBOW PPMI with additional frequency weighting (PPMI w/ FW). Specifically, w[c] = max(PMI(w, c) + log(#(w)/Z), 0). This forms the matrix reconstructed by DIVE when k = 1. • DIVE without the PMI filter (DIVE w/o PMI). • NMF on shifted PMI: non-negative matrix factorization (NMF) on the shifted PMI without the frequency weighting of DIVE (DIVE w/o FW). This is the same as applying the non-negativity constraint to the skip-gram model.
• K-means (Freq NMF): this method first uses mini-batch k-means BID39 to cluster words in the skip-gram embedding space into 100 topics, and hashes each frequency count in SBOW into the corresponding topic (see the sketch after this discussion). If running k-means on skip-grams is viewed as an approximation of clustering the SBOW context vectors, the method can be viewed as a kind of NMF BID9. Let the N × N context matrix be denoted as M_c, where the (i, j)-th element stores the count of word j appearing beside word i. K-means hashing creates an N × 100 matrix G with orthonormal columns (G^T G = I), where the (i, k)-th element is 0 if word i does not belong to cluster k. The orthonormal G is also an approximate solution of a type of NMF (M_c ≈ F G^T) BID9. Hashing context vectors into topic vectors can then be written as M_c G ≈ F G^T G = F.
In the experiment, we also tried to apply a constant log(k) shift to SBOW PPMI (i.e., max(PMI − log(k), 0)). We found that the performance degrades as k increases. Similarly, applying the PMI filter to SBOW PPMI (setting a context feature to 0 if its value is lower than log(k_f)) usually makes the performances worse, especially when k_f is large. Applying the PMI filter to SBOW Freq only makes its performance closer to (but still much worse than) SBOW PPMI, so we omit this baseline as well. In TAB6, we first confirm the finding of the previous review study of BID38: there is no single hypernymy scoring function which always outperforms the others. One of the main reasons is that different datasets collect negative samples differently. This is also why we evaluate our method on many datasets, to make sure our conclusions hold in general. For example, if negative samples come from random word pairs (e.g., the WordNet dataset), a symmetric similarity measure is already a pretty good scoring function. On the other hand, negative samples come from related or similar words in HyperNet, EVALution, Lenci/Benotto, and Weeds, so only computing the generality difference leads to the best (or close to the best) performance. The negative samples in many datasets are composed of both random samples and similar words (such as BLESS), so the combination of similarity and generality difference yields the most stable results. DIVE performs similarly to or better than SBOW on all the scoring functions consistently across all datasets in TAB6, while using many fewer dimensions (see TAB8). Its results on combination scoring functions outperform SBOW Freq; meanwhile, its results on AL_1 outperform SBOW PPMI. The fact that combination scoring functions (i.e., W·∆S or C·∆S) usually outperform generality functions suggests that only memorizing general words is not sufficient. The best average performance on 4 and 10 datasets is in both cases produced by W·∆S on DIVE. SBOW PPMI improves the combination functions over SBOW Freq but sacrifices AP on the inclusion functions. It generally hurts performance to add frequency weighting to PPMI (PPMI w/ FW) or to compute SBOW PPMI on the whole WaCkypedia (all wiki) instead of the first 51.2 million tokens. A similar trend can also be seen in TAB7. Note that AL_1 completely fails on the HyperLex dataset using SBOW PPMI, which suggests that PPMI might not necessarily preserve the distributional inclusion property, even though it can have good performance on combination functions. Removing the PMI filter from DIVE slightly drops the overall precision, while removing the frequency weights on shifted PMI (w/o FW) leads to poor performances. K-means (Freq NMF) produces similar AP compared with SBOW Freq, but has worse AL_1 scores.
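The following is a minimal sketch of the K-means (Freq NMF) baseline just described, assuming a skip-gram matrix `sg` (V × d) and a (possibly sparse) SBOW count matrix `M_c` (V × V) are already available; the variable names are ours.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def freq_nmf_topics(sg, M_c, n_topics=100):
    # Cluster words in skip-gram space into topics.
    labels = MiniBatchKMeans(n_clusters=n_topics).fit_predict(sg)
    # Build the cluster-indicator matrix G (V x n_topics) described above;
    # row i has a single 1 in the column of word i's cluster.
    V = sg.shape[0]
    G = np.zeros((V, n_topics))
    G[np.arange(V), labels] = 1.0
    # Hash each context count into its topic: row i of M_c @ G accumulates
    # word i's context counts per topic (the approximate F in M_c ~ F G^T).
    return M_c @ G
```

Note that this plain 0/1 indicator matrix must still have its columns rescaled by the square roots of the cluster sizes for G^T G = I to hold exactly; the hashing step itself is unaffected by that rescaling up to a per-topic constant.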
Its best AP scores on different datasets are also significantly worse than the best AP of DIVE. This means that only making word2vec (skip-grams with negative sampling) non-negative or naively accumulating topic distributions in contexts cannot lead to satisfactory embeddings. In addition to hypernymy detection, BID0 show that a mixture of Gaussian distributions can also be used to discover multiple senses of each word. In our qualitative experiment, we show that DIVE can achieve a similar goal without fixing the number of senses before training the embedding. Recall that each dimension roughly corresponds to one topic. Given a query word, a higher embedding value on a dimension implies a higher likelihood of observing the word in the context of that topic. The embedding of a polysemous word would have high values on different groups of topics/dimensions. This allows us to discover the senses by clustering the topics/dimensions of the polysemous word. We use the embedding values as the features of each dimension, compute the pairwise similarity between dimensions, and apply spectral clustering BID41 to group topics, as shown in TAB9. See more implementation details in the supplementary materials. In word sense disambiguation tasks, it is usually challenging to determine how many senses/clusters each word should have. Many existing approaches fix the number of senses before training the embedding BID42 BID0. BID26 make the number of clusters approximately proportional to the diversity of the context, but the assumption does not always hold. Furthermore, the training process cannot capture different granularities of senses. For instance, race in the car context could share the same sense as race in the game topic because both mean contest, but race in the car context actually refers to the specific contest of speed. Therefore, they can also be viewed as separate senses (like the results in TAB9). This means the correct number of clusters is not unique, and methods which fix the number of clusters need to re-train the embedding many times to capture such granularity. In our approach, clustering dimensions is done after the training process of DIVE is completed, so it is fairly efficient to change the number of clusters, and hierarchical clustering is also an option. Similar to our method, BID31 also discover word senses by graph-based clustering. The main difference is that they cluster the top n words which are most related to the query word instead of topics. However, choosing the hyper-parameter n is difficult: a large n would make the graph clustering algorithm inefficient, while a small n would make less frequent senses difficult to discover. Most previous unsupervised approaches focus on designing better hypernymy scoring functions for sparse bag-of-words (SBOW) features. They are well summarized in the recent study of BID38. BID38 also evaluate the influence of different contexts, such as changing the window size of contexts or incorporating dependency parsing information, but neglect the scalability issues inherent to SBOW methods. A notable exception is the Gaussian embedding model BID45. The context distribution of each word is encoded as a multivariate Gaussian distribution, where the embeddings of hypernyms tend to have higher variance and overlap with the embeddings of their hyponyms.
However, since a Gaussian distribution is normalized, it is difficult to retain frequency information during the embedding process, and experiments on HyperLex BID47 demonstrate that a simple baseline relying only on word frequency can achieve good results. Follow-up work models contexts by a mixture of Gaussians BID0, relaxing the unimodality assumption, but achieves little improvement on hypernymy detection tasks. BID14 show that images retrieved by a search engine can be a useful source of information to determine the generality of lexicons, but such resources might not be available for some corpora, such as scientific literature. Order embedding BID44 is a supervised approach to encode many annotated hypernym pairs (e.g., the whole of WordNet BID23) into a compact embedding space, where the embedding of a hypernym should be smaller than the embedding of its hyponym in every dimension. Our method learns embeddings from raw text, where a hypernym embedding should be larger than the embedding of its hyponym in every dimension. Thus, DIVE can be viewed as an unsupervised and reversed form of order embedding. Other semi-supervised hypernymy detection methods aim to generalize from sets of annotated word pairs using raw text corpora. The goal of HyperScore BID27 is similar to our model: the embedding of a hypernym should be similar to its hyponym but with a higher magnitude. However, their training process relies heavily on annotated hypernym pairs, and the performance drops significantly when the amount of supervision is reduced. In addition to context distributions, previous work also leverages training data to discover useful text patterns indicating the is-a relation BID40 BID33, but it remains challenging to increase the recall of hypernymy detection because commonsense facts like cat is-a animal might not appear in the corpus. Non-negative matrix factorization (NMF) has a long history in NLP, for example in the construction of topic models BID29. Non-negative sparse embedding (NNSE) BID24 and BID10 indicate that non-negativity can make embeddings more interpretable and improve word similarity evaluations. Sparse NMF is also shown to be effective in cross-lingual lexical entailment tasks but does not necessarily improve monolingual hypernymy detection BID48. In our study, a new type of NMF is proposed, and the comprehensive experimental analysis demonstrates its state-of-the-art performance on unsupervised hypernymy detection. Compressing unsupervised SBOW models into a compact representation while preserving the inclusion, generality, and similarity signals which are important for hypernymy detection is challenging. Our experiments suggest that simple baselines such as accumulating k-means clusters and non-negative skip-grams do not lead to satisfactory performance on this task. To achieve this goal, we proposed an interpretable and scalable embedding method called distributional inclusion vector embedding (DIVE), which performs non-negative matrix factorization (NMF) on a weighted PMI matrix. We demonstrate that scoring functions which measure inclusion and generality properties in SBOW can also be applied to DIVE to detect hypernymy, and DIVE performs the best on average, slightly better than SBOW while using many fewer dimensions. Our experiments also indicate that unsupervised scoring functions which combine similarity and generality measurements work the best in general, but no one scoring function dominates across all datasets.
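To make the "NMF on a weighted PMI matrix" idea concrete, here is a minimal sketch using scikit-learn's NMF as a stand-in optimizer. The actual DIVE model is trained with a skip-gram-style objective that includes frequency weighting, so this unweighted positive-PMI version is only an approximation; the `shift` parameter and all names are our own.

```python
import numpy as np
from sklearn.decomposition import NMF

def dive_like_embedding(counts, n_dims=100, shift=0.0):
    """counts: (V x V) co-occurrence count matrix (dense numpy array)."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total      # word marginals
    p_c = counts.sum(axis=0, keepdims=True) / total      # context marginals
    pmi = np.log(np.maximum(counts / total, 1e-12) / (p_w * p_c))
    M = np.clip(pmi - shift, 0.0, None)                  # (shifted) positive PMI target
    model = NMF(n_components=n_dims, init='nndsvd', max_iter=300)
    W = model.fit_transform(M)                           # non-negative word vectors
    return W
```

With `shift = log(k)` this corresponds to the shifted-PPMI variants discussed earlier; the generality and inclusion scoring functions from the sketches above can then be applied directly to the rows of `W`.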
A combination of unsupervised DIVE with the proposed scoring functions produces new state-of-the-art performances on many datasets under the unsupervised setup. Finally, a qualitative experiment shows that the clusters of topics discovered by DIVE often correspond to word senses, which allows us to do word sense disambiguation without needing to know the number of senses before training the embeddings. In addition to the unsupervised approach, we also compare DIVE with semi-supervised approaches. When there are sufficient training data, there is no doubt that semi-supervised embedding approaches such as HyperNet BID40, the H-feature detector BID33, and HyperScore can achieve better performance than all unsupervised methods. However, in many domains such as scientific literature, there are often not many annotated hypernymy pairs (e.g., the Medical dataset). Since we are comparing an unsupervised method with semi-supervised methods, it is hard to fairly control the experimental setups and tune the hyper-parameters. In TAB10, we only show several performances which are copied from the original papers when training data are limited. As we can see, the performance of DIVE is roughly comparable to the previous semi-supervised approaches trained on a small amount of hypernym pairs. This demonstrates the robustness of our approach and the difficulty of generalizing hypernymy annotations with semi-supervised approaches. In TAB11, we show the most general words in DIVE under different queries as constraints. We also present the accuracy of judging which word is a hypernym (more general) given word pairs with hypernym relations in TAB1. The direction is classified correctly if the generality score is greater than 0 (the hypernym is indeed predicted as the more general word). For instance, summation difference (∆S) classifies correctly if |w_p|_1 − |w_q|_1 > 0. From the table, we can see that the simple summation difference performs better than SLQS Sub, and DIVE predicts directionality as well as SBOW. Notice that whenever we encounter an OOV word, the directionality is predicted randomly. If OOV words are excluded, the accuracy of predicting directionality using unsupervised methods can reach around 0.7-0.75. In HyperNet and WordNet, some hypernym relations are determined between phrases instead of words. Phrase embeddings are composed by averaging word embeddings or SBOW features. For WordNet, we assume the part-of-speech (POS) tags of the words are the same as that of the phrase. All part-of-speech (POS) tags in the experiments come from NLTK. The window size |W| of SBOW, DIVE, and GE is set to 20 (left 10 words and right 10 words). For DIVE, the number of epochs is 15, the learning rate is 0.001, the batch size is 128, the threshold of the PMI filter k_f is set to 30, and the ratio between negative and positive samples (k) is 1.5. The hyper-parameters of DIVE were decided based on the performance on the HyperNet training set. The window size of skip-grams (word2vec) is 10. The number of negative samples (k) in skip-grams is set to 5. When composing skip-grams into phrase embeddings, the average embedding is used. For Gaussian embedding (GE), the number of mixtures is 1, the number of dimensions is 100, the learning rate is 0.01, the lowest variance is 0.1, the highest variance is 100, the highest Gaussian mean is 10, and other hyper-parameters take the default values in https://github.com/benathi/word2gm. The hyper-parameters of GE were also decided based on the performance on the HyperNet training set.
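Returning briefly to the directionality evaluation described above, the test reduces to checking the sign of a generality score; a minimal sketch, assuming an embedding matrix `emb` and index pairs, with names of our choosing:

```python
import numpy as np

def direction_accuracy(pairs, emb, score=lambda p, q: p.sum() - q.sum()):
    """pairs: list of (hypernym_idx, hyponym_idx); emb: (V x L) matrix.
    The default score is the summation difference (dS); a pair is counted
    correct when the hypernym gets the larger generality value."""
    correct = sum(score(emb[h], emb[o]) > 0 for h, o in pairs)
    return correct / len(pairs)
```

Swapping in the 2-norm difference or an SLQS-style entropy difference for `score` reproduces the other directionality variants reported in the table.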
When determining the score between two phrases, we use the average score over every pair of tokens in the two phrases. The number of testing pairs N and the number of OOV word pairs are presented in TAB1. We use all the default hyper-parameters of the spectral clustering library in Scikit-learn. However, clustering on global features might group topics together based on the co-occurrence of words which are unrelated to the query word, and we want to make the similarity dependent on the query word. For example, a country topic should be clustered together with a city topic if the query word is place, but it makes more sense to group the country topic with the money topic if the query word is bank, as we did in the word sense disambiguation experiment in TAB9. This means we want to focus on the geographical meaning of country when the query is related to geography, and on the economic meaning of country when the query is about economics. To create a query-dependent similarity measurement, we only consider the embeddings of words which are related to the query word when preparing the features of dimensions. Specifically, given a query word q, the feature vector of the i-th dimension f(c_i, q) is defined as: f(c_i, q) = ⊕_j ( w_q[c_j] · (w[c_i])_{w ∈ C_j(n)} ), where w_q[c_j] is the value of the j-th dimension of the query word embedding, C_j(n) is the set of embeddings of the top n words in the j-th dimension, and the operator ⊕ means concatenation. This means that instead of considering all the words in the vocabulary, we only take the top n words of every dimension j (n is fixed at 100 in the experiment), weight the features based on how likely the query word is to be observed in dimension j (w_q[c_j]), and concatenate all the features together. That is, when measuring the similarity of dimensions, we only consider the aspects related to the query word (e.g., mostly considering words related to facility and money when the query word is bank). After the features of all dimensions are collected, we normalize the feature of each dimension to have norm 1, compute the pairwise similarity, and run spectral clustering to get the clustering results. A.5 EFFICIENT WAY TO COMPUTE ASYMMETRIC L1 (AL_1). Recall the definition of AL_1 above. FIG1 visualizes a simple example to illustrate the intuition behind the distance function, in terms of the penalty components a·d_q − d_p and d_p − a·d_q. By adding slack variables ζ and ξ, the problem can be converted into a linear programming problem: minimize w_0 Σ_i ζ_i + Σ_i ξ_i over (a, ζ, ξ), subject to ζ_i ≥ a·w_q[i] − w_p[i], ξ_i ≥ w_p[i] − a·w_q[i], and a, ζ_i, ξ_i ≥ 0.
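Two ways of computing AL_1 as reconstructed above can be sketched in a few lines. First, since the objective is convex and piecewise linear in the scaling factor a, its minimum is attained at a breakpoint a = w_p[i]/w_q[i] (or in the a → 0 limit), so a direct scan suffices:

```python
import numpy as np

def al1(w_q, w_p, w0=5.0):
    """Direct evaluation of the reconstructed AL1 distance; w_q, w_p are
    non-negative numpy arrays."""
    def objective(a):
        d = a * w_q - w_p
        return w0 * np.clip(d, 0.0, None).sum() + np.clip(-d, 0.0, None).sum()
    # Breakpoints of the piecewise-linear objective, plus the a -> 0 limit.
    candidates = [0.0] + [p / q for p, q in zip(w_p, w_q) if q > 0]
    return min(objective(a) for a in candidates)
```

Second, the equivalent linear program above can be handed to a generic LP solver; this sketch uses scipy and our own variable layout (a, then ζ, then ξ):

```python
from scipy.optimize import linprog

def al1_lp(w_q, w_p, w0=5.0):
    n = len(w_q)
    c = np.concatenate(([0.0], w0 * np.ones(n), np.ones(n)))   # 0*a + w0*sum(zeta) + sum(xi)
    # zeta_i >= a*w_q[i] - w_p[i]  <=>  a*w_q[i] - zeta_i <= w_p[i]
    A1 = np.hstack([w_q[:, None], -np.eye(n), np.zeros((n, n))])
    # xi_i >= w_p[i] - a*w_q[i]  <=>  -a*w_q[i] - xi_i <= -w_p[i]
    A2 = np.hstack([-w_q[:, None], np.zeros((n, n)), -np.eye(n)])
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([w_p, -w_p]),
                  bounds=[(0, None)] * (1 + 2 * n))
    return res.fun
```

Both versions agree up to solver tolerance; the breakpoint scan is the cheaper option in practice since it avoids building the constraint matrices.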
We propose a novel unsupervised word embedding which preserves the inclusion property in the context distribution and achieves state-of-the-art results on unsupervised hypernymy detection
1,457
scitldr
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' importance. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in the networks. Uncertainty is a natural way to identify what to remember and what to change as we continually learn, and thus to mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to. Humans can easily accumulate and maintain knowledge gained from previously observed tasks, and continuously learn to solve new problems or tasks. Artificial learning systems typically forget prior tasks when they cannot access all training data at once but are presented with task data in sequence. Overcoming these challenges is the focus of continual learning, sometimes also referred to as lifelong learning or sequential learning. Catastrophic forgetting refers to the significant drop in the performance of a learner when switching from a trained task to a new one. This phenomenon occurs because parameters trained on the initial task change in favor of learning new objectives. This is the reason that naive fine-tuning intuitively suffers from catastrophic forgetting. Given a network of limited capacity, one way to address this problem is to identify the importance of each parameter and penalize further changes to those parameters that were deemed to be important for the previous tasks. An alternative is to freeze the most important parameters and allow future tasks to only adapt the remaining parameters to new tasks. Such models rely on the explicit parametrization of importance. We propose here an implicit uncertainty-guided importance representation. Bayesian approaches to neural networks can potentially avoid some of the pitfalls of explicit parameterization of importance in regular neural networks. Bayesian techniques naturally account for uncertainty in parameter estimates. These networks represent each parameter with a distribution defined by a mean and variance over possible values drawn from a shared latent probability distribution. Variational inference can approximate posterior distributions using Monte Carlo sampling for gradient estimation. These networks act like ensemble methods in that they reduce the prediction variance, but only use twice the number of parameters present in a regular neural network. We propose the use of the predicted mean and variance of the latent distributions to characterize the importance of each parameter. We perform continual learning with Bayesian neural networks by controlling the learning rate of each parameter as a function of its uncertainty.
Figure 1 illustrates how posterior distributions evolve for certain and uncertain weight distributions while learning two consecutive tasks. Figure 1: Illustration of the evolution of weight distributions (uncertain weights adapt more quickly) when learning two tasks using UCB. (a) Weight parameters are initialized by distributions with mean and variance values randomly sampled from N(0, 0.1); as an example we show five color-coded parameters and plot their distributions. (b) Posterior distributions after learning task 1: while θ1 and θ2 exhibit lower uncertainties (more contribution to learning task 1), θ3, θ4, and θ5 have larger uncertainties, with the highest standard deviation in θ5, making them available to learn more tasks. (c) A second task is learned using higher learning rates for the previously uncertain parameters (θ3, θ4, and θ5), while learning rates for θ1 and θ2 are moderated according to their predicted low uncertainty after finishing task 1. The size of the arrows indicates the magnitude of the change of the distribution mean upon a gradient update. Intuitively, the more uncertain a parameter is, the more learnable it can be, and therefore larger gradient steps can be taken for it to learn the current task. As a hard version of this regularization technique, we also show that pruning, i.e., preventing the most important model parameters from any change and learning new tasks with the remaining parameters, can also be integrated into UCB. We refer to this method as UCB-P. First, we propose to perform continual learning with Bayesian neural networks and develop a new method which exploits the inherent measure of uncertainty therein to adapt the learning rate of individual parameters (Sec. 4). Second, we introduce a hard-threshold variant of our method that decides which parameters to freeze (Sec. 4.2). Third, in Sec. 5, we extensively validate our approach experimentally, comparing it to prior art both on single datasets split into different tasks, as well as for the more difficult scenario of learning a sequence of different datasets. Fourth, in contrast to most prior work, our approach does not rely on knowledge about task boundaries at inference time, which humans do not need and which might not always be available. We show in Sec. 6 that our approach naturally supports this scenario and does not require task information at test time, sometimes also referred to as a "single head" scenario for all tasks. We refer to the evaluation metric of a "single head" model without task information at test time as "generalized accuracy". Our code will be released upon acceptance. Conceptually, approaches to continual learning can be divided into the following categories: dynamic architectural methods, memory-based methods, and regularization methods. Dynamic architectural methods: In this setting, the architecture grows while keeping past knowledge fixed and storing new knowledge in different forms such as additional layers, nodes, or modules.
In this approach, the objective function remains fixed whereas the model capacity grows, often exponentially, with the number of tasks. Progressive networks were one of the earliest works in this direction and were successfully applied to reinforcement learning problems; the base architecture was duplicated and lateral connections added in response to new tasks. Dynamically Expandable Network (DEN) also expands its network by selecting drifting units and retraining them on new tasks. In contrast to our method, these approaches require the architecture to grow with each new task. Memory-based methods: In this regime, previous information is partially stored to be used later as a form of rehearsal. Gradient episodic memory (GEM) uses this idea to store data at the end of each episode to be used later to prevent gradient updates from deviating from their previous values. GEM also allows for positive backward knowledge transfer, i.e., an improvement on previously learned tasks, and it was the first method capable of learning using a single training example. Recent approaches in this category have mitigated forgetting by using external data combined with a distillation loss and/or confidence-based sampling strategies to select the most representative samples. Regularization methods: In these approaches, significant changes to the representation learned for previous tasks are prevented. This can be performed through regularizing the objective function or directly enforced on the weight parameters. Typically, an importance measure is engineered to represent the importance of each parameter. Inspired by Bayesian learning, in the elastic weight consolidation (EWC) method important parameters are those with the highest values in terms of the Fisher information matrix. In Synaptic Intelligence (SI) this parameter importance notion is engineered to correlate with the loss function: parameters that contribute more to the loss are more important. Similar to SI, Memory-aware Synapses (MAS) proposed an online way of computing importance, adaptive to the test set, using the change in the model outputs w.r.t. the inputs. While all the above algorithms are task-dependent, in parallel development to this work, follow-up work has recently investigated task-free continual learning by building upon MAS and using a protocol to update the weights instead of waiting until the tasks are finished. PackNet used iterative pruning to fully restrict gradient updates on important weights via binary masks. This method requires knowing which task is being tested in order to use the appropriate mask. PackNet also ranks weight importance by magnitude, which is not guaranteed to be a proper indicator of importance. HAT identifies important neurons by learning an attention vector conditioned on the task embedding to control gradient propagation. It maintains the information learned on previous tasks using an almost-binary mask per previous task. Bayesian approaches: The use of Bayesian approaches in learning neural networks has been studied for several decades. Several approaches have been proposed for Bayesian neural networks, based on, e.g., the Laplace approximation, Hamiltonian Monte Carlo, variational inference, and probabilistic backpropagation (Hernández-Lobato). Variational continual learning (VCL) uses Bayesian inference to perform continual learning, where the new posterior distribution is simply obtained by multiplying the previous posterior by the likelihood of the dataset belonging to the new task.
They also showed that by using a core-set, a small representative set of data from previous tasks, VCL can experience less forgetting. In contrast, we rely on Bayesian neural networks and use their predictive uncertainty to perform continual learning. Moreover, we do not use episodic memory or any other way to access or store previous data in our approach. Natural gradient descent methods: A fast natural gradient descent method for variational inference was introduced in prior work, in which the Fisher information matrix is approximated using the generalized Gauss-Newton method. In contrast, in our work, we use classic gradient descent. Although second-order optimization algorithms are proven to be more accurate than first-order methods, they add considerable computational cost. Two related lines of work both investigate the effect of natural gradient descent methods as an alternative to the classic gradient descent used in the VCL and EWC methods. GNG uses Gaussian natural gradients in the Adam optimizer in the framework of VCL because, as opposed to conventional gradient methods which operate in Euclidean space, natural gradients account for the changes in distributions induced by parameter updates in Riemannian space. Similar to VCL, they obtained their best performance by adding a coreset of previous examples. Further work introduces two modifications to VCL, called Natural-VCL (N-VCL) and VCL-Vadam. N-VCL uses a Gauss-Newton approximation to estimate the VCL objective function and uses a natural gradient method to exploit the Riemannian geometry of the variational posterior, scaling the gradient with an adaptive learning rate equal to σ⁻² obtained by approximating the Fisher information matrix in an online fashion. VCL-Vadam is a simpler version of N-VCL that trades off accuracy for simplicity; it uses Vadam to update the gradients by perturbing the weights with Gaussian noise using a reparameterization trick and scaling by σ⁻¹ instead of σ⁻². N-VCL and VCL-Vadam both use variational inference to adapt the learning rate within the Adam optimizer at every time step, whereas in our method below, gradient descent is used with a constant learning rate during each task, and the learning rate scales with uncertainty only after finishing a task. We show extensive comparisons with the state of the art on short and relatively long sequences of vision datasets with Bayesian convolutional neural networks, whereas VCL-Vadam only relies on multi-layer perceptron networks. We would also like to highlight that this is the first work which evaluates and shows the workings of convolutional Bayesian neural networks, rather than only fully connected MLP models, for continual learning. In this section, we review the Bayes-by-Backprop (BBB) framework, which was introduced to learn a probability distribution over network parameters. The original work showed a back-propagation-compatible algorithm which acts as a regularizer and yields comparable performance to dropout on the MNIST dataset. In Bayesian models, latent variables are drawn from a prior density p(w) and are related to the observations through the likelihood p(x|w). During inference, the posterior distribution p(w|x) is computed conditioned on the given input data. However, in practice, this probability distribution is intractable and is often estimated through approximate inference. Markov chain Monte Carlo (MCMC) sampling has been widely used and explored for this purpose; see the literature for the different methods in this category.
However, MCMC algorithms, despite providing guarantees of finding asymptotically exact samples from the target distribution, are not suitable for large datasets and/or large models, as they are bounded by speed and scalability issues. Alternatively, variational inference provides a faster solution to the same problem, in which the posterior is approximated using optimization rather than being sampled from a chain. Variational inference methods take advantage of fast optimization techniques such as stochastic methods or distributed methods, which allow them to explore data models quickly. See the literature for a complete review of the theory and for more discussion on how to use Bayes-by-Backprop (BBB) in convolutional neural networks. Let x ∈ R^n be a set of observed variables and w be a set of latent variables. A neural network, as a probabilistic model P(y|x, w), given a set of training examples D = (x, y), can output y belonging to a set of classes by using the set of weight parameters w. Variational inference aims to calculate this conditional probability distribution over the latent variables by finding the closest proxy to the exact posterior via an optimization problem. We first assume a family of probability densities over the latent variables w parametrized by θ, i.e., q(w|θ). We then find the closest member of this family to the true conditional probability of interest P(w|D) by minimizing the Kullback-Leibler (KL) divergence between q and P, which is equivalent to minimizing the variational free energy or maximizing the expected lower bound: θ* = arg min_θ KL[q(w|θ) || p(w|D)]. The objective function can be written as: L(θ, D) = KL[q(w|θ) || p(w)] − E_{q(w|θ)}[log p(D|w)]. Eq. 2 can be approximated using N Monte Carlo samples w_i from the variational posterior: L(θ, D) ≈ Σ_{i=1}^{N} [log q(w_i|θ) − log p(w_i) − log p(D|w_i)]. We assume q(w|θ) to have a Gaussian pdf with diagonal covariance, parametrized by θ = (µ, ρ). A sample weight from the variational posterior can be obtained by sampling from a unit Gaussian and reparametrizing as w = µ + σ • ε, where ε is noise drawn from the unit Gaussian and • is pointwise multiplication. The standard deviation is parametrized as σ = log(1 + exp(ρ)) and is thus always positive. For the prior, as suggested by the BBB work, a scale mixture of two Gaussian pdfs is chosen; both are zero-centered but have different variances σ₁² and σ₂². The uncertainty obtained for every parameter has been successfully used in model compression and uncertainty-based exploration in reinforcement learning. In this work we propose to use this framework to learn sequential tasks without forgetting using per-weight uncertainties. In this section, we introduce our Uncertainty-guided Continual learning approach with Bayesian neural networks (UCB), which exploits the estimated uncertainty of the parameters' posterior distribution to regulate the change in "important" parameters, either in a soft way (Section 4.1) or by setting a hard threshold (Section 4.2). A common strategy to perform continual learning is to reduce forgetting by regularizing further changes in the model representation based on parameters' importance. In UCB the regularization is performed via the learning rate, such that the learning rate of each parameter, and hence its gradient update, becomes a function of its importance. As shown in the following equations, we scale the learning rates of µ and ρ for each parameter distribution inversely proportionally to its importance Ω, to reduce changes in important parameters while allowing less important parameters to alter more in favor of learning new tasks: α_µ ← α_µ / Ω_µ and α_ρ ← α_ρ / Ω_ρ.
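Before specifying Ω, it may help to collect the BBB ingredients above (the Monte Carlo objective, the reparameterization w = µ + σ • ε, and the scale-mixture prior) into one compact PyTorch sketch. Here `forward(w, x)` is a hypothetical function producing logits from a flat weight sample, and all names and defaults are illustrative rather than the authors' code.

```python
import math
import torch
import torch.nn.functional as F

def log_gauss(w, mu, sigma):
    # elementwise log N(w; mu, sigma^2)
    return -0.5 * ((w - mu) / sigma) ** 2 - torch.log(sigma) - 0.5 * math.log(2 * math.pi)

def bbb_loss(mu, rho, x, y, forward, pi_mix=0.5, s1=1.0, s2=0.001, n_samples=10):
    sigma = torch.log1p(torch.exp(rho))                  # sigma > 0 by construction
    total = 0.0
    for _ in range(n_samples):
        w = mu + sigma * torch.randn_like(mu)            # reparameterized posterior sample
        log_q = log_gauss(w, mu, sigma).sum()            # log q(w | theta)
        zeros = torch.zeros_like(w)
        log_p = torch.logaddexp(                          # per-weight scale-mixture prior
            math.log(pi_mix) + log_gauss(w, zeros, torch.tensor(s1)),
            math.log(1 - pi_mix) + log_gauss(w, zeros, torch.tensor(s2))).sum()
        nll = F.cross_entropy(forward(w, x), y, reduction='sum')
        total = total + (log_q - log_p + nll)            # Monte Carlo estimate of Eq. 3
    return total / n_samples
```

Backpropagating through this loss yields gradients for both µ and ρ, which is exactly where the per-parameter learning rates introduced above come into play.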
The core idea of this work is to base the definition of importance on the well-defined uncertainty in the parameter distributions of Bayesian neural networks, i.e., setting the importance to be inversely proportional to the standard deviation σ, which represents the parameter uncertainty in the Bayesian neural network: Ω ∝ 1/σ. We explore different options for setting Ω in our ablation study presented in Section A.2 of the appendix, Table 1. We empirically found that Ω_µ = 1/σ and not adapting the learning rate for ρ (i.e., Ω_ρ = 1) yields the highest accuracy and the least forgetting. The key benefit of UCB with the learning rate as the regularizer is that it neither requires additional memory, as opposed to the pruning technique, nor tracking of the change in parameters with respect to the previously learned tasks, as needed in common weight regularization methods. More importantly, this method does not need to be aware of task switching, as it only needs to adjust the learning rates of the means in the posterior distribution based on their current uncertainty. The complete algorithm for UCB is shown in Algorithm 1, with the parameter update function given in Algorithm 2. In this section, we introduce a variant of our method, UCB-P, which is related to recent efforts in weight pruning in the context of reducing inference computation and network compression. More specifically, weight pruning has recently been used in continual learning, where the goal is to continue learning multiple tasks using a single network's capacity. This was accomplished by freeing up parameters deemed to be unimportant to the current task according to their magnitude. Forgetting is prevented in pruning by saving a task-specific binary mask of important vs. unimportant parameters. Here, we adapt pruning to Bayesian neural networks. Specifically, we propose a different criterion for measuring importance: the statistically grounded uncertainty defined in Bayesian neural networks. Unlike regular deep neural networks, in a BBB model weight parameters are represented by probability distributions parametrized by their mean and standard deviation. Similar to prior work, in order to take into account both mean and standard deviation, we use the signal-to-noise ratio (SNR) for each parameter, defined as SNR = |µ| / σ. SNR is a commonly used measure in signal processing to distinguish "useful" information from unwanted noise contained in a signal. In the context of neural models, the SNR can be thought of as indicative of parameter importance: the higher the SNR, the more effective or important the parameter is to the model predictions for a given task.
Algorithm 1 Uncertainty-guided Continual Learning with Bayesian Neural Networks (UCB)
1: Require: training data for all tasks D = (x, y); µ (mean of posterior); ρ; σ1 and σ2 (std for the scaled mixture Gaussian pdf of the prior); π (weighting factor for the prior); N (number of samples in a mini-batch); M (number of minibatches per epoch); initial learning rate (α0)
2: αµ = αρ = α0
3: for every task do
4:   repeat
5:     ε ∼ N(0, I)
6:     σ = log(1 + exp(ρ))   (ensures σ is always positive)
7:     w = µ + σ • ε, where w = {w1, . . ., wi, . . ., wN} are the posterior samples of the weights
8:     µ ← µ − αµ ∇µ L_BBB
13:    ρ ← ρ − αρ ∇ρ L_BBB
14:  until loss plateaus
15:  αµ, αρ ← LearningRateUpdate(αµ, αρ, σ, µ)   (see Algorithm 2 for UCB and Algorithm 3 for UCB-P)
16: end for
Algorithm 2 LearningRateUpdate in UCB
1: function LearningRateUpdate(αµ, αρ, σ)
2:   for each parameter do
3:     Ωµ ← 1/σ
4:     Ωρ ← 1
5:     αµ ← αµ/Ωµ
6:     αρ ← αρ/Ωρ
7:   end for
8: end function
Algorithm 3 LearningRateUpdate in UCB-P
1: function LearningRateUpdate(αµ, αρ, σ, µ)
2:   for each parameter j in each layer l do
3:     Ω ← |µ|/σ   (signal-to-noise ratio)
4:     if Ω[j] ∈ top p% of Ωs in l then
5:       αµ = αρ = 0
6:     end if
7:   end for
8: end function
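Minimal sketches of Algorithms 2 and 3 are given below, assuming flat tensors `mu` and `rho` for one layer; these are illustrative reconstructions under the stated update rules, not the authors' released code.

```python
import torch

def learning_rate_update_ucb(rho, base_lr):
    sigma = torch.log1p(torch.exp(rho))       # per-parameter uncertainty
    lr_mu = base_lr / (1.0 / sigma)           # Omega_mu = 1/sigma  =>  lr = base_lr * sigma
    lr_rho = torch.full_like(rho, base_lr)    # Omega_rho = 1 (left unchanged)
    return lr_mu, lr_rho

def learning_rate_update_ucb_p(mu, rho, base_lr, top_p=50.0):
    sigma = torch.log1p(torch.exp(rho))
    snr = mu.abs() / sigma                    # Omega = |mu| / sigma
    k = max(1, int(top_p / 100.0 * snr.numel()))
    thresh = torch.topk(snr.flatten(), k).values.min()
    frozen = snr >= thresh                    # top-p% by SNR get lr = 0
    lr = torch.where(frozen, torch.zeros_like(snr), torch.full_like(snr, base_lr))
    return lr, lr                             # applied to both mu and rho

# usage in a manual per-parameter SGD step:
#   mu.data  -= lr_mu  * mu.grad
#   rho.data -= lr_rho * rho.grad
```

Note that the UCB variant makes certain (low-σ) parameters take small steps rather than freezing them outright, which is what allows it to run without task boundaries at test time.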
UCB-P, as shown in Algorithms 1 and 3, is performed as follows: for every layer, convolutional or fully connected, the parameters are ordered by their SNR value and those with the lowest importance are pruned (set to zero). The pruned parameters are marked using a binary mask so that they can be used later in learning new tasks, whereas the important parameters remain fixed throughout training on future tasks. Once a task is learned, an associated binary mask is saved which will be used during inference to recover the key parameters, and hence the exact performance, for the desired task. The memory overhead per parameter of encoding the mask, as well as saving it on disk, is as follows: assuming we have n tasks to learn using a single network, the total number of bits required to encode an accumulated mask for a parameter is at most log2(n) bits, assuming a parameter is deemed important from task 1 onward and keeps being encoded in the mask. Datasets: We evaluate our approach in two common scenarios for continual learning: 1) class-incremental learning of a single or two randomly alternating datasets, where each task covers only a subset of the classes in a dataset, and 2) continual learning of multiple datasets, where each task is a dataset. We use Split MNIST with 5 tasks (5-Split MNIST) and Permuted MNIST for class-incremental learning, with experimental settings similar to prior work. Furthermore, to gain a better understanding of our method, we evaluate our approach on continually learning a sequence of 8 datasets with different distributions, using the identical sequence as in prior work, which includes FaceScrub, MNIST, CIFAR100, NotMNIST, SVHN, CIFAR10, TrafficSigns, and FashionMNIST. Details of each dataset are summarized in Table 4 in the appendix. No data augmentation of any kind has been used in our analysis. Baselines: Within the Bayesian framework, we compare to three models which do not incorporate the importance of parameters, namely fine-tuning, feature extraction, and joint training. In fine-tuning (BBB-FT), training continues upon arrival of new tasks without any forgetting-avoidance strategy. Feature extraction, denoted as BBB-FE, refers to freezing all layers in the network after training the first task and training only the last layer for the remaining tasks. In joint training (BBB-JT) we learn all the tasks jointly in a multitask-learning fashion, which serves as the upper bound for average accuracy on all tasks, as it does not adhere to the continual learning scenario. We also perform the counterparts of FT, FE, and JT using ordinary neural networks, denoted as ORD-FT, ORD-FE, and ORD-JT.
From prior work, we compare with state-of-the-art approaches including Elastic Weight Consolidation (EWC), Incremental Moment Matching (IMM), Learning Without Forgetting (LWF), Less-Forgetting Learning (LFL), PathNet, Progressive Neural Networks (PNNs), and Hard Attention Mask (HAT), using publicly provided implementations. On Permuted MNIST, results for SI are reported from the literature. On Split and Permuted MNIST, results for VCL are obtained using the originally provided code, whereas results for VCL-GNG and VCL-Vadam are reported from the original works without re-implementation. Because our method lies in the regularization-based regime, we only compare against baselines which do not benefit from episodic or coreset memory. Hyperparameter tuning: Unlike commonly used tuning techniques which use a validation set composed of all classes in the dataset, we only rely on the first two tasks and their validation sets, similar to the setup in prior work. In all our experiments we use a 0.15 split for the validation set on the first two tasks. After tuning, training starts from the beginning of the sequence. Our scheme is different from some prior work, where the models are trained on the first (e.g., three) tasks for validation, training is then restarted for the remaining tasks, and the reported performance covers only the remaining tasks. Training details: It is important to note that in all our experiments, no pre-trained model is used. We used stochastic gradient descent with a batch size of 64 and a learning rate of 0.01, decaying it by a factor of 0.3 once the loss plateaued. Dataset splits and batch shuffling are identical in all UCB experiments and all baselines. Pruning procedure and mask size: Once a task is learned, we compute the performance drop, for a set of arbitrary pruning percentages, from the maximum training accuracy achieved when no pruning is applied. The pruning portion is then chosen using a threshold beyond which the performance drop is not accepted. The mask size is chosen without knowledge of how many tasks will be learned in the future. Upon learning each task we used a uniform distribution of pruning ratios (50-100%) and picked the ratio that resulted in at most 1%, 2%, and 3% forgetting for the MNIST, CIFAR, and 8-tasks experiments, respectively. We did not tune this parameter because in our hyperparameter tuning we only assume access to the validation sets of the first two tasks. Parameter regularization and importance measurement: Table 1 ablates different ways to compute the importance Ω of a parameter in Eq. 4 and 5. As shown in Table 1, the configuration that yields the highest accuracy and the least forgetting (maximum BWT) occurs when the learning rate regularization is performed only on µ of the posteriors, using Ω_µ = 1/σ as the importance and Ω_ρ = 1. Performance measurement: Let n be the total number of tasks. Once all are learned, we evaluate our model on all n tasks. ACC is the average test classification accuracy across all tasks. To measure forgetting we report backward transfer, BWT, which indicates how much learning new tasks has influenced the performance on previous tasks. While BWT < 0 directly reports catastrophic forgetting, BWT > 0 indicates that learning new tasks has helped the preceding tasks. Formally, ACC and BWT are defined as ACC = (1/n) Σ_{i=1}^{n} R_{i,n} and BWT = (1/(n−1)) Σ_{i=1}^{n−1} (R_{i,n} − R_{i,i}), where R_{i,n} is the test classification accuracy on task i after sequentially finishing learning the n-th task. Note that in UCB-P, R_{i,i} refers to the test accuracy on task i before pruning and R_{i,n} after pruning, which is equivalent to the end-of-sequence performance.
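The two metrics, as written in the reconstructed formulas above, reduce to a few lines given the accuracy matrix R, where R[i, j] is the test accuracy on task i after learning task j:

```python
import numpy as np

def acc_bwt(R):
    """ACC = mean final accuracy over tasks; BWT = mean change on each earlier
    task between the moment it was learned and the end of the sequence."""
    n = R.shape[0]
    acc = R[:, n - 1].mean()
    bwt = np.mean([R[i, n - 1] - R[i, i] for i in range(n - 1)])
    return acc, bwt
```

A negative `bwt` quantifies catastrophic forgetting, while a positive value indicates beneficial backward transfer, matching the interpretation given above.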
In Section 6, we show that our UCB model can be used when task labels are not available at inference time, by training it with a "single head" architecture whose output layer covers the sum of the numbers of classes of all tasks. We refer to the ACC measured in this scenario as "Generalized Accuracy". We first present our results for class-incremental learning of MNIST (5-Split MNIST), in which we learn the digits 0-9 in five tasks with 2 classes at a time, in the 5 pairs 0/1, 2/3, 4/5, 6/7, and 8/9. Table 2a shows the results for reference baselines in Bayesian and non-Bayesian neural networks, including fine-tuning (BBB-FT, ORD-FT), feature extraction (BBB-FE, ORD-FE), and joint training (BBB-JT, ORD-JT), averaged over 3 runs; standard deviations are given in Table ?? in the appendix. Although MNIST is an "easy" dataset, we observe throughout all experiments that Bayesian fine-tuning and joint training perform significantly better than their counterparts, ORD-FT and ORD-JT. For Bayesian methods, we compare against VCL and its variations, named VCL with Variational Adam (VCL-Vadam) and VCL with Adam and Gaussian natural gradients (VCL-GNG). For non-Bayesian methods, we compare against HAT, IMM, and EWC (EWC can be regarded as Bayesian-inspired). VCL-Vadam (ACC=99.17%) appears to outperform VCL (ACC=98.20%) and VCL-GNG (ACC=96.50%) in average accuracy. However, a full comparison is not possible because forgetting was not reported for Vadam and GNG. Nevertheless, UCB (ACC=99.63%) is able to surpass all the baselines, including VCL-Vadam, in average accuracy, while in zero forgetting it is on par with HAT (ACC=99.59%). We also report results on incrementally learning MNIST in two tasks (2-Split MNIST) in Table 8 in the appendix, where we compare against PackNet, HAT, and LWF; PackNet, HAT, UCB-P, and UCB all attain zero forgetting, while UCB has marginally higher accuracy than all others. Permuted MNIST is a popular variant of the MNIST dataset for evaluating continual learning approaches, in which each task is a random permutation of the original MNIST pixels. Following the literature, we learn a sequence of 10 random permutations and report the average accuracy at the end. Table 2b shows ACC and BWT of UCB and UCB-P in comparison to state-of-the-art models, using a small and a large network with 0.1M and 1.9M parameters, respectively (architecture details are given in Section A.2 of the appendix). The accuracy achieved by UCB (ACC=91.44 ± 0.04%) using the small network outperforms the ACC reported for SI (ACC=86.0%) and EWC (ACC=88.2%), while HAT attains a slightly better performance (ACC=91.6%). Comparing the average accuracies reported for VCL-Vadam (ACC=86.34%) and VCL-GNG (ACC=90.50%), as well as the results obtained for VCL (ACC=88.80%), shows that UCB, with BWT=(0.03% ± 0.00%), is able to outperform the other Bayesian approaches in accuracy while forgetting significantly less compared to VCL with BWT=−7.9%. While we do not experiment with memory in this work, it is not surprising that adding memory to most approaches would improve their performance significantly, as it allows looking into past tasks; e.g., ACC=94.37% has been reported for VCL-GNG when adding a memory of size 200. Next, we compare the results for the larger network (1.9M). While HAT and UCB both have zero forgetting, UCB, reaching ACC=97.42 ± 0.01%, performs better than all baselines, including HAT, which obtains ACC=97.34 ± 0.05% using 1.9M parameters.
We also observe again that BBB-FT, despite not being specifically penalized to prevent forgetting, exhibits reasonable negative BWT values, performing better than the IMM and LWF baselines. It is close to joint training, BBB-JT, with ACC=98.1%, which can be seen as an upper bound. In this experiment, we randomly alternate between class-incremental learning of CIFAR10 and CIFAR100. Both datasets are divided into 5 tasks, each with 2 and 20 classes per task, respectively. Table 2c presents ACC and BWT obtained with UCB-P, UCB, and three BBB reference methods, compared against various continual learning baselines. Among the baselines presented in Table 2c, PNN and PathNet are the only zero-forgetting-guaranteed approaches. It is interesting to note that in this setup, some baselines (PathNet, LWF, and LFL) do not perform better than the naive accuracy achieved by feature extraction. PathNet suffers from a bad pre-assignment of the network's capacity per task, which causes poor performance on the initial task from which it never recovers. IMM performs almost similarly to fine-tuning in ACC, yet forgets more. PNN, EWC, and HAT are the only baselines that perform better than BBB-FE and BBB-FT. EWC and HAT are both allowed to forget by construction; however, HAT shows zero-forgetting behavior. While EWC is outperformed by both of our UCB variants, HAT exhibits 1% better ACC over UCB-P. Despite having slightly higher forgetting, the overall accuracy of UCB is higher, reaching 79.4%. Finally, we present our results for continual learning of 8 tasks using UCB-P and UCB in Table 2d. Similar to the previous experiments, we look at both ACC and BWT obtained for UCB-P, UCB, the BBB references (FT, FE, JT), as well as various baselines. Considering the ACC achieved by BBB-FE or BBB-FT (58.1%) as a lower bound, we observe again that some baselines are not able to do better than BBB-FT, including LFL, PathNet, LWF, IMM, and EWC, while PNN and HAT remain the only strong baselines for our UCB-P and UCB approaches. UCB-P again outperforms PNN, by 3.6% in ACC. HAT exhibits only −0.1% BWT, but our UCB achieves 2.4% higher ACC.
6 SINGLE HEAD AND GENERALIZED ACCURACY OF UCB
UCB can be used even if the task information is not given at test time. For this purpose, at training time, instead of using a separate fully connected classification head for each task, we use a single head with the total number of outputs for all tasks. For example, in the 8-dataset experiment we only use one head with 293 output classes, rather than using 8 separate heads, during training and inference time. Table 3 reports the change in performance for UCB when going from training with multiple heads to a single head. The ACC reduction is 0.3%, 2.6%, 5.1%, and 4.1% for the 2-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and sequence-of-8-tasks experiments, respectively. We evaluated UCB and BBB-FT with a more challenging metric where the prediction space covers the classes across all the tasks. Hence, confusion of similar class labels across tasks can be measured. Performance under this condition is reported as Generalized ACC in Table 3, in columns 2-3. We observe a small performance reduction in going from ACC to Generalized ACC, suggesting non-significant confusion caused by the presence of a larger number of classes at test time. The performance degradation from ACC to Generalized ACC is 0.2%, 2.6%, 3.1%, and 3.1% for 2-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively.
This shows that UCB can perform competitively in more realistic conditions, such as the unavailability of task information at test time. We believe the main insight of our approach is that instead of computing additional measurements of importance, which are often task-, input-, or output-dependent, we directly use the predicted weight uncertainty to find important parameters. We can freeze them using a binary mask, as in UCB-P, or regularize changes conditioned on current uncertainty, as in UCB. In this work, we propose a continual learning formulation with Bayesian neural networks, called UCB, that uses uncertainty predictions to perform continual learning: important parameters can be either fully preserved through a saved binary mask (UCB-P) or allowed to change conditioned on their uncertainty for learning new tasks (UCB). We demonstrated how the probabilistic uncertainty distributions per weight are helpful for continually learning short and long sequences of benchmark datasets, compared against baselines and prior work. We show that UCB performs superior to or on par with state-of-the-art models such as HAT across all the experiments. Choosing between the two UCB variants depends on the application scenario: while UCB-P enforces no forgetting after the initial pruning stage by saving a small binary mask per task, UCB does not require additional memory and allows for more learning flexibility in the network by allowing small amounts of forgetting to occur. UCB can also be used in a single-head setting, where the right subset of classes belonging to the task is not known during inference, leading to a competitive model that can be deployed where it is not possible to distinguish tasks in a continuous stream of data at test time.
A APPENDIX
A.1 DATASETS
Table 4 shows a summary of the datasets utilized in our work along with their sizes and numbers of classes. In all the experiments we resized images to 32 × 32 × 3 if necessary. For datasets with monochromatic images, we replicate the image across all RGB channels.
Table 4: Datasets (number of classes / train size / test size):
MNIST: 10 / 60,000 / 10,000
CIFAR100: 100 / 50,000 / 10,000
NotMNIST: 10 / 16,853 / 1,873
SVHN: 10 / 73,257 / 26,032
CIFAR10: 10 / 50,000 / 10,000
TrafficSigns: 43 / 39,209 / 12,630
FashionMNIST: 10 / 60,000 / 10,000
In this section we take a closer look at elements of our UCB model on MNIST and evaluate variants of parameter regularization and importance measurement, as well as the effect of the number of samples drawn from the posited posterior. Bayes-by-Backprop (BBB) hyperparameters: Table 5 shows the search space for the hyperparameters in the BBB algorithm, which we used for tuning on the validation set of the first two tasks.
Table 5: Search space for hyperparameters in BBB:
−log σ1 ∈ {0, 1, 2}; −log σ2 ∈ {6, 7, 8}; π ∈ {0.25, 0.5, 0.75}
Network architecture: For the Split MNIST and Permuted MNIST experiments, we have used a two-layer perceptron with 1200 units. Because there are more parameters in our Bayesian neural network compared to its equivalent regular neural network, we ensured a fair comparison by matching the total number of parameters between the two to be 1.9M unless otherwise stated. For the multiple-datasets learning scenario, as well as the alternating incremental CIFAR10/100 datasets, we have used a ResNet18 Bayesian neural network with 7.1-11.3M parameters, depending on the experiment.
However, the majority of the baselines provided in this work were originally developed using variants of the AlexNet structure, and altering that, e.g., to ResNet18, resulted in degraded performance relative to their reported and experimented performance, as shown in Table 6. Therefore, we kept the architecture for the baselines as AlexNet and ours as ResNet18, and only matched their numbers of parameters to ensure equal capacity across the different approaches. Table 6: Continual learning on CIFAR10/100 using AlexNet and ResNet18 for UCB (our method) and HAT. BWT and ACC in %. All results are (re)produced by us. Number of Monte Carlo samples: UCB is made robust to random noise by using multiple samples drawn from the posteriors. Here we explore different numbers of samples and their effect on the final ACC and BWT. We have used Ω_µ = 1/σ as the importance, and regularization has been performed on the mean values only. Following the results in Table 7, we chose the number of samples to be 10 for all experiments. Here we include some additional results, such as Table 8 for 2-Split MNIST, and some complementary results for tables in the main text: Tables ??, 9, and 10 include standard deviations for the results shown in Tables 2a, 2b, and 2c, respectively.
A regularization-based approach for continual learning using Bayesian neural networks to predict parameters' importance
1,458
scitldr
Humans have a natural curiosity to imagine what it feels like to exist as someone or something else. This curiosity becomes even stronger for the pets we care for. Humans cannot truly know what it is like to be our pets, but we can deepen our understanding of what it is like to perceive and explore the world like them. We investigate how wearables can offer people animal perspective-taking opportunities to experience the world through animal senses that differ from those biologically natural to us. To assess the potential of wearables in animal perspective-taking, we developed a sensory-augmenting wearable that gives wearers cat-like whiskers. We then created a maze exploration experience where blindfolded participants utilized the whiskers to navigate the maze. We draw on animal behavioral research to evaluate how the whisker activity supported authentically cat-like experiences, and discuss the implications of this work for future learning experiences. Posthumanist philosophies characterize the human body as "the original prosthesis we all learn to manipulate", and suggest the idea that augmenting or substituting aspects of this prosthesis is the normal progression for humanity. Technology allows humans to enhance senses that may be impaired, and to extend our bodies with added senses beyond what we would otherwise be biologically limited by-giving humans the ability to improve their quality of life. " In short, we are cyborgs". Scholars have investigated how immersive virtual environments can enhance social perspective-taking, and computer-augmented, embodied perspective-taking has been shown to encourage a productive "learning stance" and to enhance both conceptual learning and engagement. Some environmental education scholars and indigenous educational scholars have suggested that building relational ties to non-human actors in nature may contribute to environmental and biology education. In a few cases, educators have asked learners to take on the embodied experiences of insects such as bees and animals such as polar bears. Danish found that children enacting a computer-augmented pollination activity embodying the roles of bees helped them learn nuances of individual and aggregate bee behavior; Lyons and colleagues found that wearable polar bear paws that simulated the feeling of traversing melting polar ice enabled people to show an empathetic understanding of the impacts of climate change. For many people, the most common experience they will have with entities who have different sensory capabilities is through everyday interaction with pets or neighborhood animals. For example, in noticing that our house cat is navigating a dark space where we would likely bump into something, we may recognize the limits of our own senses and consider how our pets' experiences are both similar to and different from our own. Our work explores how embodied technology can mediate human experiences in ways that offer people opportunities to explore and relate to the animal-like behaviors of their pets. We present the design of a cat-inspired whiskers wearable, the Whisker Beard, and an embodied navigation activity that provided a firsthand perspective-taking experience for participants curious about what it might be like to have whiskers. In addition, we discuss our philosophy of what it means for an animal-imitating experience to be authentic and we present the evaluation framework we used to understand how our whiskers activity encouraged participants to behave like cats. 
Our study addresses two research questions: RQ1: In what ways can we create technologies and environments that remediate human experiences to be like those of non-humans? RQ2: What are humans' impressions of these technologically remediated experiences? In this paper we describe the design of a sensory augmentation whiskers wearable; the creation of a maze exploration activity for testing the experience of wearing whiskers; our analysis methods for evaluating the authenticity of an animal-like experience; and outline opportunities to extend this work, as well as discuss its implications. Our technology, experience design, and analysis are motivated by a desire to re-shape science and science education in light of feminist critiques and visions of those fields. Whereas science and science education today tend to emphasize distance, objectivity, and dispassion, we strive to create spaces for discovery and learning that also include closeness, subjectivity, and emotion and that thereby enable learners to author identities in science based on more engaged and connected ways of knowing. The overall approach of the project is to investigate how affection for and curiosity about pets can catalyze scientific investigations and engineering for young people and their families that are based on empathy and perspective-taking, both in and out of schools. We see value in using wearable and mixed-reality technologies to provide humans with the ability to experience the lives of other beings. In particular, this would allow humans to experience what their pets' senses might be like, and thereby facilitate learning experiences that encourage empathy and perspective-taking. Once we can support interspecies perspective-taking, we then wish to encourage participants to conduct scientific inquiry within that intersubjective sensational realm - a realm which the German biologist Jakob von Uexküll called "Umwelt". This agenda has the potential to unify efforts to advance scientific and social-emotional education. Humans have strong emotional attachments to their pets and these human-animal bonds coincide with higher amounts of empathy. This can motivate people's curiosities about their pets' lives and experiences. One technological and philosophical challenge involved in pursuing our agenda is the intrinsic disconnect between humans' experiences and those of other species. People only know what it is like to be human, and are unable to know on a phenomenological level what it is truly like to be their pets. Though people cohabitate with their pets, on a biological level they see, hear, smell, taste, and experience the world differently from them - ranging from slight variations of senses to things incomprehensible and alien like sonar and magnetic field detection. Despite this challenge, most pet owners believe that their pets feel something, even if they cannot fully understand what they feel. People naturally evaluate animal behavior and experience through a human lens. In a thought exercise of imagining what it is like to be a bat, Nagel highlights the mind-body problem he encounters: "... I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind". While we will likely never solve the mind-body problem of humans truly understanding their pets' experiences, we can address the "body" aspect of the problem and use it to drive thought experiments about the lives of pets.
These thought exercises can give people a better sense of animals' lives by acknowledging the biological differences in how their pets see, hear, and feel the world. In this study, we present the results of an early step toward feminist science and science education: the design and evaluation of a wearable device meant to offer humans the experience of exploring an environment using whiskers. This first step is intended both to elucidate how new technologies can offer trans-species sensory experiences, and to show how we might assess the validity of those experiences (i.e., the extent to which they immerse humans in the reality of another species) by comparing humans' behaviors in the new technologically-mediated umwelt with the behaviors of the species the umwelt natively describes. Wearables and mixed reality technologies have been applied in a wide variety of domains ranging from educational contexts, medical settings, natural environments, performances, and museums. The resulting hybrids of wearables and mixed reality technologies are sensory augmentation devices. Sensory augmentation devices give humans the ability to experience phenomena that they are physically unable to process, as well as some phenomena that are simply unnatural to the human experience. The field of sensory substitution and augmentation enables humans to use existing senses to substitute for the ones that they cannot experience, and to augment senses they already have to make them more powerful. For example, sensory substitution has been applied as a method for offering hearing- and/or visually-impaired people additional senses to substitute for the one in which they have an impairment. These technologies range from devices that can be implanted within the body, sending direct signals to internal mechanisms in the brain, to external substitute devices that provide tactile feedback on the surface of the skin. In one example of tactile feedback, researchers developed a non-invasive "vibratory vest" for deaf and hearing-impaired individuals that processes auditory information and converts it into vibrotactile feedback on the wearer's torso. Our work builds on approaches that use these technologies to provide physical, sensory feedback experiences for the wearer. Wearables have the ability to provide hands-on, interactive, and embodied learning experiences. There is research to suggest that wearable technologies in the classroom can increase engagement and improve student attitudes towards STEM activities. One of the pedagogical affordances of wearable technologies is the ability to gather contextual information from a firsthand account. Some uses of wearables in education focus on understanding and quantifying the self, such as: calculating our heart rates, monitoring how many steps we have taken, and providing feedback about our emotional states. Other approaches embed the quantifying potential of wearables in scientific inquiry and discourse. In contrast, our research focuses on how wearable technologies can help people understand experiences beyond the self; more specifically, the potential for wearable technologies to foster empathy by helping humans understand the experiences of others. Incorporating empathy and perspective-taking into scientific question-asking is deeply rooted in feminist educational theory and practice. These practices offer a more inclusive view of what it means to actively participate in science, and promote empathy as a valued part of scientific discovery.
An example of a wearable device aimed to motivate empathy is a series of mushroom foraging tools designed to build more intimate relationships between humans and the environments they are probing by connecting them physically to the environment. Nobel laureate Barbara McClintock cited her ability to get "a feel for the organism" as influential in supporting her discoveries in the area of maize cytogenetics. McClintock has said, "I know my corn plants intimately, and I find it a great pleasure to know them," a sentiment we believe could inform future wearable-mediated science education. When interacting with non-human animals, it is easy to notice how their sensory capabilities differ from humans'. For example, people may notice their cat hearing or seeing something without hearing or seeing it themselves. In addition to animal companions' sharper senses, people may also notice qualitative differences in how they experience the environment, such as when their pets deftly navigate through small spaces. People may also notice and wonder about the presence of physical characteristics that humans lack, such as whiskers or purring behaviors. Our work in this paper is motivated by human curiosity about these differences between species and explores how taking on the physical or perceptual characteristics of an animal helps people to understand it. In this study, we specifically focus on how participants' behavior compares to navigation and foraging techniques that animals exhibit in the wild and in controlled laboratory maze experiments. An animal's ability to gather environmental information and make decisions in real-time is key for its survival. Animals must constantly make decisions and consider tradeoffs during activities like foraging for food, avoiding predators, and searching for a mate. These trade-offs describe what is called the exploration-exploitation problem, a key idea in organizational learning and animal behavioral science. A common definition describes exploration as the process of randomly searching an unrefined and unexplored area, and exploitation as the process of searching a more refined area for some reward or resource. The less information an animal has about an environment, the more likely it is to be exploring rather than exploiting. Researchers argue that optimal search strategies for animals include alternating patterns of quick movement and searching. In the process of exploring and exploiting an environment, animals exhibit "egocentric" and "geocentric" navigational strategies. In an egocentric strategy, an organism uses previously acquired information, either from memory or another internal mechanism, to move along a path. In a geocentric strategy, an organism uses cues and real-time feedback from the environment to continuously reorient itself along a path. Animals rely heavily on geocentric strategies to explore and search unfamiliar areas; it is too difficult to primarily rely on internal mechanisms when moving through new territory. The term thigmotaxis describes the behavior of either moving toward or away from touch stimulus; an example of this behavior is "wall-hugging" in animals, which is the tendency to avoid open areas and stick to the perimeter of an environment during navigation. Wall-hugging behavior in animals is commonly tested in laboratory maze environments, where high levels of wall-hugging can suggest anxiety or fear in an animal.
Therefore, wall-hugging is commonly seen as a strategy that animals invoke when exploring a new and unfamiliar environment. For example, mice spend, on average, 74% of their time hugging a wall in their first five minutes exploring an unfamiliar environment, and about 65% of their time doing so by thirty minutes. We created a sensory augmentation device, the Whisker Beard, that provides a physical simulacrum of what it is like to have whiskers. We wanted our whiskers to be functionally and aesthetically reminiscent of cat whiskers, including the appearance and placement of the whiskers and the physical properties of the materials. In addition, we wanted the functionality to be authentic in the sense that a person wearing it would use the whiskers to enact behaviors similar to animals'. Whiskers, formally known as vibrissae, are rigid but flexible hairs that provide tactile sensory feedback in many mammals and rodents. Ahl describes a number of characteristics and roles for whiskers; whiskers differ from hair in that their follicles are much thicker than normal hair follicles, and in that whiskers are sensitive and send signals to the brain. Whiskers are commonly localized to facial regions of animals, but can also be found on other parts of the bodies of animals. They are essential to survival, monitoring, communication, and aggression. They help serve as a sensory substitute for animals with poor close-sightedness; cats have difficulty seeing objects very close to their faces, so whiskers help them sense objects close to them, as well as protect their face from harmful objects. A project close to our vision is a helmet that uses infrared sensors to detect close-by objects and provides vibrotactile feedback to the wearer; like our project, its design was inspired by mammals' abilities to use whiskers to detect their surroundings. Our project and theirs are similar in that they are whisker-inspired designs for sensory augmenting wearable devices; however, our goals differ in that we are not trying to offer an efficient solution for humans to navigate in low-light conditions, but rather to provide perspective-taking experiences for wearers to experience what it is like to have and use whiskers. The version of the Whisker Beard used in this study has a total of four whiskers, two for each side of the face. In order to mimic the genal (cheek) whiskers of the cat, our whiskers attach to the sides of two acrylic cheek plates that rest 2-3 cm above the skin of the wearer, therefore the whiskers protrude from the sides of the wearer's cheeks close to the mouth (see Figure 1). Each whisker's total length (38.5 cm) is approximately the average human's shoulder width (39 cm). We chose this length in order to make the whiskers extend beyond the shoulders of an average sized person, thereby imitating the appearance and functional reach of cats' whiskers. Figure 1 shows a researcher wearing the Whisker Beard. The whiskers are composed of three flexible components: a flex sensor that detects the deflection that the whiskers are receiving, polystyrene strips that extend the usable length of the flex sensor significantly and return the flex sensor to a neutral unbent position, and Sugru, which is a moldable silicone glue that holds the polystyrene strip and flex sensor together. In addition, a small connector at the end of the whisker attaches to the cheek (see Figure 2). Our device detects when the wearer brushes their whiskers against a surface.
When the whiskers are bent, the change in resistance is measured and converted to a voltage. The calculated voltage measurement is then used to set the pulse-width-modulation signal to the vibration motor controller. The device provides vibrotactile feedback to the wearer by varying the intensity of vibration proportional to the bend angle through an array of vibration motors on the scalp, inspired by prior work. Each motor is coupled to one whisker. We chose to place the vibration motors on the scalp so the vibrotactile sensation would be felt on the wearer's head, and so the motors could be placed far enough apart to allow for better point discrimination (making it easier for the wearer to identify which specific motors are vibrating). The whiskers sense bidirectionally in order to better mimic a cat's actual sensory perceptions when approaching and backing out of different confined locations. The current hardware design allows for bidirectional sensing, but the software does not explicitly indicate the direction of the whiskers' bends through haptic feedback; however, the participant can feel the direction of tension on the wearable. In order to study how the Whisker Beard can support animal-like environmental exploration, we built a human-sized cardboard maze (4m x 6m) for our participants to explore while wearing the whiskers wearable. Mazes are a common tool in behavioral research involving rats and smaller mammals. Our hypothesis was that participants would be able to use the whiskers as a workable form of sensing during maze navigation. We wanted participants to rely on the whiskers as heavily as possible and on other senses as little as possible. Therefore, we chose to blindfold the participants, since vision is the dominant sense for non-visually-impaired humans for gathering environmental information. We designed the maze to promote enough confusion for participants so they would need to use feedback from the whiskers to guide them. We did so in order to observe a more honest range of animal behavior. Before creating the human-size maze, we created a series of small-scale prototypes. We used these to consider different possible maze routes as well as what kinds of obstacles would promote explorations of the whiskers' affordances. We prototyped obstacles that would encourage participants to enact cat-like behaviors such as rubbing one's face on a surface, and having to back out of narrow spaces. We came to the conclusion that experimenting with unfamiliar topography was the best way to create an experience that removes participants from a human sensory experience within an exploration task. Therefore, the final maze design included multiple dead ends, corridors of different widths, vertically hinged flaps, and cutouts in the walls. In addition, due to the bidirectionality of the whiskers, we designed our obstacles such that wearers would mostly need to rely on the horizontal changes in the whiskers' shapes. Figure 3 shows a photo of the finished cardboard maze, and Figure 4 shows a digital rendering of an aerial view of the maze. We placed 10 toy mice throughout the maze to induce exploration and exploitation behavior. This gave participants a survival-oriented goal (collecting as many resources as possible from an unfamiliar environment in a limited time frame) to strengthen the worldview of being an animal.
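For readers who want a concrete picture of the bend-to-vibration mapping described above, the following is a minimal sketch of the per-whisker control loop in Python. The ADC range, calibration constants, and the set_pwm helper are our own illustrative assumptions, not the authors' actual firmware.

FLEX_REST = 512      # assumed ADC reading of an unbent whisker (10-bit ADC)
FLEX_MAX_BEND = 300  # assumed maximum deviation from rest at full bend
PWM_MAX = 255        # 8-bit PWM duty cycle

def bend_to_duty(adc_reading):
    """Map a flex-sensor ADC reading to a vibration-motor PWM duty cycle.

    The mapping is bidirectional: bending the whisker either way moves the
    reading away from FLEX_REST, and vibration intensity grows with the
    magnitude of the bend, matching the proportional feedback described above.
    """
    deviation = min(abs(adc_reading - FLEX_REST), FLEX_MAX_BEND)
    return round(PWM_MAX * deviation / FLEX_MAX_BEND)

def update_motors(adc_readings, set_pwm):
    # One whisker drives one scalp motor; a four-whisker rig is four such loops.
    for motor_id, reading in enumerate(adc_readings):
        set_pwm(motor_id, bend_to_duty(reading))

A linear map from bend magnitude to duty cycle is the simplest scheme consistent with the description that vibration intensity varies in proportion to the bend angle; a real controller would also calibrate FLEX_REST per whisker.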
We investigated how participants used the Whisker Beard to navigate the maze, and in our analysis we focus on the ways participants did or did not exhibit animal-like behaviors during the experimental activity, and how they reacted to the experience of cat-like sensory augmentation. To do so, we evaluate how participants behaved like cats, as well as what participants' impressions were about the activity. We draw on work from animal and biological sciences, particularly key areas of research related to navigation and animal behavior. In addition, we highlight moments from participant discussion that illustrate ideas and questions they had after the experience. We recruited six undergraduate college students who lived on campus in the dormitory that we constructed the maze in. Due to the circumstances of living in a dorm room, none of the participants currently owned a pet; however, three of the participants had previously owned a cat or still had a cat at home, and the participants who had not owned a cat had owned a dog and also knew family and friends who have had cats. All six participants completed the maze activity on the same day in successive 20-minute sessions. As participants arrived for their sessions, we greeted them individually in a separate room. There, we assisted each participant in attaching the Whisker Beard to their face. We spent a few minutes explaining how the device works and gave participants time to learn the correlation between whisker bend and vibration location and intensity. We did this by individually flexing each whisker while the participant was wearing it (prior to blindfolding them) and asking them to describe whether they could feel the changes in the vibrotactile feedback. Once participants were familiar with the wearable, we explained the maze procedure to participants, and brought them, blindfolded, into the maze room. We replaced all 10 toy mice in the same locations before the start of each participant's session (see Figure 4). We gave each participant a maximum of 10 minutes to crawl through the maze and explore, and to collect as many toy mice as they could. We asked participants to think aloud during the maze so we could have a better sense of their thought process throughout the activity, both to validate our coding of the data and to collect information about participants' impressions of the activity. After each participant completed the maze activity, we asked them to refrain from discussing the experience with other participants until everyone was finished. We had them each fill out a worksheet to provide responses to questions like "How did it feel to navigate using whiskers?" and "What moments stood out to you when using your whiskers to navigate the maze?" We selected open-ended questions such as these to gather information about participants' experiences on an individual level and to get their initial impressions right after the activity. After all participants were finished with the activity, we reconvened as a full group for a facilitated discussion. We began the discussion with three prompts on the board: "How did you feel when you first put the whiskers on?" "What particular moments in the maze stood out to you?" and "Other thoughts?" We asked participants to write their thoughts about these prompts on sticky-notes and place them on the board. Once participants completed this, we used the responses to facilitate discussion among the whole group.
In order to investigate how the wearable and maze activity mediated participants' experiences in an animal-like way (RQ1), we analyzed participants' interactions, behaviors, and commentary throughout the activity through coding of the data and analysis of their movement. We assessed inter-rater reliability by having each coder annotate a map of the maze using the process shown in Figure 7. We compared each annotation on both maps. If the coders identified an action as a different event type as shown in Figure 6, or if one coder identified an event at a particular time and the other did not identify an event, these were marked as disagreements. We then added up all the agreements and disagreements to calculate Cohen's Kappa. (Figure 5: a visualization of participants' usage of their whiskers and hands over time; note that Jodie's time is much shorter than the other participants' because she moved through the maze very quickly and returned to the starting point after about four minutes.) For example, to assess agreement on where "whisker interactions" occurred, we looked at where the raters identified whisker interactions on their maps, and counted all matching identifications as agreements and all discrepancies as disagreements. We did this for all symbols in the key and totaled the results. After reaching an inter-rater reliability of κ=0.62, which indicates substantial agreement, the two researchers coded the rest of the data in parallel. To analyze the paths of participants and their interactions, we captured audio and video recordings of participants' movements and interactions in the maze from five cameras: we placed four cameras in different positions along the maze's perimeter, and a researcher held one camera and followed the participant as they moved through the maze. We analyzed the videos to produce content logs and time stamps of different events. In addition, we used the videos to manually generate aerial maps that depict participants' movements and interactions through the maze. Figures 6 and 7 illustrate the orientation, position, and scale of the participant, as well as depict participants' interactions with their whiskers and hands at a given time. In addition, we denoted four subsections of the maze that we thought represented the different types of obstacles and areas (see Figure 4). The red zone is where the maze starts and contains two straight hallways. The yellow zone contains a large area of open space, as well as a wide opening that narrows to a dead end. The blue zone contains a straight hallway that opens into three smaller corridors. The green zone is a long hallway with cardboard flaps hanging on either side that participants had to push through. In Shapiro and Hall's museum mapping work, they created aerial maps to illustrate museum visitor movement and engagement to understand how the space created by the gallery facilitates learning. We apply their mapping methods in our study in order to identify how participants interacted with the space with the Whisker Beard and how their interactions and movement compared to animal-like behavior (see Figure 7 for examples of the paths we generated for analysis, and refer to Figure 6 for our key). (Figure 7: the map on the right contains color so that we could draw overlapping paths and keep track of the order in which they occurred, such as when participants doubled back to specific areas.)
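To make the agreement computation above concrete, here is a small Python sketch of Cohen's Kappa over two coders' annotations, assuming each coder's map is flattened into a per-location sequence of event labels; the label scheme and example data are our own placeholders, not the study's actual coding output.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    pa, pb = Counter(coder_a), Counter(coder_b)
    expected = sum(pa[label] * pb[label] for label in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["whisker", "whisker", "hand", None, "whisker", "hand", None, "whisker"]
b = ["whisker", "hand", "hand", None, "whisker", "hand", None, None]
print(round(cohens_kappa(a, b), 2))  # prints 0.64; values above 0.6 are conventionally read as substantial agreement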
In our analysis, we define moments of primarily whisker interaction as exploration, and moments of hand-swiping interactions as exploitation. We encouraged participants to rely solely on their whiskers to navigate the maze, and to only use their hands when they needed to collect mice. Because of this, we find the distinction between whisker use and hand use to be a reasonable indicator of exploration-exploitation behaviors. Finally, we coded moments of geocentric and egocentric strategies using the perspectives defined earlier, and coded wall-hugging to be moments where a participant makes continued contact with their body and/or whiskers against the wall. We separate our results into four categories: participants' patterns of exploration and exploitation and how they related to whisker use; geocentric and egocentric strategies; wall-hugging behaviors; and reflections on the experience. The first three are aimed at providing information about the authenticity of the animal-like behaviors that participants exhibited (RQ1), while the fourth addresses what impressions people had about the wearable and the experience (RQ2). Participants alternated between periods of long exploration and relatively shorter exploitation. In Figure 5 we illustrate the exploration-exploitation search behaviors of each participant (listed with pseudonyms). Blue segments denote periods of navigation where participants were primarily relying on their whiskers to move around. Red segments denote moments where participants moved their arms across the floor in order to search for and obtain mice. Yellow segments are places where we coded participants as using both exploration and exploitation, which was rare because crawling and swiping at the same time is challenging. In Table 1 we show the breakdown of the amount of time participants spent exploring and exploiting the maze. Participants spent an average of 70.3% of their time exploring and 35.8% of their time exploiting. Participants switched back and forth between exploration and exploitation, with an average exploration interval length of 15.2 seconds and an average exploitation interval length of 7.4 seconds. On average, participants caught seven out of the ten mice. During periods of whisker exploration, participants enacted three common whisker techniques that primarily occurred in the long, straight parts of the maze (red zone in Figure 4). In Figure 8 we illustrate what these three techniques look like. In one technique (A) the wearer constantly drags the whiskers against the wall as they move. In the second (B) the wearer alternates between motion and pausing to brush the whiskers against the wall, and in the third (C) the wearer moves between walls, alternating between the sides of the whiskers they use. All six participants enacted at least one form of A and B during the maze, and two participants utilized the C technique as well. The two participants that enacted all three techniques both have pet cats at home. Participants used a variety of geocentric and egocentric search strategies as they navigated the maze. Table 2 shows the results of this categorization and demonstrates the different geocentric and egocentric strategies that participants enacted, with examples and quotes to illustrate how they used particular environmental feedback. We found that all six participants relied on enacting geocentric strategies throughout the entire activity, and used egocentric strategies more sparingly.
During the think-aloud all six participants made comments about using touch and sound feedback from the whiskers, as well as tactile feedback from their hands and body. Only two participants made comments about attempting to use egocentric strategies, with one participant commenting that he was able to create an internal map during the maze (which he later admitted was incorrect after seeing the maze).
Table 2. A breakdown of geocentric and egocentric navigational strategies that participants used.
Geocentric - sound of the whiskers touching the walls: "The navigation is also noise of the whiskers touching things."
Egocentric - memory of the space from an earlier point in time: "I know I'm in the center of the room, because I've been in this room before with this outlet on the floor."
Egocentric - memory of revisiting the same part of the maze: "I think I'm going in circles."
Egocentric - internal recall of physical orientation: "[I was] thinking about my previous and next moves, as well as an internal map."
All six participants exhibited wall-hugging behavior (thigmotaxis) as they moved through the maze blindfolded. Five of the six participants spent over half of their time in the maze hugging the wall, with one participant hugging the wall for 97.3% of her maze time. On average, participants spent 68.6% of their time in the maze hugging the wall (min=34.6, max=97.3, standard deviation=20.87). Participants' tendencies to hug the wall varied, but interestingly, the participant who hugged the wall the least had a higher tendency than others to bump into obstacles head first (14 times). Participants frequently wall-hugged while exploring using whisker technique A from Figure 8. Participants reflected on their experiences in the whisker activity through individual worksheet responses and a full group discussion. During their reflections, participants commented on the physical experience of the whisker activity, and drew connections to phenomena they have experienced with their own pet cats or others' cats. In response to the question, "How did it feel to wear the whiskers?" participants described both the physical sensation of how it felt to have whiskers, and the functional use of the whiskers and their ability to adapt to it. Responses that described the feeling of adapting to the whiskers include: • "As I got used to them, the whiskers started to become a part of me." • "I liked having another sense. They got much easier to use as I played around with angles and pressure on the sensed surfaces." • "It was an easy way to 'see' side to side. Times in empty space though stood out, with nothing to feel in front of me made me more cautious." • "The first time I went through a small space and both whiskers activated [while] having to back out... stood out to me." • "It was a bit odd at first but I quickly got used to them. It was kinda nice having an additional aid apart from the feeling in my hands and feet." Responses that described the more physical feeling of having vibrating whiskers include: • "I had to push through the flaps and it was almost overwhelming with vibrations." • "It was an interesting feeling having the vibration kind of tickle. When I was really close to something it was also kind of shocking and made me want to back away." Most of the above responses to the question of how it "felt" focused on the overall experience of using the whiskers to navigate, as opposed to the specific physical feeling of the vibrations on the skin.
In addition, we were curious whether participants would think about their own pet cats, or other cats that they know. We asked, "Did you think about your own pet cat while you were in the maze? If so, what did you think about?" and participants responded in ways that included perspective-taking commentary that was empathetic. Their responses included: • "I thought about my friend's cat and how when she was five, she cut off the cat's whiskers thinking they were long hairs. For a good week the cat had to stumble around, falling over, and running into things often." • "How cats sometimes bump their heads into things then back-up confused. I could sympathize." • "Yes I did, I thought about how the vibration was a little like how they would use their whiskers. I also thought about how hard it would be to navigate without her whiskers" Most of the empathetic responses show participants acknowledging that it would be difficult to navigate as a cat stripped of its whiskers, similarly to how it was difficult for them to navigate without their sight. Participants described moments of feeling disoriented when they entered the large open area of the maze (yellow zone in Figure 4), and during moments of technical difficulties when a whisker fell out. According to participants, the open areas were disorienting because they lost their sense of physicality and location, suggesting some understanding of, and potential for empathy with, animals' thigmotactic strategies. The results of the Whisker Beard and maze activity show examples of participants exhibiting behaviors and strategies similar to those that animals perform when searching for resources in unfamiliar environments. We separate our results into discussions about their physical behaviors and strategies, as well as their impressions of the experience. As depicted in Figure 5 and Table 1, as participants explored the maze, they alternated between periods of explorative and exploitative behavior as they switched between using their whiskers and using their hands. Participants spent, on average, a longer amount of time exploring and moving through the maze than they spent hand-swiping to look for mice. These results are in line with animal foraging behaviors. Benichou et al. say that animals searching for resources switch between periods of motion and periods of searching. In addition, their work shows that intervals of exploration tend to be longer than intervals of exploitation. This aligns with the amount of time our participants dedicated to these behaviors. While we cannot claim that participants would not have enacted similar exploration-exploitation behaviors without whiskers, we can say that the behaviors that they enacted with whiskers were in line with foraging behaviors. Interestingly, several of the participants made use of the whiskers in ways that strikingly resembled cat behavior, as depicted in Figure 8. As participants moved down long passages, some used their whiskers to gauge the width of the passage by moving back and forth, brushing each side of their whiskers on the opposing walls. This demonstrates that participants used the whiskers to enhance their spatial awareness, one of the supposed evolutionary factors behind the presence of whiskers. We noticed that when participants used techniques B and C, they mimicked the behavior of cats who rub their olfactory face glands on objects to mark their scent, as well as to get a sense of the physical properties of a specific object.
While this behavior in cats is not necessarily used for navigation purposes, it is used for gauging the size and shape of an object. Participants did this in order to look for hidden passageways and moveable obstacles. Our observations of participants' geocentric and egocentric behaviors provided us with a fuller picture of how participants used the whiskers in tandem with other strategies during the activity. Participants relied on the vibrotactile feedback from the Whisker Beard in determining their path of movement through the maze. In addition to the vibrotactile feedback, we found that participants also relied on the sounds the whiskers made as they brushed against the maze's cardboard surfaces. We validated this observation through think-aloud commentary that participants provided throughout the maze, and through post-maze group discussion. The fact that participants relied on additional tactics beyond the vibrations is not an inauthentic outcome, but rather a reasonable one. Participants' use of different egocentric and geocentric tactics is naturally aligned with how animals navigate the world: getting the most information from their environment by whatever means are accessible to them. The blindfolded maze procedure afforded participants the ability to experience the Whisker Beard in an unfamiliar environment. As expected, due to the unfamiliarity of the environment, participants relied on more geocentric strategies. These results are in line with animal navigation research, which suggests that egocentric strategies are too difficult to use when exploring new terrain, and that animals therefore rely more heavily on geocentric strategies to gather real-time physical feedback. In time, participants who revisited areas of the maze began to recognize their surroundings, which led them to use internal recall from their memory to identify their approximate position; however, because they were blindfolded they still had to rely on geocentric strategies as well. Unsurprisingly, participants told us that being blindfolded and losing their sense of sight was disorienting for them; sight is one of humans', and cats', dominant senses for obtaining information about an environment. Participants described the open-space areas of the maze as "disorienting" and tended to try to find a wall as quickly as they could to reorient themselves. The level of consistent wall-hugging that participants exhibited is in line with experiments where increased levels of anxiety correlated to higher levels of thigmotaxis. Usually, animals' tendency to hug the wall would decrease as an experiment went on, except in circumstances where animals do not have enough time to fully process their environment. In our experiment, blindfolding the participants made it challenging for them to produce an accurate internal map of the space, leading them to continuously avoid open areas and rely on vibrotactile and audio feedback from the walls during navigation. The participants' reflections during the maze and post-maze show promising beginnings to meaningful discussions of animal empathy, as many drew connections to prior experiences of pets who were blind, deaf, or had their whiskers cut off, and discussed how disorienting and difficult it would be for them to navigate with a sense removed. Participants described the whiskers as feeling like an extra sense, one that they were able to adapt to even in a short timeframe.
Although losing their sight was disorienting, they were able to utilize the whiskers as a substitute for being able to "see." The combination of the Whisker Beard and maze activity suggests that, by being disoriented and having to behave like a cat, participants were able to consider what it would be like to be a cat relying on its whiskers every day, and how challenging it would be for a cat who has no whiskers at all. The most severe technology design issue we encountered during the study was whisker placement; each participant noted that the lack of a front-facing whisker on the forehead made it difficult to avoid obstacles directly in front of the wearer. Cats have a set of superciliary or suborbital whiskers on their brow for this very purpose, and the lack of these front-facing whiskers on our wearable was frustrating for participants. We did not include them in the original design because we wanted to focus on the whiskers on the sides of the face, not realizing how important it would be to include them in other areas on the head. Although participants were able to make use of the whiskers during this study, we believe that a longer study would give participants a more adequate amount of time to adapt to the whiskers and therefore exhibit more natural behaviors, such as relying more on the whiskers than their hands during resource collection. It might also allow a different set of behaviors to emerge, as wearers become more comfortable in the environment and reduce thigmotactic behavior. We continue to investigate how the whisker wearable, and technologies like it, can remediate human experience in order to support deeper intersubjectivity with animals, and how such remediations can offer an experiential framework for science and science education. To address the aforementioned technological limitations and move towards a more authentic design, we are working on a more modular and customizable design. The next generation of the wearable will include additional sensing capabilities that support cat senses beyond whiskers, like hearing. A new custom board, Dr. Bones (Figure 10), will serve as a connection hub and accommodate the micro:bit as the primary controller for all input and output modules. Incorporating the micro:bit into our design will enable participants to program their individual modules using the Makecode programming environment. In exposing the individual sensing and output elements of this project we aim to encourage young people to create their own sensory augmentation systems. We conjecture that this customizability will offer a variety of ways for people to engage with scientific ideas relating to animals. In addition to offering customizability for sensors, we will be offering people the ability to customize the placement of sensors and attachments on the wearable. For example, participants who wished they had a front-facing whisker during the activity will be able to create one. This will be more authentic to the way cats' whiskers work, because cats and other mammals can actively control the direction of their whiskers, as well as other parts of their bodies, like their ears. Further, we are addressing the lack of vibrotactile directional feedback by adding three motors per whisker to our design; therefore, through motor sequencing and drive intensity, we can render the direction and angle of the bend as human sensation.
One intriguing aspect of our work in adding whiskers to humans was the ability to create new types of empathetic experiences. While our initial work has focused on developing new technology and the analytical infrastructure needed to test it, we see several opportunities for more deeply exploring the experiential aspects of this technology. First, we may consider how to explore the transhumanist aspects of gaining a new sensory capability. Can people find uses for wearable whiskers in their daily activities or as part of their everyday lived experience? By creating a more portable, lightweight version of the hardware, we may explore opportunities to send this device into the world to see what people make of it. Second, we may consider how wearable whiskers can increase understanding of, and empathy for, the experiences of nonhuman animals. For example, by leading a person through an experience similar to the everyday behavior of a feral cat (sneaking through backyards, chasing birds, searching for edible items), can wearers of the whiskers better comprehend and empathize with the experiences of feral cats? Finally, we may explore how experiences with wearable whiskers could increase an individual's understanding of, and relationship with, their own pets or other familiar animals. For example, children often must be taught what kinds of touch are liked and disliked by their pets; until they learn this, they may be frustrated by their pets' apparent distrust or fear when they are nearby. We may explore how to design experiences that can help a human understand a particular aspect of their pets' lived experience as a way of supporting a more respectful relationship between species. Wearable technologies and embodied learning experiences free humans from the confines of their biological limitations. This enables researchers to provide low-cost opportunities that offer firsthand perspective-taking experiences for people, allowing people to experiment with new sensory interactions, including ones that non-human animals have access to. We presented the design of the Whisker Beard, which does just that: provides humans with the opportunity to experience what it would be like to have a new sense, in this case, what it would be like to have whiskers. We introduced concepts from animal behavioral science research and described how we applied them to evaluating the experiences of participants while immersed in an animal perspective-taking activity. Our observations of participants' enactment of animal-like behaviors, as well as their impressions about the experience, suggest that they were immersed in the sensory experience of being a cat with whiskers. We are actively iterating on the designs of our hardware to offer more customizability. This will enable participants to design their own sensory augmenting technologies where they can explore their own curiosities about their pets' other senses. In near-future experiments we will iterate on the design of the wearable activity to offer a more immersive experience where participants can continue to enact animal-like behaviors. Our next steps will then be to investigate how participants' developing increased awareness of animals' sensory experiences can support their enactment of empathetically-oriented design activities focused on improving animals' quality of life.
This paper explores using wearable sensory augmenting technology to facilitate first-hand perspective-taking of what it is like to have cat-like whiskers.
1,459
scitldr
Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data. Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation for the intriguing phenomena observed in Zhang et al. (2017a). We also study the setting where the total loss is the sum of a bounded loss and an additional ℓ2 regularization term. We obtain new generalization bounds for the continuous Langevin dynamics in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity. Non-convex stochastic optimization is the major workhorse of modern machine learning. For instance, the standard supervised learning on a model class parametrized by R^d can be formulated as the following optimization problem:

min_{w ∈ R^d} E_{z∼D}[F(w, z)],

where w denotes the model parameter, D is an unknown data distribution over the instance space Z, and F: R^d × Z → R is a given objective function which may be non-convex. A learning algorithm takes as input a sequence S = (z_1, z_2, ..., z_n) of n data points sampled i.i.d. from D, and outputs a (possibly randomized) parameter configuration ŵ ∈ R^d. A fundamental problem in learning theory is to understand the generalization performance of learning algorithms: is the algorithm guaranteed to output a model that generalizes well to the data distribution D? Specifically, we aim to prove upper bounds on the generalization error err_gen(S) = L(ŵ, D) − L(ŵ, S), where L(ŵ, D) = E_{z∼D}[L(ŵ, z)] and L(ŵ, S) = (1/n) Σ_{i=1}^n L(ŵ, z_i) are the population and empirical losses, respectively. We note that the loss function L (e.g., the 0/1 loss) could be different from the objective function F (e.g., the cross-entropy loss) used in the training process (which serves as a surrogate for the loss L). Classical learning theory relates the generalization error to various complexity measures (e.g., the VC-dimension and Rademacher complexity) of the model class. Directly applying these classical complexity measures, however, often fails to explain the recent success of over-parametrized neural networks, where the model complexity significantly exceeds the amount of available training data (see e.g., Zhang et al. (2017a)). By incorporating certain data-dependent quantities such as margin and compressibility into the classical framework, some recent work obtains more meaningful generalization bounds in the deep learning context.
An alternative approach to generalization is to prove algorithm-dependent bounds. One celebrated example along this line is the algorithmic stability framework initiated by Bousquet and Elisseeff (2002). Roughly speaking, the generalization error can be bounded by the stability of the algorithm (see Section 2 for the details). Using this framework, Hardt et al. (2016) study the stability (hence the generalization) of stochastic gradient descent (SGD) for both convex and non-convex functions. Their work motivates recent study of the generalization performance of several other gradient-based optimization methods. In this paper, we study the algorithmic stability and generalization performance of various iterative gradient-based methods, with certain continuous noise injected in each iteration, in a non-convex setting. As a concrete example, we consider stochastic gradient Langevin dynamics (SGLD) (see, e.g., Welling and Teh (2011)). Viewed as a variant of SGD, SGLD adds an isotropic Gaussian noise at every update step:

W_t = W_{t−1} − η_t g_t(W_{t−1}) + √(2η_t/β) ζ_t, ζ_t ∼ N(0, I_d),  (1)

where g_t(W_{t−1}) denotes either the full gradient or the gradient over a mini-batch sampled from the training dataset. We also study a continuous version of (1), the dynamics defined by the following stochastic differential equation (SDE):

dW_t = −∇_w F(W_t, S) dt + √(2/β) dB_t,  (2)

where B_t is the standard Brownian motion. Most related to our work is the study of algorithm-dependent generalization bounds of stochastic gradient methods. Hardt et al. (2016) first study the generalization performance of SGD via algorithmic stability. They prove a generalization bound that scales linearly with T, the number of iterations, when the loss function is convex, but their results for general non-convex optimization are more restricted. Earlier work also presents a generalization bound that combines PAC-Bayesian analysis with stability; however, their prior and posterior are probability distributions on the hyperparameter space, while ours are distributions on the hypothesis space. Our work is a follow-up of the recent work of Mou et al. (2018), in which they provide generalization bounds for SGLD from both stability and PAC-Bayesian perspectives. Another closely related work, by Pensia et al. (2018), derives similar bounds for noisy stochastic gradient methods, based on the information-theoretic framework of Xu and Raginsky (2017). However, their bounds scale as O(T/n) (n is the size of the training dataset) and are sub-optimal even for SGLD. We acknowledge that besides the algorithm-dependent approach that we follow, recent advances in learning theory aim to explain the generalization performance of neural networks from many other perspectives. Some of the most prominent ideas include bounding the network capacity by the norms of weight matrices. Most of these results are stated in the context of neural networks (some are tailored to networks with specific architecture), whereas our work addresses generalization in non-convex stochastic optimization in general. We also note that some recent work provides explanations for the phenomenon reported in Zhang et al. (2017a) from a variety of different perspectives (e.g., Arora et al. (2018; 2019)). Welling and Teh (2011) first consider stochastic gradient Langevin dynamics (SGLD) as a sampling algorithm in the Bayesian inference context. Raginsky et al. (2017) give a non-asymptotic analysis and establish the finite-time convergence guarantee of SGLD to an approximate global minimum. Zhang et al. (2017b) analyze the hitting time of SGLD and prove that SGLD converges to an approximate local minimum. These results are further improved and generalized to a family of Langevin-dynamics-based algorithms in subsequent work.
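To make the SGLD update (1) concrete, the sketch below runs the noisy mini-batch update on a toy least-squares objective; the objective, dimensions, and hyperparameter values are our own illustrative assumptions rather than any experimental setup from this paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 10
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

def minibatch_grad(w, idx):
    # Gradient of F(w, z_i) = (x_i . w - y_i)^2 / 2, averaged over the mini-batch.
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def sgld(T=1000, batch=32, eta=1e-2, beta=100.0):
    w = np.zeros(d)
    for _ in range(T):
        idx = rng.choice(n, size=batch, replace=False)
        noise = np.sqrt(2 * eta / beta) * rng.normal(size=d)
        w = w - eta * minibatch_grad(w, idx) + noise  # W_t = W_{t-1} - eta*g_t + Gaussian noise
    return w

w_hat = sgld()

Taking beta very large recovers plain mini-batch SGD in the limit, while smaller beta injects more exploration noise; this is the noise level that the generalization bounds discussed below depend on.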
In this paper, we provide generalization guarantees for the noisy variants of several popular stochastic gradient methods. The Bayes-Stability method and data-dependent generalization bounds. We develop a new method for proving generalization bounds, termed as Bayes-Stability, by incorporating ideas from the PAC-Bayesian theory into the stability framework. In particular, assuming the loss takes value in [0, C], our method shows that the generalization error is bounded by both 2C E_z[√(2 KL(P, Q_z))] and 2C E_z[√(2 KL(Q_z, P))], where P is a prior distribution independent of the training set S, and Q_z is the expected posterior distribution conditioned on z_n = z (i.e., the last training data point is z). The formal definition and the result can be found in Definition 5 and Theorem 7. Inspired by prior work, instead of using a fixed prior distribution, we bound the KL-divergence from the posterior to a distribution-dependent prior. This enables us to derive the following generalization error bound, which depends on the expected norm of the gradient along the optimization path:

err_gen ≤ O( (C/n) √( β Σ_{t=1}^T η_t g_e(t) ) ).

Here S is the dataset and g_e(t) is the expected empirical squared gradient norm at step t; see Theorem 11 for the details. Compared with the previous bound O( (CL/n) √( β Σ_{t=1}^T η_t ) ) of Mou et al. (2018), where L is the global Lipschitz constant of the loss, our new bound depends on the data distribution and is typically tighter (as the gradient norm is at most L). In modern deep neural networks, the worst-case Lipschitz constant L can be quite large, and typically much larger than the expected empirical gradient norm along the optimization trajectory. Specifically, in the later stage of the training, the expected empirical gradient is small (see Figure 1(d) for the details). Hence, our generalization bound does not grow much even if we train longer at this stage. Our new bound also offers an explanation to the difference between training on correct and random labels observed by Zhang et al. (2017a). In particular, we show empirically that the sum of expected squared gradient norms (along the optimization path) is significantly higher when the training labels are replaced with random labels (Section 3, Remark 13, Figure 1, Appendix C.2). We would also like to mention the PAC-Bayesian bound (for SGLD with ℓ2-regularization) proposed by Mou et al. (2018). (This bound is different from what we mentioned before; see Theorem 2 in their paper.) Their bound scales as O(1/√n) and the numerator of their bound has a similar sum of gradient norms (with a decaying weight if the regularization coefficient λ > 0). Their bound is based on the PAC-Bayesian approach and holds with high probability, while our bound only holds in expectation. Extensions. We remark that our technique allows for an arguably simpler proof of Theorem 1 in Mou et al. (2018); the original proof is based on SDEs and the Fokker-Planck equation. More importantly, our technique can be easily extended to handle mini-batches and a variety of general settings as follows. 1. Extension to other gradient-based methods. Our result naturally extends to other noisy stochastic gradient methods, including the momentum method (Theorem 26), Nesterov's accelerated gradient method (Theorem 26), and Entropy-SGD proposed by Chaudhari et al. (2017) (Theorem 27). 2. Extension to general noises. The proof of the generalization bound in Mou et al. (2018) relies heavily on the noise being Gaussian 1, which makes it difficult to generalize to other noise distributions such as the Laplace distribution. In contrast, our analysis easily carries over to the class of log-Lipschitz noises (i.e., noises drawn from distributions with Lipschitz log densities). 3. Pathwise stability.
In practice, it is also natural to output a certain function of the entire optimization path, e.g., the one with the smallest empirical risk or a weighted average. We show that the same generalization bound holds for all such variants (Remark 12). We note that the analysis in an independent work also satisfies this property (see Corollary 1 in their work), though their bound scales at a slower O(1/√n) rate (instead of O(1/n)) when dealing with a C-bounded loss. Generalization bounds with ℓ2 regularization via Log-Sobolev inequalities. We also study the setting where the total objective function F is the sum of a C-bounded differentiable objective F_0 and an additional ℓ2 regularization term (λ/2)||w||_2^2. In this case, F can be treated as a perturbation of a quadratic function, and the continuous Langevin dynamics (CLD) is well understood for quadratic functions. We obtain two generalization bounds for CLD, both via the technique of Log-Sobolev inequalities, a powerful tool for proving the convergence rate of CLD. One of our bounds is as follows (Theorem 15):

err_gen ≤ (2 e^{4βC} C L / n) √( (β/λ)(1 − e^{−λT}) ).

The above bound has the following advantages: 1. Applying e^{−x} ≥ 1 − x, one can see that our bound is at most O(√T/n), which matches the previous bound (Proposition 8 in Mou et al. (2018)) 3. 2. As time T grows, the bound is upper bounded by, and approaches, 2e^{4βC} C L n^{−1} √(β/λ) (unlike the previous O(√T/n) bound, which goes to infinity as T → +∞). If the noise level is not so small (i.e., β is not very large), the generalization bound is quite desirable. Our analysis is based on a Log-Sobolev inequality (LSI) for the parameter distribution at time t, whereas most known LSIs only hold for the stationary distribution of the Markov process. We prove the new LSI by exploiting the variational formulation of the entropy formula. Notations. We use D to denote the data distribution. The training dataset S = (z_1, ..., z_n) is a sequence of n independent samples drawn from D. S, S′ ∈ Z^n are called neighboring datasets if and only if they differ at exactly one data point (we can assume without loss of generality that they differ in the last data point, i.e., z_n ≠ z′_n). Let F(w, z) and L(w, z) be the objective and the loss functions, respectively, where w ∈ R^d denotes a model parameter and z ∈ Z is a data point. Define F(w, S) = (1/n) Σ_{i=1}^n F(w, z_i); L(w, S) and L(w, D) are defined similarly. A learning algorithm A takes as input a dataset S, and outputs a parameter w ∈ R^d randomly. Let G be the set of all possible mini-batches. G_n = {B ∈ G : n ∈ B} denotes the collection of mini-batches that contain the n-th data point, while its complement is G̅_n = G \ G_n. Let diam(A) = sup_{x,y∈A} ||x − y||_2 denote the diameter of a set A.
1 In particular, their proof leverages the Fokker-Planck equation, which describes the time evolution of the density function associated with the Langevin dynamics and can only handle Gaussian noise.
2 They assume the loss is sub-Gaussian. By Hoeffding's lemma, C-bounded random variables are sub-Gaussian with parameter C.
3 The proof of their O(√T/n) bound can be easily extended to our setting with ℓ2 regularization.
Definition 2 (Expected generalization error). The expected generalization error of a learning algorithm A is defined as

err_gen := E_{S,A}[ L(A(S), D) − L(A(S), S) ].

Algorithmic Stability. Intuitively, a learning algorithm that is stable (i.e., a small perturbation of the training data does not affect its output too much) can generalize well. In the seminal work of Bousquet and Elisseeff (2002), the authors formally defined algorithmic stability and established a close connection between the stability of a learning algorithm and its generalization performance. Definition 3 (Uniform stability).
A randomized algorithm A is n -uniformly stable w.r.t. loss L, if for all neighboring sets S, S ∈ Z n, it holds that where w S and w S denote the outputs of A on S and S respectively. Lemma 4 (Generalization in expectation). Suppose a randomized algorithm A is n -uniformly stable. Then, |err gen | ≤ n. In this section, we incorporate ideas from the PAC-Bayesian theory (see e.g.,) into the algorithmic stability framework. Combined with the technical tools introduced in previous sections, the new framework enables us to prove tighter data-dependent generalization bounds. First, we define the posterior of a dataset and the posterior of a single data point. Definition 5 (Single-point posterior). Let Q S be the posterior distribution of the parameter for a given training dataset S = (z 1, . . ., z n). In other words, it is the probability distribution of the output of the learning algorithm on dataset S (e.g., for T iterations of SGLD in, Q S is the pdf of W T ). The single-point posterior Q (i,z) is defined as For convenience, we make the following natural assumption on the learning algorithm: Assumption 6 (Order-independent). For any fixed dataset S = (z 1, . . ., z n) and any permutation p, Q S is the same as Q S p, where S p = (z p1, . . ., z pn). Assumption 6 implies Q (1,z) = · · · = Q (n,z), so we use Q z as a shorthand for Q (i,z) in the following. Note that this assumption can be easily satisfied by letting the learning algorithm randomly permute the training data at the beginning. It is also easy to verify that both SGD and SGLD satisfy the order-independent assumption. Now, we state our new Bayes-stability framework, which holds for any prior distribution P over the parameter space that is independent of the training dataset S. Theorem 7 (Bayes-Stability). Suppose the loss function L(w, z) is C-bounded and the learning algorithm is order-independent (Assumption 6). Then for any prior distribution P not depending on S, the generalization error is bounded by both 2C E z 2KL(P, Q z) and 2C E z 2KL(Q z, P). Remark 8. Our Bayes-Stability framework originates from the algorithmic stability framework, and hence is similar to the notions of uniform stability and leave-one-out error (see). However, there are important differences. Uniform stability is a distribution-independent property, while Bayes-Stability can incorporate the information of the data distribution (through the prior P). Leave-one-out error measures the loss of a learned model on an unseen data point, yet Bayes-Stability focuses on the extent to which a single data point affects the outcome of the learning algorithm (compared to the prior). To establish an intuition, we first apply this framework to obtain an expectation generalization bound for (full) gradient Langevin dynamics (GLD), which is a special case of SGLD in (i.e., GLD uses the full gradient ∇ w F (W t−1, S) as g t (W t−1)). Theorem 9. Suppose that the loss function L is C-bounded. Then we have the following expected generalization bound for T iterations of GLD: where g e (t) ] is the empirical squared gradient norm, and W t is the parameter at step t of GLD. Proof The proof builds upon the following technical lemma, which we prove in Appendix A.2. Lemma 10. Let (W 0, . . ., W T) and (W 0, . . ., W T) be two independent sequences of random variables such that for each t ∈ {0, . . ., T}, W t and W t have the same support. Suppose W 0 and W 0 follow the same distribution. Then, where W ≤t denotes (W 0, . . 
., W t) and W <t denotes W ≤t−1.,0) ], where 0 denotes the zero data point (i.e., f (w, 0) = 0 for any w). Theorem 7 shows that By the convexity of KL-divergence, for a fixed z ∈ Z, we have Let (W t) t≥0 and(W t) t≥0 be the training process of GLD for S = (S, z) and S = (S, 0), respectively. Note that for a fixed w <t, both W t |W <t = w <t and W t |W <t = w <t are Gaussian Applying Lemma 10 and Recall that W t−1 is the parameter at step t − 1 using S = (S, z) as dataset. In this case, we can rewrite z as z n since it is the n-th data point of S. Note that SGLD satisfies the order-independent assumption, we can rewrite z as z i for all i ∈ [n]. Together with,, and using x i, we can prove this theorem. More generally, we give the following bound for SGLD. The proof is similar to that of Theorem 9; the difference is that we need to bound the KL-divergence between two Gaussian mixtures instead of two Gaussians. This proof is more technical and deferred to Appendix A.3. Theorem 11. Suppose that the loss function L is C-bounded and the objective function f is Llipschitz. Assume that the following conditions hold: Then, the following expected generalization error bound holds for T iterations of SGLD: where g e (t) ] is the empirical squared gradient norm, and W t is the parameter at step t of SGLD. Furthermore, based on essentially the same proof, we can obtain the following bound that depends on the population gradient norm: The full proofs of the above are postponed to Appendix A, and we provide some remarks about the new bounds. Remark 12. In fact, our proof establishes that the above upper bound holds for the two sequences W ≤T and W ≤T:. Hence, our bound holds for any sufficiently regular function over the parameter sequences:. In particular, our generalization error bound automatically extends to several variants of SGLD, such as outputting the average of the trajectory, the average of the suffix of certain length, or the exponential moving average. Remark 13. Inspired by Zhang et al. (2017a), we run both GLD (Figure 1) and SGLD (Appendix C.2) to fit both normal data and randomly labelled data (see Appendix C for more experiment details). As shown in Figure 1 and Figure 2 in Appendix C.2, larger random label portion p leads to both much higher generalization error and much larger generalization error bound. Moreover, the shapes of the curves our bounds look quite similar to that of the generalization error curves. In this section, we study the generalization error of Continuous Langevin Dynamics (CLD) with 2 regularization. Throughout this section, we assume that the objective function over training set S is defined as F (w, S) = F 0 (w, S) + λ 2 w 2 2, and moreover, the following assumption holds. Assumption 14. The loss function L and the original objective F 0 are C-bounded. Moreover, F 0 is differentiable and L-lipschitz. The Continuous Langevin Dynamics is defined by the following SDE: where (B t) t≥0 is the standard Brownian motion on R d and the initial distribution µ 0 is the centered Gaussian distribution in R d with covariance We show that the generalization error of CLD is upper bounded by O e 4βC n −1 β/λ, which is independent of the training time T (Theorem 15). Furthermore, as T goes to infinity, we have a tighter generalization error bound O βC 2 n −1 (Theorem 39 in Appendix B). We also study the generalization of Gradient Langevin Dynamics (GLD), which is the discretization of CLD: where ξ k is the standard Gaussian random vector in R d. 
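Several displayed equations in this section were garbled during extraction. The following is a hedged reconstruction from the surrounding definitions (exact constants may differ from the original) of the Bayes-Stability bound of Theorem 7 and of the single-step KL computation that drives Theorem 9:

```latex
% Hedged reconstruction; constants may differ from the original paper.
% Bayes-Stability bound (Theorem 7):
\[
  |\mathrm{err}_{\mathrm{gen}}|
    \;\le\; 2C\,\mathbb{E}_{z}\!\left[\sqrt{2\,\mathrm{KL}(P\,\|\,Q_z)}\right]
  \qquad\text{and}\qquad
  |\mathrm{err}_{\mathrm{gen}}|
    \;\le\; 2C\,\mathbb{E}_{z}\!\left[\sqrt{2\,\mathrm{KL}(Q_z\,\|\,P)}\right].
\]
% Closed-form KL between isotropic Gaussians with a shared covariance:
\[
  \mathrm{KL}\!\left(\mathcal{N}(\mu,\sigma_t^2 I_d)\,\middle\|\,
                     \mathcal{N}(\mu',\sigma_t^2 I_d)\right)
    \;=\; \frac{\|\mu-\mu'\|_2^2}{2\sigma_t^2}.
\]
% For GLD on neighboring datasets (z_n versus the zero point 0), the
% conditional means at step t differ by (\gamma_t/n)\,\nabla_w f(w_{t-1}, z_n),
% so each step contributes
% \gamma_t^2\,\|\nabla_w f(w_{t-1}, z_n)\|_2^2 / (2 n^2 \sigma_t^2)
% to the KL chain of Lemma 10, yielding a bound driven by the g_e(t).
```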
By leveraging a developed in , we show that, as Kη 2 tends to zero, GLD has the same generalization as CLD (see Theorems 15 and 39). We first formally state our first main in this section. dw) has the following expected generalization error bound: In addition, if L is M -smooth and non-negative, by setting λβ > 2, λ > 8M 2 ), GLD (running K iterations with the same µ 0 as CLD) has the expected generalization error bound: where C 1 is a constant that only depends on M, λ, β, b, L and d. The following lemma is crucial for establishing the above generalization bound for CLD. In particular, we need to establish a Log-Sobolev inequality for µ t, the parameter distribution at time t, for every time step t > 0. In contrast, most known LSIs only characterize the stationary distribution of the Markov process. The proof of the lemma can be found in Appendix B. Lemma 16. Under Assumption 14, let µ t be the probability measure of W t in CLD (with dµ 0 = 1 Z e −λβ w 2 2 dw). Let ν be a probability measure that is absolutely continuous with respect to µ t. Suppose dµ t = π t (w) dw and dν = γ(w) dw. Then, it holds that We sketch the proof of Theorem 15, and the complete proof is relegated to Appendix B. Proof Sketch of Theorem 15 Suppose S and S are two neighboring datasets. Let (W t) t≥0 and (W t) t≥0 be the process of CLD running on S and S, respectively. Let γ t and π t be the pdf of W t and W t. Let F S (w) denote F (w, S). We have The high level idea to prove this bound is very similar to that in. We first observe that the (stationary) Gibbs distribution µ has a small generalization error. Then, we bound the distance from µ t to µ. In our setting, we can use the Holley-Stroock perturbation lemma which allows us to bound the Logarithmic Sobolev constant, and we can thus bound the above distance easily. In this paper, we prove new generalization bounds for a variety of noisy gradient-based methods. Our current techniques can only handle continuous noises for which we can bound the KL-divergence. One future direction is to study the discrete noise introduced in SGD (in this case the KL-divergence may not be well defined). For either SGLD or CLD, if the noise level is small (i.e., β is large), it may take a long time for the diffusion process to reach the stable distribution. Hence, another interesting future direction is to consider the local behavior and generalization of the diffusion process in finite time through the techniques developed in the studies of metastability (see e.g., Lemma 17. Under Assumption 6, for any prior distribution P not depending on the dataset S = (z 1, . . ., z n), the generalization error is upper bounded by where L(w) denotes the population loss L(w, D). Let err train = ES Ew∼Q S L(w, S) and err test = ES Ew∼Q S L(w). We can rewrite generalization error as err gen = err test − err train, where and Thus, we have Now we are ready to prove Theorem 7, which we restate in the following. Theorem 7 (Bayes-Stability). Suppose the loss function L(w, z) is C-bounded and the learning algorithm is order-independent (Assumption 6), then for any prior distribution P not depending on S, the generalization error is bounded by both 2C E z 2KL(P, Q z) and 2C E z 2KL(Q z, P). Proof By Lemma 17, The other bound follows from a similar argument. Now we turn to the proof of Theorem 11. The following lemma allows us to reduce the proof of algorithmic stability to the analysis of a single update step. 
Proof By the chain rule of the KL-divergence, The lemma follows from a summation over t = 1,..., T. The following lemma (see e.g., (, Section 9)) gives a closed-form formula for the KLdivergence between two Gaussian distributions. The following lemma (, Theorem 3) helps us to upper bound the KL-divergence. Definition 19. Let P and Q be two probability distributions on R d. The directional triangular discrimination from P to Q is defined as where Lemma 20. For any two probability distributions P and Q on R d, Recall that G is the set of all possible mini-batches. G n = {B ∈ G : n ∈ B} denotes the collection of mini-batches that contain n, while G n = G \ G n. diam(A) = sup x,y∈A x − y denotes the diameter of set A. The following technical lemma upper bounds the KL-divergence between two Gaussian mixtures induced by sampling a mini-batch from neighbouring datasets. Lemma 21. Suppose that batch size b ≤ n/2. {µ B : B ∈ G} and {µ B : B ∈ G} are two collections of points in R d labeled by mini-batches of size b that satisfy the following conditions for some constant β ∈ [0, σ]: B∈G p µ B,σ and P = 1 |G| B∈G p µ B,σ be two mixture distributions over all mini-batches. Then, Proof of Lemma 21 By Lemma 20, KL(P, P) is bounded by The numerator of the above integrand is upper bounded by while the denominator can be lower bounded as follows: which implies, by the convexity of 1/x, that Inequalities and together imply Now we bound the right-hand side of for fixed A and B. By applying a translation and a rotation, we can assume without loss of generality that µ A = 0, and the last d − 2 coordinates of µ B and µ B are all zero. Note that the integral is unchanged when we project the space to the twodimensional subspace corresponding to the first two coordinates. Thus, it suffices to prove a bound for d = 2. We rewrite as Let I be the integral in the right-hand side of. Note that 2, we make two claims which we will prove later:, φ(y, δ) is non-increasing in δ. The above claims imply that: 1. For any r ∈ 0, 2. For any r ∈ Plugging the above into gives We conclude that n 2 σ 2. Finally, we prove the two claims used above:, we have e (b) 2 e −δ(δ+2y) (δ + y) < 0 for y ≥ 0, we conclude that for any y ≥ 1/ √ 2: Recall that SGLD on dataset S is defined as Here γ t is the step size. B t = {i 1, . . ., i b} is a subset of {1, . . ., n} of size b, and S Bt = (z i1, . . ., z i b) is the mini-batch indexed by B t. Recall that F (w, S) denotes. We restate and prove Theorem 11 in the following. Theorem 11. Suppose that the loss function L is C-bounded and the objective function F is Llipschitz. Assume that the following conditions hold: Then, the following expected generalization error bound holds for T iterations of SGLD: where g e (t) = Ew∼W t−1 [] is the empirical squared gradient norm, and W t is the parameter at step t of SGLD. Proof By Theorem 7, we have for any prior distribution P. In particular, we define the prior as P (w) = E S∼D n−1 [P S (w)], where P S (w) = Q (S,0). By the convexity of KL-divergence, Fix a data point z ∈ Z. Let (W t) t≥0 and (W t) t≥0 be the training process of SGLD for S = (S, z) and S = (S, 0), respectively. Fix a time step t and w <t = (w 0, . . ., w t−1). Let P t and P t denote the distribution of W t and W t conditioned on W <t = w <t and W <t = w <t, respectively. By the definition of SGLD, we have, and p µ denotes the Gaussian distribution for B ∈ G n and µ B = µ B for B ∈ G n. 
By applying Lemma 21 with β = γt ∇F (wt−1,z) 2 b and σ = σ t, By Lemma 10, which implies that Together with and, we have Since SGLD is order-independent, we can replace ∇F (w, z n) with ∇F (w, z i) for any i ∈ [n] in the right-hand side of the above bound. Our theorem then follows from the concavity of √ x. Furthermore, if we bound KL(P, Q z) instead of KL(Q z, P) in the above proof, we obtain the following bound that depends on the population squared gradient norm: We can extend the generalization bounds in previous sections, which require the noise to be Gaussian, to other general noises, namely the family of log-lipschitz noises. Definition 22 (Log-Lipschitz Noises). A probability distribution on R d with density p is L-loglipschitz if and only if ∇ ln p(w) ≤ L holds for any w ∈ R d. A random variable ζ is called an L-log-lipschitz noise if and only if it is drawn from an L-log-lipschitz distribution. The analog of SGLD, noisy momentum method (Definition 24), and noisy NAG (Definition 25) can be naturally defined by replacing the Gaussian noise ζ t at each iteration with an independent L-log-lipschitz noise in the definition. The following lemma is an analog of Lemma 21 under L-log-lipschitz noises. Recall that G denotes a collection of mini-batches of size b. Lemma 23 readily implies the analogs of Theorems 11, 26 and 27 under more general noise distributions. Lemma 23. Suppose that batch size b ≤ n/2 and N is an L noise -log-lipschitz distribution on R d. {µ B : B ∈ G} and {µ B : B ∈ G} are two collections of points in R d that satisfy the following conditions for some constant β ∈ 0, 1 Lnoise: For µ ∈ R d, let p µ denote the distribution of ζ +µ when ζ is drawn from N. Let P = 1 |G| B∈G p µ B and P = 1 |G| B∈G p µ B be mixture distributions over all mini-batches. Then, for some constant C 0 that only depends on L noise. Following the same argument as in the proof of Lemma 21, we have where Fixed A ∈ G n and B ∈ G n. Let p noise denote the density of the noise distribution N. Since µ A − µ B ≤ 1 and p noise is L noise -log-lipschitz, we have Similarly, since µ B − µ B ≤ β, we have Then, it follows from βL noise ≤ 1 that Therefore, the integral on the righthand side of can be upper bounded as follows: Plugging the above inequality into and gives and We adopt the formulation of Classical Momentum and Nesterov's Accelerated Gradient (NAG) methods in and consider the noisy versions of them. Definition 24 (Noisy Momentum Method). Noisy Momentum Method on objective function F (w, z) and dataset S is defined as Definition 25 (Noisy Nesterovs Accelerated Gradient). Noisy Nesterovs Accelerated Gradient (NAG) on objective function F (w, z) and dataset S is defined as In both definitions, γ t is the step size, mini-batch B t is drawn uniformly from G, ζ t is a Gaussian noise drawn from N (0,, and η ∈ is the momentum coefficient. Theorem 26. Under the same assumptions on the loss function, objective function, batch size and learning rate as in Theorem 11, the generalization bounds in Theorem 11 still hold for noisy momentum method and noisy NAG. For any time step t and w <t = (w 0, w 1, ..., w t−1), let P t and P t denote the distribution of W t and W t conditioned on W <t = w <t and W <t = w <t, respectively. By definition, we have If t = 1, for both noisy momentum method and noisy NAG, we have µ B = w t−1 − γ t ∇ w F (w t−1, S B), µ B = w t−1 − γ t ∇ w F (w t−1, S B). For t > 1, if noisy momentum method is used, we have µ B = w t−1 + η(w t−1 − w t−2) − γ t ∇ w F (w t−1, S B),. 
Similarly, the following holds under noisy NAG: In either case, it can be verified that the conditions of Lemma 21 hold for β = 2γtL b and σ = σ t. The rest of the proof is the same as the proof of Theorem 11. In the Entropy-SGD algorithm due to , instead of directly optimizing the original objective F (w), we minimize the negative local entropy defined as follows: Intuitively, a wider local minimum has a lower loss (i.e., −E(w, γ)) than sharper local minima. for more details. The Entropy-SGD algorithm invokes standard SGD to minimize the negative local entropy. However, the gradient of negative local entropy is hard to compute. Thus, the algorithm uses exponential averaging to estimate the gradient in the SGLD loop; see Algorithm 1 for more details. We have the following generalization bound for Entropy-SGD. Algorithm 1: Entropy-SGD Input: Training set S = (z 1, .., z n) and loss function g(w, z). Hyper-parameters: Scope γ, SGD learning rate η, SGLD step size η and batch size b. Theorem 27. Suppose that the loss function L is C-bounded and the objective function F is Llipschitz. If batch size b ≤ n/2 and √ η ≤ ε/(20L), the following expected generalization error bound holds for Entropy-SGD: where ] is the empirical squared gradient norm, and W t,k denotes the training process with respect to S. Since g e (t, k) is at most L 2, it further implies the generalization error of Entropy-SGD is bounded Proof of Theorem 27 Define the history before time step (t, k) as follows: Since µ is only determined by W, we only need to focus on W. This proof is similar to the proof of Theorem 11. By setting P = E S [Q (S,0) ]. Suppose S = (S, z) and S = (S, 0) are fixed, let W and W denote their training process, respectively. Considering the following 3 cases: 1. W t,0 ← W t−1,K+1: In this case, for a fixed w ≤(t−1,K+1), we have In this case, fix a w ≤(t,k), applying Lemma 21 gives In this case, for a fixed w ≤(t,K), we have By applying Lemma 10, we have and Where g e (t, k) is the empirical squared gradient norm of the k-th SGLD iteration in the t-th SGD iteration, respectively. The rest of the proof is the same as the proof of Theorem 11. The continuous version of the noisy gradient descent method is the Langevin dynamics, described by the following stochastic differential equation: where B t is the standard Brownian motion. To analyze the above Langevin dynamics, we need some preliminary knowledge about Log-Sobolev inequalities. Let p t (w, y) denote the probability density function (i.e., probability kernel) describing the distribution of W t starting from w. For a given SDE such as, we can define the associated diffusion semigroup P: Definition 28 (Diffusion Semigroup). (see e.g., ) Given a stochastic differential equation (SDE), the associated diffusion semigroup P = (P t) t≥0 is a family of operators that satisfy for every t ≥ 0, P t is a linear operator sending any real-valued bounded measurable function f on R d to The semigroup property P t+s = P t • P s holds for every t, s ≥ 0. Another useful property of P t is that it maps a nonnegative function to a nonnegative function. The carré du champ operator Γ of this diffusion semigroup (w.r.t) is We use the shorthand notation Γ(f) = Γ(f, f) = β −1 ∇f 2 2, and define (with the convention that 0 log 0 = 0) Definition 29 (Logarithmic Sobolev Inequality). 
(see e.g., ) A probability measure µ is said to satisfy a logarithmic Sobolev inequality LS(α) (with respect to Γ), if for all functions f: D(E) is the set of functions f ∈ L 2 (µ) for which the quantity dµ has a finite (decreasing) limit as t decreases to 0. A well-known Logarithmic Sobolev Inequality is the following for Gaussian measures. Lemma 30 (Logarithmic Sobolev Inequality for Gaussian measure). Let µ be the centered Gaussian measure on R d with covariance matrix σ 2 I d. Then µ satisfies the following LSI: Lemma 30 states that the centered Gaussian measure with covariance matrix σ 2 I d satisfies LS(βσ 2) (with respect to Γ), where Γ = β −1 ∇f, ∇g is the carré du champ operator of the diffusion semigroup defined above. Before proving our , we need some known from Markov diffusion process. It is well known that the invariant measure of the above CLD is the Gibbs measure dµ = 1 Zµ exp(−βF (w)) dw (, (1.3) ). In other words, µ satisfies R d P t f dµ = R d f dµ for every bounded positive measurable function f, where P t is the Markov semigroup in Definition 28. The following lemma by (see also ) allows us to determine the Logarithmic Sobolev constant of the invariant measure µ. Lemma 31 (Bounded perturbation). Assume that the probability measure ν satisfies LS(α) (with respect to Γ). Let µ be a probability measure such that 1/b ≤ dµ/dν ≤ b for some constant b > 1. Then µ satisfies LS(b 2 α) (with respect to Γ). In fact, Lemma 31 is a simple consequence of the following variational formula in the special case that φ(x) = x log x, which we will also need in our proof: Lemma 32 (Variational formula). (see .g., ) Let φ: I → R on some open interval I ⊂ R be convex of class C 2. For every (bounded or suitably integrable) measurable function f: R d → R with values in I, It is worth noting the integrand of the right-hand side is nonnegative due to the convexity of φ. Recall that F S (w) = F (w, S):= F 0 (w, S) + λ w 2 2 /2 is the sum of the empirical original objective F 0 (w, S) and 2 regularization. Let dµ = Lemma 33. Under Assumption 14, let Γ(f, g) = β −1 ∇f, ∇g be the carré du champ operator of the diffusion semigroup associated to CLD, and µ be the invariant measure of the SDE. Then, µ satisfies LS(e 4βC /λ) with respect to Γ. Let µ t be the probability measure of W t. By definition of P t, for any real-valued bounded measurable function f on R d and any s, t ≥ 0, In particular, if the invariant measure µ = µ ∞ exists, we have The following lemma is crucial for establishing the first generalization bound for CLD. In fact, we establish a Log-Sobolev inequality for µ t, the parameter distribution at time t, for any time t > 0. Note that our choice of the initial distribution µ 0 is important for the proof. Lemma 34. Under Assumption 14, let µ t be the probability measure of W t in (CLD) with initial dw. Let Γ be the carré du champ operator of diffusion semigroup associated to (CLD). Then, for any f: Proof Let µ be the invariant measure of CLD. By Lemma 33 and Definition 29, By applying Lemma 32 with φ(x) = x log x, we rewrite the left-hand side as where the last equation holds by the definition of invariant measure P t f dµ = f dµ. Thus, we have Let µ t be the probability measure of W t. Lemma 32 and together imply that Since dµ dµ0 ≤ exp(2βC) and µ is the invariant measure, we conclude that Lemma 16. Under Assumption 14, let µ t be the probability measure of W t in CLD (with dµ 0 = 1 Z e −λβ w 2 2 dw). Let ν be a probability measure that is absolutely continuous with respect to µ t. 
Suppose dµ t = π t (w) dw and dν = γ(w) dw. Then it holds that: Proof Let f (w) = γ(w)/π t (w), by Lemma 34 and We can see that the left-hand side is equal to KL(γ, π t) 6, and the right-hand side is equal to This concludes the proof.. We can rewrite F S (w) = 1 n n i=1 h(w, z i). Define µ S,k and ν S,t as the probability measure of W k (in GLD) and W t (in CLD), respectively. provided a bound of KL(µ S,k, ν S,ηK) under Assumption 35. This bound enables us to derive a generalization error bound for the discrete GLD from the bound for the continuous CLD. We use the assumption from. Their work considers the following SGLD: Where g S (w k) is a conditionally unbiased estimate of the gradient ∇F S (w k). In our GLD setting, 1. The function h takes non-negative real values, and there exist constants A, B ≥ 0, such that 2. For each z ∈ Z, the function h(·, z) is M -smooth: for some M > 0, 3. For each z ∈ Z, the function h(·, z) is (m, b)-dissipative: for some m > 0 and b ≥ 0, 4. There exists a constant δ ∈, such that, for each S ∈ Z n, 5. The probability law µ 0 of the initial hypothesis W 0 has a bounded and strictly positive density p 0 with respect to the Lebesgue measure on R d, and dw) has the following expected generalization error bound: 6 Indeed, 8M 2 ), the GLD (running K iterations with the same µ 0 as CLD) has the expected generalization error bound: where C 1 is a constant that only depends on M, λ, β, b, L and d. We apply the uniform stability framework. Suppose S and S are two neighboring datasets that differ on exactly one data point. Let (W t) t≥0 and (W t) t≥0 be the process of CLD running on S and S, respectively. Let γ t and π t be the pdf of W t and W t. We have According to Fokker-Planck equation (see) for CLD, we know that It follows that (integration by parts) and (integration by parts) Together with, we have Solving this differential inequality gives By Pinsker's inequality, we can finally see that By Lemma 4, the generalization error of CLD is bounded by the right-hand side of the above inequality. Now, we prove the second part of the theorem. Let (W k) k≥0 and (W k) k≥0 be the (discrete) GLD processes training on S and S, respectively. Then for any z ∈ Z: Since λβ > 2 and λ > and From, we have Combining, and, we have By Definition 3, GLD is n -uniformly stable. Applying Lemma 4 gives the generalization bound of GLD. Lemma 37 (Exponential decay in entropy). (, Theorem 5.2 .1) The logarithmic Sobolev inequality LS(α) for the probability measure µ is equivalent to saying that for every positive function ρ in L 1 (µ) (with finite entropy), for every t ≥ 0. The following Lemma shows that P t (dµ0 dµ) = µ t in our diffusion process. Lemma 38. Let P denote the diffusion semigroup of CLD. Let µ denote the invariant measure of P and let µ t denote the probability measure of W t. Then P t (dµ0 dµ) = µ t. Proof Let dµ = µ(x) dx and dµ t = µ t (x) dx. As shown in (, page 118), our diffusion process (Smoluchowski dynamics) is reversible, which means µ(x)p t (x, y) = µ(y)p t (y, x). Thus for any g(x), we have Since g is arbitrary, P t (dµ0 dµ) and µ t must be the same. dw) has the following expected generalization error bound: In addition, if F 0 is also M -smooth and non-negative, by setting λβ > 2, λ > 1 2 and η ∈ [0, 1 ∧ 2λ−1 8M 2), the GLD process (running K iterations with the same µ 0 as CLD) has the expected generalization error bound: where C 1 is a constant that only depends on M, λ, β, b, L and d. 
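Before the proof, note that two displays used throughout this appendix, the CLD stochastic differential equation and its Fokker-Planck equation, were also lost in extraction. Given the carré du champ Γ(f) = β⁻¹‖∇f‖₂² stated above, they plausibly read as follows (our reconstruction, not guaranteed to match the original notation):

```latex
% Hedged reconstruction of the CLD dynamics and its Fokker-Planck equation.
\[
  \mathrm{d}W_t \;=\; -\nabla_w F(W_t, S)\,\mathrm{d}t
                 \;+\; \sqrt{2\beta^{-1}}\,\mathrm{d}B_t,
  \qquad W_0 \sim \mu_0,
\]
\[
  \partial_t \pi_t(w) \;=\; \nabla\!\cdot\!\big(\pi_t(w)\,\nabla F_S(w)\big)
                       \;+\; \beta^{-1}\,\Delta \pi_t(w),
\]
% whose invariant measure is the Gibbs measure stated in the text,
% d\mu = Z_\mu^{-1}\,\exp(-\beta F_S(w))\,dw.
```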
Proof of Theorem 39 Suppose S and S are two datasets that differ on exactly one data point. Let (W t) t≥0 and (W t) t≥0 be their processes, respectively. Let dµ t = π t (w) dw and dµ t = π t (w) dw be the probability measure of W t and W t, respectively. The invariant measure of CLD for S and S are denoted as µ and µ, respectively. Recall that dµ = 1 Z µ e −βF S (w) dw, dµ = 1 Z µ e −βF S (w) dw. The total variation distance of µ and µ is Since Zµ Z µ exp(−β(F S (w) − F S (w))) ∈ e Since µ and µ satisfy LS(e 4βC/λ) (Lemma 33), applying Lemma 37 with ρ = dµ0 dµ and ρ = dµ 0 dµ and Lemma 38 yields: KL(µ t, µ) ≤ exp −2λt e 4βC KL(µ 0, µ), KL(µ t, µ) ≤ exp −2λt e 4βC KL(µ 0, µ). Since KL(µ 0, µ) and KL(µ 0, µ) are upper bounded by 2βC, Pinsker's inequality implies that TV(µ t, µ) and TV(µ t, µ) are upper bounded by exp −2λt e 4βC βC. Combining with and note that TV(µ t, µ t) ≤ TV(µ t, µ) + TV(µ, µ) + TV(µ t, µ), we have −2λt e 4βC βC + 8βC 2 n. By Lemma 4, the generalization error of CLD is bounded by the right-hand side. The proof for GLD proceeds in the same way as the second part of the proof of Theorem 15. We first present the general setup of our experiments: • Small AlexNet: k is the kernel size, d is the depth of a convolution layer, fc(m) is the fullyconnected layer that has m neurons. The ReLU activation are used in the first 6 layers. • MLP: The MLP used in our experiment has 3 hidden layers, each having width 512. We also use ReLU as the activation function in MLP. Objective function: For a data point z = (x, y) in MNIST, the objective function is Random labels: Suppose the dataset contains n datapoint, and the corruption portion is p. We randomly select n · p data points, and replace their labels with random labels, as in Zhang et al. (2017a). The of this experiment (see Figure 1) is discussed in Section 3, Remark 13. Here we present our implementation details. We repeat our experiment 5 times. At every individual run, we first randomly sample 10000 data points from the complete MNIST training data. The initial learning rate γ 0 = 0.003. It decays 0.995 after every 60 steps, and it stops decaying when it is lower than 0.0005. During the training, we keep σ t = 0.2 √ 2γ t. Recall that the empirical squared gradient norm g e (t) in our bound (Theorem 9) Under review as a conference paper at ICLR 2020 Since it is time-consuming to compute the exact g e (t), in our experiment, we use an unbiased estimation instead. At every step, we randomly sample a minibatch B with batch size 200 from the training data, and use 1 200 i∈B ∇f (W t−1, z i) 2 as g e (t) to compute our bound in Figure 1. The estimation of g e (t) at every step t is shown in Figure 1(d). Since g e (t) is not very stable, in our figure, we plot its moving average over a window of size 100 to make the curve smoother (i.e., g avg (t) = 1 100 t+100 τ =t g e (τ)). In this subsection, we present some experiment for running SGLD on both MNIST and CIFAR10 datasets, to demonstrate that our bound (see Theorem 11), in particular the sum of the empirical squared gradient norms along the training path, can distinguish normal dataset from dataset that contains random labels. We note that in our experiments, the learn rate we choose is larger than that is required by the condition of Theorem 11. Due to the (non-optimal) constant in our bound, the bound is currently greater than 1, and hence we ignore the numbers on the y-axis. However, again, the curves of our bounds look quite similar to the generalization curves (see Figure 2). 
This indicates that the sum of squared empirical gradient norms is highly related to the generalization performance, and we believe by further optimizing the constants in our bound, we can achieve a generalization bound that is close to the real generalization error.
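The gradient-norm estimator described in these experiments is straightforward to implement. Below is a minimal sketch (ours, in PyTorch-style Python; function names and signatures are assumptions, not the authors' code) of a noisy gradient step together with the unbiased minibatch estimate of g_e(t) (probe batch of 200 in the paper) and the moving average (window 100) used for plotting:

```python
# Hedged sketch, not the authors' code: one GLD/SGLD step plus the unbiased
# minibatch estimate of the empirical squared gradient norm g_e(t).
import numpy as np
import torch

def noisy_gradient_step(params, loss_fn, batch, lr, sigma):
    """w <- w - lr * grad F(w, batch) + N(0, sigma^2 I). `params` are leaf
    tensors with requires_grad=True; `loss_fn` is a hypothetical callable."""
    loss = loss_fn(params, batch)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(-lr * g + sigma * torch.randn_like(p))

def estimate_ge(params, loss_fn, probe_batch):
    """Unbiased estimate of g_e(t): the average over the probe batch of the
    squared per-example gradient norm (one example at a time -- slow but
    simple)."""
    total = 0.0
    for z in probe_batch:
        loss = loss_fn(params, [z])
        grads = torch.autograd.grad(loss, params)
        total += sum(g.pow(2).sum().item() for g in grads)
    return total / len(probe_batch)

def moving_average(values, window=100):
    """Smoothing used for the g_e(t) curves in the figures."""
    return np.convolve(values, np.ones(window) / window, mode="valid")
```

Up to constants, the bound of Theorem 9 grows with the step-size-weighted cumulative sum of these estimates, which is the quantity tracked against the generalization error in the figures.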
We prove generalization error bounds for noisy gradient methods such as SGLD, continuous Langevin dynamics, and the noisy momentum method.
1,460
scitldr
In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models. In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models. In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization (PPO) algorithm. Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms previous methods based on a single dynamics model. Reinforcement learning (RL) with sparse reward is an active research area. In typical model-free RL, an agent learns a policy to maximize the expected cumulative reward under the circumstance that the agent receives a non-zero reward from the environment for each of its actions. In sparse reward RL, by contrast, the environment does not return a non-zero reward for every action of the agent but returns a non-zero reward only when certain conditions are met. Such situations are encountered in many action control problems. As in conventional RL, exploration is important at the early stage of learning in sparse reward RL, whereas a balance between exploration and exploitation is required in the later stage. Methods such as the ε-greedy strategy and the perturbation of the policy gradient with Gaussian random noise have been applied to various tasks for exploration. However, these methods have proven insufficient for successful learning when reward is sparse. To overcome this difficulty, intrinsically motivated RL has been studied, in which the agent stimulates better exploration by generating an intrinsic reward for each action by itself, even when extrinsic reward is sparse. Recently, many intrinsically-motivated RL algorithms have been devised to deal with the sparsity of reward, e.g., based on the notions of curiosity and surprise, and these algorithms have been shown to outperform the previous approaches. In essence, these algorithms use a single estimation model for the next state or the environment dynamics to generate intrinsic reward. In this paper, in order to further improve the performance of sparse-reward model-free RL, we propose a new method to generate intrinsic reward based on an ensemble of estimation models for the environment dynamics. The rationale behind our approach is that by using a mixture of several distributions, we can increase the degrees of freedom for modeling the unknown underlying model dynamics and design a better reward from the ensemble of estimation models. Numerical results show that the proposed model-ensemble-based intrinsic reward generation method yields improved performance compared to existing reward generation methods for continuous control in the sparse-reward setting. We consider a discrete-time continuous-state Markov Decision Process (MDP), denoted as (S, A, P, r, ρ_0, γ), where S and A are the sets of states and actions, respectively, P : S × A × S → [0, 1] is the transition probability function (called the model dynamics), r : S × A × S → R is the extrinsic reward function, ρ_0 : S → [0, 1] is the distribution of the initial state, and γ is the discounting factor.
A (stochastic) policy is represented by π: S × A →, where π(a|s) represents the probability of choosing action a ∈ A for given state s ∈ S. In sparse reward RL, the environment does not return a non-zero reward for every action but returns a non-zero reward only when certain conditions are met by the current state, the action and the next state (; . The goal of this paper is to optimize the policy π to maximize the expected cumulative return η(π) by properly generating intrinsic reward in such sparse reward environments. We assume that the true transition model P is unknown to the agent. Intrinsically-motivated RL adds a properly designed intrinsic reward to the actual extrinsic reward to yield a non-zero total reward for training even when the extrinsic reward returned by the environment is zero (de ; ;). One way to design such an intrinsic reward for action control is based on surprise, which is a measure of the unexpectedness of observing the next state for a given current state and action pair and is especially useful to yield better exploration . In this context, a recent work proposed a promising direction of intrinsic reward design in which the agent tries to optimize its policy π according to for some constant c > 0, where P φ is the learning model parameterized by φ that the agent has regarding the true unknown transition probability P of the environment. is the Kullback-Leibler divergence (KLD) between two distributions P and P φ of the next state for given current state-action pair (s, a), and E (s,a)∼π is the expectation over (s, a) following the policy π. Thus, the surprise is quantified as & ). Furthermore, the KLD D KL (P ||P φ)|(s t, a t) at timestep t can be lower-bounded as with an arbitrary choice of the parameter φ. Therefore, the intrinsic reward at timestep t is determined as r t,int (s t, a t, s t+1) = log P φ (st+1|st,at) P φ (st+1|st,at) , where P φ needs to be designed properly. With φ = φ(t) and φ = φ(t −), where P φ(t) and P φ(t −) are respectively the agent's model for P at timestep t and the model before the update at timestep t, the intrinsic reward is given by the computable quantity named as the 1-step surprise: The proposed 1-step intrinsic reward performs well compared to the previously designed intrinsic reward, and it is based on a single model P φ for P, where P φ for given (s, a) is modeled as Gaussian distribution . In this paper, we take the principle that D KL (P ||P φ)|(s, a) is a reasonable measure for surprise to promote exploration, and generalize the intrinsic reward design under this measure. However, instead of using a single learning model for P as in the previous approach, we propose using an ensemble of K dynamics models P φ 1, · · ·, P φ K for P, constructing the mixture distribution with the mixing coefficients q i ≥ 0 and K i=1 q i = 1, and using P K in as an estimate for the true unknown P. The rationale behind this is that by using a mixture of several distributions we increase degrees of freedom for modeling the underlying model dynamics and designing a better intrinsic reward. For the j-th model P φ j, j = 1, · · ·, K, we have as in. Thus, for P φ j, the intrinsic reward at timestep t is determined as Furthermore, can be modified to yield a 1-step surprise intrinsic reward as where P φ j l (t) and P φ j l(t)−1 are the j-th model at the update period l corresponding to timestep t and the previous update period l − 1, respectively (l(t) will become clear in the subsection 2.3). 
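For concreteness, the per-model 1-step surprise defined above can be computed as a log-density ratio under the fully factorized Gaussian dynamics models used later in the paper. The sketch below is ours, not the authors' code; `model(s, a)` is a hypothetical callable returning the predicted mean and log-variance of the next state:

```python
# Hedged sketch: per-model 1-step surprise
#   r^j_t = log p_new^j(s_{t+1}|s_t,a_t) - log p_old^j(s_{t+1}|s_t,a_t)
# under fully factorized Gaussian dynamics models.
import numpy as np

def gaussian_logpdf(x, mean, log_var):
    """Log-density of a fully factorized Gaussian evaluated at x."""
    return -0.5 * np.sum(
        log_var + (x - mean) ** 2 / np.exp(log_var) + np.log(2.0 * np.pi))

def one_step_surprise(model_new, model_old, s, a, s_next):
    """Surprise of model j: how much more likely the observed transition is
    under the freshly updated model than under the pre-update snapshot."""
    mu_new, lv_new = model_new(s, a)
    mu_old, lv_old = model_old(s, a)
    return (gaussian_logpdf(s_next, mu_new, lv_new)
            - gaussian_logpdf(s_next, mu_old, lv_old))
```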
Since the mixture model has the increased model order for modeling the underlying dynamics distribution beyond single-mode distributions, we have more freedom to design intrinsic reward. That is, we now have K values, r j t,int (s t, a t, s t+1), j = 1, · · ·, K, for candidates for intrinsic reward. In order to devise a proper use of this extra freedom, we consider the following two objective functions: where τ is a sample trajectory, c is a positive constant, and P (·|s, a) and P (·|s, a) are the true transition probability of an environment and its estimation model, respectively. The first objective function η(π) is the actual desired expected cumulative return for policy π and the second objective functionη(π) is the expected cumulative sum of the actual reward and intentionally-added surprise for policy π. We define π * andπ * as optimal solutions which maximize the objective functions and, respectively. Note that with additional intrinsic reward, the agent learnsπ *. Regarding η(π *) and η(π *), we have the following proposition: Proposition 1. Let η(π) be the actual expected discounted sum of extrinsic rewards defined in. Then, the following inequality holds: where c is a positive constant. Proposition 1 implies that better estimation of the true transition probability P by model P makes η(π *) closer to η(π *), whereπ * is learned based onη(π). Thus, for given P we want to minimize 10) over our estimation model P so that we have a tighter gap between η(π *) and η(π *), and the policyπ * learned with the aid of surprise intrinsic reward well approximates the true optimal policy π *. Regarding this minimization for tight gap, we have the following proposition: Proposition 2. Let P φ i (·|s, a), i = 1,..., K be the ensemble of model distributions, and P (·|s, a) be an arbitrary true transition probability distribution. Then, the minimum of average KLD between P (·|s, a) and the mixture model P = i q i P φ i (·|s, a) over the mixture weights {q 1, · · ·, q K |q i ≥ 0, i q i = 1} is upper bounded by the minimum of average KLD between P and P φ i over {i}: i.e., As seen in the proof of Proposition 2 in Appendix A, ] within the class of linear combinations of the individual surprise values {E (s,a)∼ π * D KL P ||P φ i |(s, a), i = 1, 2, · · ·, K}. Propositions 1 and 2 motivate us to use the minimum among the K available individual surprises for our intrinsic reward to reduce the gap between the actual target reward sum η(π *) of the intrinsic reward-aided learned policyπ * and η(π *) of the true optimal policy π *. Note that with the aid of intrinsic reward, we optimizeη(π) in fact and this makes our policy (try to) approachπ * and the sample trajectory approach (s, a) ∼π *. So, with E (s,a)∼ π * in the right-hand side of replaced simply by the computable instantaneous sample-based value and D KL P ||P φ i |(s, a) replaced by the approximation, we propose using the minimum of r j t,int (s t, a t, s t+1), j = 1, · · ·, K as the single value of intrinsic reward from the K candidates. That is, the agent selects the index j * as where r j t,int is given by, and the intrinsic reward is determined as For the dynamics models P φ 1, · · ·, P φ K, we adopted the fully-factorized Gaussian distributions (; . Then, P K in becomes the class of K-modal Gaussian mixture distributions. We first update the model ensemble P φ 1, · · ·, P φ K and the corresponding mixing coefficients q 1,..., q K. 
At the beginning, the parameters φ 1, · · ·, φ K are independently initialized, and q i's are set to 1 K for all i = 1, · · ·, K. At every batch period l, in order to jointly learn φ i and q i, we apply maximum-likelihood estimation with an L 2 -norm regularizer with KL constraints : where φ i old is the parameter of the i-th model before the update, α is the regularization coefficient, and κ is a positive constant. To solve this optimization problem with respect to {φ i}, we apply the method based on second-order approximation (a). For the update of {q i}, we apply the method proposed by and set q i as follows: where q old i is the mixing coefficient of the i-th model before the update. For numerical stability, we use the "log-sum-exp" trick for computing as well as L likelihood and ∇ φ i L likelihood. In addition, we apply simultaneous update of all φ i's and q i's, which was found to perform better than one-by-one alternating update of the K models for our problem. Although the proposed intrinsic reward generation method can be combined with general RL algorithms, we here consider the PPO algorithm, which is a popular on-policy algorithm and generates a batch of experiences of length L with every current policy. Let D be the batch of experiences for training the policy, i.e., D = (s t, a t, r total t, s t+1, · · ·, r total t+L−2, s t+L−1, a t+L−1, r total t+L−1), where a t ∼ π θ l (·|s t), s t+1 ∼ P (·|s t, a t), and r total t is the total reward. Here, π θ l is the parameterized policy at the batch period l corresponding to timestep t, · · ·, t + L − 1 (the batch period index l is now included in π θ l for clarity). The total reward at timestep t for training the policy is given by M AX: the maximum index of batch period l, K: the number of dynamics models. 5: Initialize the policy π θ0, the K transition probability models, and the corresponding mixing coefficients q 1, · · ·, q K. 6: Generate trajectories with π θ0 and add them to the initially empty replay buffer M. 7: for Batch period l = 0, · · ·, M AX − 1 do 8: by performing gradient updates for, and update q 1, · · ·, q K by performing iterations with. For this, we draw a batch D of size L randomly and uniformly from M, and perform the updates with minibatches of size L mini drawn from D for N epochs. 9: Collect s t from the environment and a t with the policy π θ l. Collect s t+1 and the extrinsic reward r t from the environment and add (s t, a t, s t+1) to M. Calculate the preliminary intrinsic reward r t,int in. 14: Acquire the normalized intrinsic rewards of the current batch D of size L by using. Train π θ l by using PPO with the total rewards and minibatch size L mini for N epochs. 16: end for where r t (s t, a t, s t+1) is the actual sparse extrinsic reward at timestep t from the environment, r t,int (s t, a t, s t+1) is the normalized intrinsic reward at timestep t, and β > 0 is the weighting factor. Note that the actually-used intrinsic rewardr t,int (s t, a t, s t+1) is obtained by applying normalization to improve numerical stability as where the unnormalized intrinsic reward r t,int (s t, a t, s t+1) is given by. Then, the policy π θ l can be updated at every batch period l with D by following the standard PPO procedure based on the total reward. Summarizing the above, we provide the pseudocode of our algorithm in Algorithm 1, which assumes PPO as the algorithm. Note that the proposed intrinsic reward generation method can also be applied to other RL algorithms. 
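To summarize the reward pipeline just described in code: each model contributes a 1-step surprise, the minimum is taken as the intrinsic reward, and the mixing coefficients are refit on the batch. The exact display for the q-update was lost in extraction, so the EM-style responsibility update below is our assumption; the log-sum-exp stabilization, however, is exactly what the text prescribes. This sketch is ours, not the authors' code:

```python
# Hedged sketch, not the authors' code. intrinsic_reward implements the
# minimum-surprise selection; update_mixing_coeffs is an EM-style M-step that
# we ASSUME for the lost q-update display, stabilized with log-sum-exp.
import numpy as np
from scipy.special import logsumexp

def intrinsic_reward(log_p_new, log_p_old):
    """log_p_new, log_p_old: shape (K,) log-densities of s_{t+1} under each
    model after/before the update period. Returns min_j r^j_t."""
    return np.min(log_p_new - log_p_old)

def update_mixing_coeffs(log_q, log_p_batch):
    """log_q: (K,) log mixing coefficients. log_p_batch: (L, K) per-sample
    log-densities under each model. Returns updated log coefficients."""
    log_resp = log_q[None, :] + log_p_batch                   # joint, (L, K)
    log_resp -= logsumexp(log_resp, axis=1, keepdims=True)    # responsibilities
    return logsumexp(log_resp, axis=0) - np.log(log_p_batch.shape[0])
```

The total reward fed to PPO is then r_total = r_ext + β · r̃_int, where r̃_int is the normalized minimum surprise; the normalization display was also lost in extraction, and standardization by running statistics is one common choice.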
In order to evaluate the performance, we considered sparse reward environments for continuous control. The considered tasks were five environments of Mujoco , OpenAI Gym : Ant, Hopper, HalfCheetah, Humanoid, and Walker2d. To implement sparse reward setting, we adopted the delay method. We first accumulate extrinsic rewards generated from the considered environments for every ∆ timesteps or until the episode ends. Then, we provide the accumulated sum of rewards to the agent at the end of the ∆ timesteps or at the end of the episode, and repeat this process. We set ∆ = 40 for our experiments. All simulations were conducted over 10 fixed random seeds. The y-axis in each figure with the title "Average Return" represents the mean value of the extrinsic returns of the most recent 100 episodes averaged over the 10 random seeds. Each colored band in figure represents the interval of ±σ around the mean curve, where σ is the standard deviation of the 10 instances of data from the 10 random seeds. (Please see Appendix B for detailed description of the overall hyperparameters for simulations.) First, in order to validate the proposed approach in which the intrinsic reward is given by the minimum surprise over the ensemble, we investigated several methods of obtaining a single intrinsic reward value from the multiple preliminary reward values r 1 t,int, · · ·, r K t,int in from the K models: the proposed minimum selection- and other possible methods such as the maximum selection, the average value taking method, and a pure 1-step surprise method with the mixture with the intrinsic reward defined as from the idea that we simply replace the unimodal model P φ in with the mixture model P K in. Figure 1: Impact of single intrinsic reward value extraction for K = 2: minimum selection (min), maximum selection (max), average (avg), and 1-step surprise with ensemble (1-step). Fig. 1 shows the mean performance of the four single intrinsic reward value extraction methods for K = 2: the proposed minimum selection-, the maximum selection, the average method, and the pure 1-step surprise with mixture. As inferred from Propositions 1 and 2, it is seen that the minimum selection yields the best performance in all the environments. (The average method yields similar performance in HalfCheetah and Humanoid.) Interestingly, the proposed approach motivated by Propositions 1 and 2 outperforms the simple mixture replacement in. With this validation, we use the minimum selection method- for all remaining studies. Next, we investigated the impact of the model order K. Since we adopt Gaussian distributions for the dynamics models P φ 1, · · ·, P φ K, the mixture P K in is a Gaussian mixture for given state-action pair (s, a). According to a recent , the model order of Gaussian mixture need not be too large to capture the underlying dynamics effectively in practice. Thus, we evaluated the performance for K = 1, 2, 3, 4. Fig. 2 shows the mean performance as a function of K for the considered sparse reward environments. It is observed that in general the performance improves as K increases, and once the proper model order is reached, the performance does not improve further or degrades a bit due to more difficult model estimation for higher model orders, as expected from our intuition. From this , we found that K = 2 seems reasonable for our model order, so we used K = 2 for all the five environments in the following performance comparison 3.3. 
yielded similar performance to that of K = 3, so we omitted the curve of K = 4 for simplicity) With the above verification, we compared the proposed method with existing intrinsic reward generation methods by using PPO as the algorithm. We considered the existing intrinsic reward generation methods: curiosity , hashing , information gain approximation (de), and single-model surprise . We also considered the method using intrinsic reward module among the most recent works introduced in Appendix C, which uses delayed sparse reward setup and provides an implementation code. For fair comparison, we used PPO with the same neural network architecture and common hyperparameters, and applied the same normalization technique in for all the considered intrinsic reward generation methods so that the performance difference only from the intrinsic reward generation method. The weighting factor β in between the extrinsic reward and the intrinsic reward should be determined for all intrinsic reward generation methods. Since each of the considered methods yields different scale of the intrinsic reward, we used an optimized β for each algorithm for each environment. In the case of the single-model surprise method and the proposed method, the hyperparameters of the single-model surprise method are tuned to yield best performance and then the proposed method employed the same hyperparameters as the single-model surprise method. We also confirmed that the hyperparameters associated with the other four methods were well-tuned in the original papers (de ; ; ;, and we used the hyperparameters provided by these methods. (Please see Appendix B for detailed description of the hyperparameters for simulations.) Fig. 3 shows the comparison . It is seen that the proposed model-ensemble-based intrinsic reward generation method yields top-level performance. Note that the performance gain by the proposed method is significant in sparse Hopper and sparse Walker2d. Various types of intrinsic motivation such as curiosity, information gain, and surprise have been investigated in cognitive science , and intrinsically-motivated RL has been inspired from these studies. used the information gain on the dynamics model as additional reward based on the notion of curiosity. defined an intrinsic reward's work with the idea of homeostasis in biology. The concept of surprise was exploited to yield intrinsic rewards . In parallel with intrinsically motivated RL, researchers developed model-based approaches for learning itself, in which the agent uses the trained dynamics model and fictitious samples generated from the model for training. suggested using the trained dynamics model to initialize the policy network at the beginning of model-free learning. proposed the policy optimization using trust-region method with a model ensemble, in which multiple prediction models for the next state for given pair of current state and action are constructed and trained by using actual samples, and the policy is trained by multiple fictitious sample trajectories from the multiple models. Our work differs from these works in that we use a model ensemble for the environment transition probability distribution and generates intrinsic reward based on this ensemble of dynamics models to enhance the performance of model-free RL with sparse reward. (Please see Appendix C for more related works.) In this paper, we have proposed a new intrinsic reward generation method based on an ensemble of dynamics models for sparse-reward reinforcement learning. 
In the proposed method, the mixture of multiple dynamics models is used to better approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the intrinsic reward computed from each dynamics model to the mixture to capture the most relevant surprise. The proposed intrinsic reward generation method was combined with PPO to construct a working algorithm. Ablation study has been performed to investigate the impact of the hyperparameters associated with the proposed ensemblebased intrinsic reward generation. Numerical show that the proposed model-ensemble-based intrinsic reward generation method outperforms major existing intrinsic reward generation methods in the considered sparse environments. A PROOFS Proposition 1. Let η(π) be the actual expected discounted sum of extrinsic rewards defined in. Then, the following inequality holds: where c is a positive constant. Proof. The inequality (a) is trivial from the definition of π *, that is, π * is an optimal policy maximizing η(π). The inequality (b) holds since Proposition 2. Let P φ i (·|s, a), i = 1,..., K be the ensemble of model distributions, and P (·|s, a) be an arbitrary true transition probability distribution. Then, the minimum of average KLD between P (·|s, a) and the mixture model P = i q i P φ i (·|s, a) over the mixture weights {q 1, · · ·, q K |q i ≥ 0, i q i = 1} is upper bounded by the minimum of average KLD between P and P φ i over {i}: i.e., Proof. Here, is valid due to the convexity of the KL divergence in terms of the second argument for a fixed first argument. is valid due to the linearity of expectation. is valid since the minimum in the right-hand side of is achieved when we assign all the mass to q i that has the minimum value of E (s,a)∼ π * D KL P ||P φ i |(s, a). (Note that the optimal {q i} in is not the same as the optimal {q i} achieving the minimum in.) Note that each step in the proof is tight except in which the convexity of the KL divergence in terms of the second argument is used. This part involves the function f (x) = − log x for 0 < x ≤ 1 since D KL (p 1 ||p 2) = p 1 (y) log p1(y) p2(y) dy, but the convexity of f (x) = − log x for 0 < x ≤ 1 is not so severe if x is not so close to zero. For the actual implementation, the code implemented by is used. The policy and dynamics models were designed by fully-connected neural networks all of which had two hidden layer of size ). The tanh activation function was used for all of the networks (; . The means of the fully factorized Gaussian dynamics models were the outputs of our networks, and the variances were trainable variables which were initialized to 1 . Other than the variances, all initialization is randomized so that each of dynamics models was set differently . For the implementation of the policy model, our method and all the considered intrinsic reward generation method used the same code for the module method. Although a recent work used TRPO (a) as the baseline learning engine, we used PPO, one of the currently most popular algorithms for continuous action control, as our baseline algorithm. While the same basic hyperparameters as those in the previous work were used, some hyperparameters were tuned for PPO. λ for the GAE method (b) was fixed to 0.95, while the discounting factor was set to γ = 0.99. The batch size L for the training of the policy was fixed to 2048. 
For the policy update using PPO, the minibatch size L mini was set to 64, the epoch number N 10, the clipping constant 0.2, and the entropy coefficient 0.0. The maximum number of timesteps was 1M for all five environments, and the maximum index of batch period, M AX, is Each of the single-model surprise method, the hashing method and our proposed method requires a replay buffer. The size of the used replay buffer for all these three methods is 1.1M. Before the beginning of the iterations, 2048 × B samples from real trajectories generated by the initial policy were added to the replay buffer. We set B = 40 for our experiments. For the methods not requiring a replay buffer, i.e., Curiosity, Information Gain, Module, and PPO Only, we ran 2048 × B = 81920 timesteps before measuring performance for fair comparison. (Therefore, every x-axis in Fig. 1, 2, and 3 shows the total timesteps of 1.08192M.) For the dynamics model learning, we set the batch size L = 2048, L mini = 64, and N = 4. The optimization was solved based on second-order approximation (a). When K = 1, the optimization reduces to the model learning problem in. , the constraint constant κ in the second-order optimization was well-tuned as 0.001. So, we used this value of κ not only to the case of K = 1 but also to the case of K ≥ 2. We further tuned the value of α in for each environment, and we set α = 0.01. For the information gain method, we need another hyperparameter h which is the weight to balance the original intrinsic reward and the homeostatic regulation term (de). We tuned this hyperparameter for each environment and the used value of h is shown in Table 1. Table 1 summarizes the weighting factor β as well as the hyperparameter h in information gain method. Here, we used the optimized weighting factor β in for each algorithm for each environment. As aforementioned, the major hyperparameters for the proposed model-ensemble-based method are the same as those used for the single-model surprise method. For the intrinsic reward module method, we checked that the provided source code in github reproduced in, as shown in Fig. 4.' Module 0.01' represents the module method with training using the sum of intrinsic reward and scaled extrinsic reward with scaling factor 0.01.' Module 0' represents training using intrinsic reward only (no addition of extrinsic reward). Both methods are introduced in, and we checked reproducibility when B = 0, i.e., we ran 2048 × B = 0 timesteps before measuring performance. We observed that our used code yielded the same as those in. Recent advanced exploration methods can be classified mainly into two categories. One is to generate intrinsic reward explicitly and to train the agent with the total reward which is the sum of the extrinsic reward and the adequately scaled intrinsic reward. The other is indirect methods which do not explicitly generate intrinsic reward. Our work belongs to the first category. There exist many exploration techniques on image spaces (; a; b;) but these works are not directly related to our work here. suggested a new intrinsic reward for sparse and binary extrinsic reward environments, based on sampling additional states from the replay buffer and setting those data as new goals. In their work the policy was based on the input of both state and goal. In our work, on the other hand, the concept of goal is not necessary. A recent work by used a delayed reward environment to propose training the module to generate intrinsic reward apart from training the usual policy. 
This delayed reward environment for the sparse reward setting is different from the previous sparse reward environments based on thresholding, in which the agent gets a non-zero reward only when it achieves a certain physical quantity (such as the distance from the origin) larger than a predefined threshold. Another work proposed generating intrinsic reward by applying a generative model with the Wasserstein-1 distance. With the concept of state-action embedding, a further approach adopted the Jensen-Shannon divergence (JSD) to construct a new variational lower bound of the corresponding mutual information, guaranteeing numerical stability. Our work differs from these two recent works in that we use a model ensemble to generate intrinsic reward. Recent indirect methods can further be classified mainly into two groups: (i) revising the original objective function to stimulate exploration, which exploits intrinsic motivation implicitly, and (ii) perturbing the parameter space of the policy. In the first group, one method proposed that exploration can be stimulated by exploiting novel state-action pairs from the past, and used sparse reward environments obtained by delaying extrinsic rewards. Another revised the original training objective by considering maximization of the divergence between the current policy and recent policies, with an adaptive scaling technique. The concept of dropout was applied to the PPO algorithm to encourage stochastic behavior of the agent episode-wise. A convex combination of the target policy and any given policy has been considered as a new exploratory policy, which corresponds to solving a surrogate Markov Decision Process and generalizes usual exploration methods such as ε-greedy or Gaussian noise. In the second group, one work proposed a goal-based exploration method for continuous control, which alternates between generating parameter-outcome pairs and perturbing certain parameters based on a goal drawn randomly from the outcome space. Recently, another work considered a pure exploration MDP without any extrinsic reward using the notion of utility, where the utility is based on JSD and the Jensen-Rényi divergence (Rényi et al., 1961). That work considers a number of models for the transition function, but uses them to compute a utility based on the average entropy of the multiple models. Our work instead uses the minimum of the surprises from the multiple dynamics models under the existence of an explicit reward, whether extrinsic or intrinsic.
For sparse-reward reinforcement learning, an ensemble of multiple dynamics models is used to generate an intrinsic reward designed as the minimum of the surprises across the models.
1,461
scitldr
Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences. We manipulate two factors that can affect the outcome of analysis. First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly initialized models. Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance. We conclude that reporting analysis with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent and interpretable results; we recommend their use as a complement to local-scope diagnostic methods. As end-to-end architectures based on neural networks became the tool of choice for processing speech and language, there has been increased interest in techniques for analyzing and interpreting the representations emerging in these models. A large array of analytical techniques has been proposed and applied to diverse tasks and architectures. Given the fast development of analysis techniques for NLP and speech processing systems, relatively few systematic studies have been conducted to compare the strengths and weaknesses of each methodology and to assess the reliability and explanatory power of their outcomes in controlled settings. This paper reports a step in this direction: as a case study, we examine the representation of phonology in neural network models of spoken language. We choose three different models that process the speech signal as input, and analyze their learned neural representations. We use two commonly applied analytical techniques, (i) diagnostic models and (ii) representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences. In our experiments, we manipulate two important factors that can affect the outcome of analysis. One pitfall not always successfully avoided in work on neural representation analysis is the role of learning. Previous work has shown that sometimes non-trivial representations can be found in the activation patterns of randomly initialized, untrained neural networks. Here we investigate the representations of phonology in neural models of spoken language in light of this fact, as extant studies have not properly controlled for the role of learning in these representations. The second manipulated factor in our experiments is the scope of the extracted neural activations. We control for the temporal scope, probing both local activations corresponding to a few milliseconds of the speech signal, as well as global activations pooled over the whole utterance. When applied to global-scope representations, both methods detect a robust difference between the trained and randomly initialized target models.
However, we find that in our setting, RSA applied to local representations shows low correlations between phonemes and neural activation patterns for both trained and randomly initialized target models, and for one of the target models the local diagnostic classifier only shows a minor difference in the decodability of phonemes from the randomly initialized versus the trained network. This highlights the importance of reporting analysis with randomly initialized models as a baseline. Many current neural models of language learn representations that capture useful information about the form and meaning of the linguistic input. Such neural representations are typically extracted from activations of various layers of a deep neural architecture trained for a target task such as automatic speech recognition or language modeling. A variety of analysis techniques have been proposed in the academic literature to analyze and interpret representations learned by deep learning models of language, as well as to explain their decisions; see existing surveys for a review. Some of the proposed techniques aim to explain the behavior of a network by tracking the response of individual or groups of neurons to an incoming trigger. In contrast, a larger body of work is dedicated to determining what type of linguistic information is encoded in the learned representations. This type of analysis is the focus of our paper. Two commonly used approaches to analyzing representations are: probing techniques, or diagnostic classifiers, i.e. methods which use the activations from different layers of a deep learning architecture as input to a prediction model; and Representational Similarity Analysis (RSA), borrowed from neuroscience and used to correlate the similarity structures of two different representation spaces. We use both techniques in our experiments to systematically compare their output. Research on the analysis of neural encodings of language has shown that in some cases, substantial information can be decoded from activation patterns of randomly initialized, untrained recurrent networks. It has been suggested that the dynamics of the network together with the characteristics of the input signal can result in non-random activation patterns. Using activations generated by randomly initialized recurrent networks has a history in speech recognition and computer vision. Two better-known families of such techniques are called Echo State Networks (ESN) and Liquid State Machines (LSM). The general approach (also known as reservoir computing) is as follows: the input signal is passed through a randomly initialized network to generate a nonlinear response signal. This signal is then used as input to train a model to generate the desired output at a reduced cost. We also focus on representations from randomly initialized neural models, but do so in order to show how training a model changes the information encoded in the representations according to our chosen analysis methods. Since the majority of neural models of language work with text rather than speech, the bulk of work on representation analysis has been focused on (written) word and sentence representations. However, a number of studies analyze neural representations of phonology learned by models that receive a speech signal as their input.
Among studies that track the responses of neurons to controlled input, one analyzes local representations acquired from a deep model of phoneme recognition and shows that both individual nodes and groups of nodes in the trained network are selective to various phonetic features, including manner of articulation, place of articulation, and voicing. Another uses a similar approach and suggests that phonemes are learned as an intermediate representation for predicting graphemes, especially in very deep layers. Others predominantly use diagnostic classifiers for phoneme and grapheme classification from neural representations of speech. One study uses a linear classifier to predict phonemes from local activation patterns of a grounded language learning model, where images and their spoken descriptions are processed and mapped into a shared semantic space; its results show that the network encodes substantial knowledge of phonology on all its layers, but most strongly on the lower recurrent layers. Another uses diagnostic classifiers to study the encoding of phonemes in an end-to-end ASR system with convolutional and recurrent layers, by feeding local (frame-based) representations to an MLP to predict a phoneme label, and shows that phonological information is best represented in the lowest input and convolutional layers and, to some extent, in low-to-middle recurrent layers. Follow-up work extends this to multiple languages (Arabic and English) and different datasets, and shows a consistent pattern across languages and datasets where both phonemes and graphemes seem to be encoded best in the middle recurrent layers. None of these studies report on phoneme classification from randomly initialized versions of their target models, and none use global (i.e., utterance-level) representations in their analyses. In this section we first describe the speech models which are the targets of our analyses, followed by a discussion of the methods used here to carry out these analyses. We will release source code to run all our analyses upon publication of this paper. We tested the analysis methods on three target models trained on speech data. The first model is a transformer model trained on the automatic speech recognition (ASR) task. More precisely, we used a pretrained joint CTC-Attention transformer model from the ESPnet toolkit, trained on the Librispeech dataset. (We used ESPnet code from commit 8fdd8e9 with the pretrained model available from https://drive.google.com/open?id=1BtQvAnsFvVi-dp_qsaFP7n4A_5cwnlR6.) The architecture is based on the hybrid CTC-Attention decoding scheme adapted to the transformer model. The encoder is composed of two 2D convolutional layers (with stride 2 in both time and frequency) and a linear layer, followed by 12 transformer layers, while the decoder has 6 such layers. The convolutional layers use 512 channels, which is also the output dimension of the linear and transformer layers. The dimensions of the flattened outputs of the two convolutional layers (along frequencies and channels) are 20922 and 10240 respectively; we omit these two layers in our analyses due to their excessive size. The input to the model is a spectrogram with 80 coefficients and 3 pitch features, augmented with the SpecAugment method. The output is composed of 5000 SentencePiece subword tokens. The model is trained for 120 epochs using the Noam optimization strategy.
Decoding is performed with a beam of size 60, for reported word error rates of 2.6% and 5.7% on the test set (for the clean and other subsets respectively). The second model, the Visually Grounded Speech (VGS) model, is trained on the task of matching images with their corresponding spoken captions. We use an architecture which implements several improvements over earlier RNN-based models, and train it on the Flickr8K Audio Caption Corpus. The speech encoder consists of one 1D convolutional layer (with 64 output channels), which subsamples the input by a factor of two, and four bidirectional GRU layers (each of size 2048), followed by a self-attention-based pooling layer. The image encoder uses features from a pre-trained ResNet-152 model followed by a linear projection. The loss function is a margin-based ranking objective. We trained the model using the Adam optimizer with a cyclical learning rate schedule. The inputs are MFCC features with total energy and delta and double-delta coefficients, with combined size 39. The third model is a middle ground between the two previous ones. It is trained as a speech recognizer similarly to the transformer model, but the architecture of the encoder follows the RNN-VGS model (except that the recurrent layers are one-directional in order to fit the model in GPU memory). The last GRU layer of the encoder is fed to an attention-based decoder, here composed of a single layer of 1024 GRU units. The model is trained with the Adadelta optimizer. The input features are identical to the ones used for the VGS model; it is also trained on the Flickr8k spoken caption data, using the original written captions as transcriptions. The architecture of this model is not optimized for the speech recognition task: rather, it is designed to be as similar as possible to the RNN-VGS model while still performing reasonably on speech recognition. We consider two analytical approaches: • A diagnostic model is a simple, often linear, classifier or regressor trained to predict some information of interest given neural activation patterns. To the extent that the model successfully decodes the information, we conclude that this information is present in the neural representations. • Representational similarity analysis (RSA) is a second-order approach where similarities between pairs of stimuli are measured in two representation spaces, e.g., the neural activation pattern space and a space of symbolic linguistic representations such as sequences of phonemes or syntax trees. The correlation between these pairwise similarity measurements then quantifies how closely the two representations are aligned. The diagnostic models have trainable parameters, while the RSA-based models do not, except when using a trainable pooling operation. We also consider two ways of viewing activation patterns in hidden layers as representations: • Local representations at the level of a single frame or time-step; • Global representations at the level of the whole utterance. Combinations of these two facets give rise to the following concrete analysis models. Local diagnostic classifier. We use single frames of input (MFCC or spectrogram) features, or activations at a single timestep, as input to a logistic diagnostic classifier which is trained to predict the phoneme aligned to this frame or timestep.
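As a rough illustration of this probe (the data split and solver settings are our assumptions, not the paper's), the following sketch fits a logistic regression on per-frame activations and reports the relative error reduction over the majority-class baseline used as an evaluation metric later in this section:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def local_diagnostic_probe(acts, phonemes):
        # acts: (n_frames, dim) activations from one layer, one row per timestep.
        # phonemes: (n_frames,) phoneme label aligned to each frame.
        n = len(phonemes)
        idx = np.random.permutation(n)
        tr, te = idx[: n // 2], idx[n // 2:]   # half for training, half held out
        clf = LogisticRegression(max_iter=1000).fit(acts[tr], phonemes[tr])
        err = 1.0 - clf.score(acts[te], phonemes[te])
        # Majority-class baseline error on the held-out half.
        vals, counts = np.unique(phonemes[te], return_counts=True)
        base_err = 1.0 - counts.max() / len(te)
        return (base_err - err) / base_err     # relative error reduction (RER)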
Local RSA. We compute two sets of similarity scores. For neural representations, these are cosine similarities between neural activations from pairs of frames. For phonemic representations, our similarities are binary, indicating whether a pair of frames is labeled with the same phoneme. Pearson's r coefficient computed against a binary variable, as in our setting, is also known as the point biserial correlation. Global diagnostic classifier. We train a linear diagnostic classifier to predict the presence of phonemes in an utterance based on global (pooled) neural activations. For each phoneme j, the predicted probability that it is present in the utterance with representation h is denoted P(j|h) and computed as P(j|h) = σ(w_j · Pool(h) + b_j), where Pool is one of the pooling functions in Section 3.2.1 and σ is the logistic sigmoid. Global RSA. We compute pairwise similarity scores between global (pooled; see Section 3.2.1) representations and measure Pearson's r with the pairwise string similarities between the phonemic transcriptions of utterances. We define string similarity as sim(a, b) = 1 − Levenshtein(a, b) / max(|a|, |b|), where | · | denotes string length and Levenshtein is the string edit distance. The representations we evaluate are sequential: sequences of input frames, or of neural activation states. In order to pool them into a single global representation of the whole utterance, we test two approaches. Mean pooling. We simply take the mean of each feature along the time dimension. Attention-based pooling. Here we use a simple self-attention operation with parameters trained to optimize the score of interest, i.e., the RSA score or the error of the diagnostic classifier. The attention-based pooling operator performs a weighted average over the positions in the sequence, using scalar weights: the pooled utterance representation is defined as Pool(h) = Σ_t α_t h_t, with the weights computed as α_t = exp(w · h_t) / Σ_t′ exp(w · h_t′), where w are learnable parameters and h_t is an input or activation vector at position t. (Note that visually grounded speech models such as those of Chrupała and colleagues use similar mechanisms to aggregate the activations of the final RNN layer; here we use such a mechanism as part of the analytical method, to pool any sequential representation of interest. A further point worth noting is that we use scalar weights α_t and apply a linear model for learning them, in order to keep the analytic model simple and easy to train consistently.)
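A minimal PyTorch sketch of this scalar attention pooling might look as follows (the initialization scheme is an assumption):

    import torch
    import torch.nn as nn

    class ScalarAttentionPool(nn.Module):
        def __init__(self, dim):
            super().__init__()
            # Learnable weight vector w used to score each timestep.
            self.w = nn.Parameter(torch.randn(dim) / dim ** 0.5)

        def forward(self, h):              # h: (T, dim) activations for one utterance
            scores = h @ self.w            # one scalar score per timestep
            alpha = torch.softmax(scores, dim=0)
            # Weighted average over time yields the pooled utterance vector.
            return (alpha.unsqueeze(-1) * h).sum(dim=0)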
For RSA we use Pearson's r to measure how closely the activation similarity space corresponds to the phoneme or phoneme-string similarity space. For the diagnostic classifiers, we use the relative error reduction (RER) over the majority class baseline to measure how well phoneme information can be decoded from the activations. Effect of learning. In order to be able to assess and compare how sensitive the different methods are to the effect of learning on the activation patterns, it is important to compare the score on the trained model to that on the randomly initialized model; we thus always display the two jointly. We posit that a desirable property of an analytical method is that it is sensitive to the learning effect, i.e., that the scores on trained versus randomly initialized models are clearly separated. Coefficient of partial determination. Correlation between the similarity structures of two representational spaces can, in principle, be partly due to the fact that both these spaces are correlated to a third space. For example, were we to get a high value for global RSA for one of the top layers of the RNN-VGS model, we might suspect that this is due to the fact that string similarities between phonemic transcriptions of captions are correlated to visual similarities between their corresponding images, rather than due to the layer encoding phoneme strings. In order to control for this issue, we can carry out RSA between two spaces while controlling for a third, confounding, similarity space. We do this by computing the coefficient of partial determination, defined as the relative reduction in error caused by including variable X in a linear regression model for Y: R²_partial = (e_{Y∼Z} − e_{Y∼X+Z}) / e_{Y∼Z}, where e_{Y∼X+Z} is the sum of squared errors of the model with all variables, and e_{Y∼Z} is the sum of squared errors of the model with X removed. Given the scenario above with the confounding space being visual similarity, we identify Y as the pairwise similarities in phoneme string space, X as the similarities in neural activation space, and Z as the similarities in the visual space. The visual similarities are computed via cosine similarity on the image feature vectors corresponding to the stimulus utterances.
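This quantity is straightforward to compute; a sketch using scikit-learn, with variable names chosen for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def partial_r2(y, x, z):
        # y: phoneme-string similarities, x: activation similarities,
        # z: visual similarities (1-D arrays over the same stimulus pairs).
        Z = z.reshape(-1, 1)
        XZ = np.column_stack([x, z])
        e_full = np.sum((y - LinearRegression().fit(XZ, y).predict(XZ)) ** 2)
        e_red = np.sum((y - LinearRegression().fit(Z, y).predict(Z)) ** 2)
        # Relative reduction in squared error caused by adding X.
        return (e_red - e_full) / e_red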
All analytical methods are implemented in PyTorch. The diagnostic classifiers are trained using Adam, with a learning rate schedule which scales the rate by 0.1 after 10 epochs with no improvement in accuracy. We terminate training after 50 epochs with no improvement. Global RSA with attention-based pooling is trained using Adam for 60 epochs with a fixed learning rate (0.001). For all trainable models, we snapshot the model parameters after every epoch and report the results for the epoch with the best validation score. In all cases we sample half of the available data for training (if applicable), holding out the other half for validation. Sampling data for local RSA. When computing RSA scores, it is common practice in neuroscience research to use the whole upper triangular part of the matrices containing pairwise similarity scores between stimuli, presumably because the number of stimuli is typically small in that setting. In our case the number of stimuli is very large, which makes using all the pairwise similarities computationally taxing. More importantly, when each stimulus is used for computing multiple similarity scores, these scores are not independent, and the score distribution changes with the number of stimuli. We therefore use an alternative procedure where each stimulus is sampled without replacement and used only in a single similarity calculation. Figures 1-3 display the outcomes of analyzing our target models. All three figures are organized in a 2 × 3 matrix of panels, with the top row showing the diagnostic methods and the bottom row the RSA methods; the first column corresponds to local scope, while columns two and three show global scope with mean and attention pooling respectively. The data points are displayed in the order of the hierarchy of layers for each architecture, starting with the input (layer id = 0). In all the reported experiments, the score of the diagnostic classifiers corresponds to relative error reduction (RER), whereas for RSA we show Pearson's correlation coefficient. Figure 4 shows the results of global RSA with mean pooling on the RNN-VGS target model, while controlling for visual similarity as a confound. We discuss the patterns of results observed for each model separately in the following sections. As can be seen in Figure 1, most reported experiments (with the exception of local RSA) suggest that phonemes are best encoded in the pre-final layers of the deep network. The results also show a strong impact of learning on the predictions of the analytical methods, as is evident from the difference between the performance using representations of the trained versus randomly initialized models. Local RSA shows low correlation values overall, and does not separate the trained versus random conditions well. Most experimental findings displayed in Figure 2 suggest that phonemes are best encoded in RNN layers 3 and 4 of the VGS model. They also show that the representations extracted from the trained model encode phonemes more strongly than the ones from the random version of the model. However, the impact of learning is more salient with global than with local scope: the scores of both the local classifier and local RSA on random versus trained representations are close to each other for all layers. For the global representations, the performance on trained representations quickly diverges from that on random representations from the first RNN layer onward. Furthermore, as demonstrated in Figure 4, for the top RNN layers of this architecture, the correlation between similarities in the neural activation space and similarities in the phoneme string space is not solely due to both being correlated to visual similarities: indeed, similarities in activation space contribute substantially to predicting string similarities, over and above the visual similarities. The overall qualitative patterns for the third target model are the same as for RNN-VGS. The absolute scores for the global diagnostic variants are higher, and the curves steeper, which may reflect that the objective for this target model is more closely aligned with encoding phonemes than in the case of RNN-VGS. Here we discuss the impact of each factor on the outcome of our analyses. Choice of method. The choice of RSA versus diagnostic classifier interacts with scope, and thus these are better considered as a combination. Specifically, local RSA as implemented in this study shows only weak correlations between neural activations and phoneme labels. This is possibly related to the range restriction of the point biserial correlation with unbalanced binary variables. Impact of learning. Applied to the global representations, both analytical methods are equally sensitive to learning. The results on random versus trained representations for both methods start to diverge noticeably from the early recurrent layers. The separation for the local diagnostic classifiers is weaker for the RNN models. Representation scope. Although the temporal scale of the extracted representations (especially those of spoken language) has not received much attention and scrutiny, our experimental findings suggest that this is an important choice. Specifically, global representations seem to be more sensitive to learning, and more consistent across different analysis methods. Results with attention-based learned pooling are in general more erratic than with mean pooling. This reflects the fact that analytical models which incorporate learned pooling are more difficult to optimize and require more careful tuning compared to mean pooling. Given the above findings, we now give tentative recommendations regarding how to carry out representational analyses of neural models. • Analyses should be run on randomly initialized target models. We saw that in most cases scores on such models are substantially above zero, and in some cases relatively close to scores on trained models.
• It is unwise to rely on a single analytical approach, even a widely used one such as the local diagnostic classifier. With solely this method we would have concluded that, in RNN models, learning has only a weak effect on the encoding of phonology. • Global methods applied to pooled representations should be considered as a complement to standard local diagnostic methods. In our experiments they show more consistent results. We carried out a systematic study of analysis methods for neural models of spoken language and offered some suggestions on best practices in this endeavor. Nevertheless, our work is only a first step, and several limitations remain. The main challenge is that it is often difficult to completely control for the many factors of variation present in the target models, due to the fact that a particular objective function, or even a dataset, may require relatively important architectural modifications. In future work we plan to sample target models with a larger number of plausible combinations of factors. Likewise, the choice of an analytical method may often entail changes in other aspects of the analysis: for example, unlike a global diagnostic classifier, global RSA captures the sequential order of phonemes. In future work we hope to further disentangle these differences.
We study representations of phonology in neural network models of spoken language with several variants of analytical techniques.
1,462
scitldr
Super Resolution (SR) is a fundamental and important low-level computer vision (CV) task. Different from traditional SR models, this study concentrates on a specific but realistic SR issue: how can we obtain satisfactory SR results from compressed JPG (C-JPG) images, which widely exist on the Internet? In general, C-JPG can release storage space while keeping considerable visual quality. However, further image processing operations, e.g., SR, will suffer from enlarging the inner artificial details and result in unacceptable outputs. To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss. In short, there are mainly three contributions in this paper. First, our research can generate high-quality SR images for prevalent C-JPG images. Second, we propose a functional sub-model to recover information for C-JPG images, instead of adopting the noise-elimination perspective of traditional SR approaches. Third, we further integrate a cycle loss into the SR solver to build a hybrid loss function for better SR generation. Experiments show that our approach achieves outstanding performance among state-of-the-art methods. With the marvelous achievements of deep learning (DL) in computer vision (CV), Super Resolution (SR) attracts much attention for its crucial value as the basis of many high-level CV tasks. Deep learning Super Resolution (DL-SR) algorithms strive to find the complex nonlinear mapping between low resolution (LR) images and their high resolution (HR) counterparts. However, the learned model only reflects the inverse of the down-scaling mapping which is used to obtain LR images from their HR counterparts. In other words, if there are some spots/stains in LR inputs, the SR model will treat them as inherent elements, and the corresponding SR outputs will enlarge these undesirable details. In reality, on the Internet, JPG compression is probably the most commonly used pattern for storage space reduction. That is to say, the LR image will be further processed into a compressed JPG (C-JPG) image. The quality of the C-JPG drops greatly, and the compression may yield unpleasant artifacts, for example the presence of obvious partition lines, which vastly deteriorates the overall visual quality. Hence, directly solving high-level CV tasks with these C-JPG images will lead to poor performance. In this paper, we propose a lossless SR model to obtain images with satisfying quality from low-quality C-JPG inputs. The deterioration in C-JPG makes the SR processing a huge challenge. In this paper, we focus on the more realistic C-JPG SR problem. Many SR methods for real-world-condition images have already been developed. Among them, some models regard the noise as a kernel estimation problem which can be addressed with additive Gaussian noise. However, the distributions of most real images are inconsistent with the hypothetical Gaussian distribution. Taking C-JPG images as an example, the image compression operation removes information from the original image instead of adding specific noise. Other models learn related information from irrelevant LR-HR images to obtain similar representations via an unsupervised strategy. None of them solves the problem well. In general, most LR images are produced by performing a traditional interpolation method (mostly bicubic) on their HR counterparts. The SR training process should recover this down-scaling mapping in a reverse manner.
Referring to our C-JPG SR issue, when images are retrieved from Google searches, a lot of unpleasant details are displayed, especially at the edges of objects. However, the low quality of such images makes former SR methods fail to generate applicable results. As shown in Fig. 1, the SR generations of traditional bicubic interpolation, the leading SR algorithm RCAN, and RCAN with pre-denoised input all demonstrate poor quality given low-quality C-JPG inputs. Damaged grids are apparently enlarged by the approaches designed for traditional non-JPG datasets. More specialized analysis can be found in the research of Köhler et al. Note that image pairs with a fixed down-scaling kernel have been successfully learned by SR models, such as SRGAN, EDSR, and RDN. In this study, we deliberately build a more complicated dataset by adding JPG-format LR images to the training data. To be specific, we have three kinds of training inputs: C-JPG LR, LR, and HR images. The whole training process includes two separate functional components: a missing-detail recovery part (JPG recovering stage) and an SR mapping learning part (SR generating stage). In order to remove ringing and checkerboard effects, as well as other noise, the first sub-model is trained with pre-processed C-JPG LR images as inputs and the original LR ones as the supervised information. The function of this stage is to recover the LR image from its compressed counterpart. Hence, the outputs (LR(C-JPG)) of the first part are greatly improved and free of the partition-line phenomenon. Based on these improved LR images, the latter sub-model continues to learn the mapping between LR(C-JPG) and HR. Therefore, an integrated pipeline for SR representation between C-JPG and HR images is achieved through the two jointly used sub-models. In short, there are mainly three contributions in this study: • Our research can be regarded as a universal SR method that generates SR images from C-JPG inputs, which is empirically shown to be more difficult than SR with non-JPG inputs. • We regard this specific SR task as an information-recovery process for the inputs, in contrast to the conventional denoising assumption involving down-sampling and degradation parts. In this view, a recovering model is first introduced to generate satisfactory intermediates from C-JPG inputs. • We further propose an integrated SR model training pipeline with two-level data, i.e., C-JPG LR and LR images, as well as a new integrated loss function. The experimental results demonstrate that our method can surpass traditional SR models. Single image super-resolution (SISR) is a widely studied task. A large body of research has been developed to resolve this challenge, and great progress has been made. At first, researchers obtained SR generations with an estimated pixel-value interpolation function. For instance, bicubic interpolation regards the pixel spreading in a limited patch as a specified linear parametric function. However, these methods are prone to generating fuzzy SR results. Later, depending on external or internal datasets, two different classes of SR methods (external and internal based models) were developed. In detail, external models learn the mapping from many image pairs with various learning algorithms, such as nearest neighbor, kernel ridge regression, sparse learning, cluster learning, manifold learning, and neural networks. Internal models leverage similar patches within the image or across scales, and these approaches mainly focus on obtaining more self-similar patches.
Recently, deep learning methods, which have achieved excellent performance in many high-level CV tasks, have been introduced to SR. DL-SR focuses on learning the relationship between an LR image and its corresponding HR one. To our knowledge, the most cutting-edge SR algorithms are DL-SR models, thanks to the powerful learning ability of deep learning. In detail, SRCNN was the first end-to-end DL-SR network. Though there are only three convolutional layers in SRCNN, it greatly surpasses former traditional methods in peak signal-to-noise ratio (PSNR). Ever since SRCNN, a number of novel models have arisen with better generation ability, such as FSRCNN, VDSR, SRGAN, EDSR, RDN, DBPN, ESRGAN, and RCAN. In DL-SR, the goal has developed into two different aspects, i.e., higher accuracy (high PSNR and SSIM values) and more photo-realistic generation (better feature similarity) in the visual sense. Based on this consideration, various structures have been designed to extract more crucial features from the LR image. For example, VDSR introduces global residual learning and trains a multi-scale model by sharing parameters across different scales. Inspired by the stunning performance of ResNet, SRGAN replaces most basic convolutional blocks with residual ones. Moreover, it combines MSE loss, GAN loss, perceptual loss, and global loss for photo-realistic SR generations. Experiments show its great power in generating photo-realistic images. Based on the generator of SRGAN, EDSR removes all batch normalization to reduce the computational complexity, and replaces the MSE loss with an L1 norm loss. Moreover, the model increases the channel number to 256 in each layer, and it achieved state-of-the-art results in NTIRE2017. However, there are over 40 million parameters in EDSR. Benefiting from densely connected architectures, such as DenseNet, RDN surpasses EDSR in accuracy. RCAN upgrades the dense block to a residual-in-residual (RIR) block and combines it with a channel attention mechanism. To our knowledge, RCAN is the leading method in accuracy-oriented SISR. Meanwhile, some researchers aim to build lightweight and accurate models, such as CARN, s-LWSR, and FALSR. These methods try to reduce parameters and operations while keeping decent performance. CARN leverages a cascading mechanism upon residual blocks to obtain better feature representations from multi-level layers. At the same time, to decrease the model size, some residual blocks are replaced by group convolution parts, similar to the depth-wise convolution in MobileNets. s-LWSR introduces a more flexible SR model with an additional information pool. Inspired by NAS, FALSR automatically searches for a desirably lightweight and accurate SR model without human design. All methods mentioned above try to solve the SR problem on algorithmically generated LR-HR image pairs. However, the scale kernel and degradation function are usually undefined in the real world, which results in SR images accompanied by amplified noise. To overcome this weakness, ZSSR proposes an unsupervised zero-shot SR method. First, the method generates many derived HR and LR images from the input and builds a simple CNN network to learn the mapping between the pre-processed LR image and its HR counterpart. So far as we know, ZSSR greatly surpasses other supervised SR models under the non-ideal conditions mentioned above. CinCGAN adopts a cycle-in-cycle structure to address the blind SR issue.
In CinCGAN, inputs with noise are first processed to generate intermediate LR images with less noise; then, these LR images are jointly restored and scaled up with the help of an extra pre-trained SR model. Besides, UMGSR proposes multi-gram losses to generate perceptually satisfying images. On the supervised side, another recent pipeline learns to generate SR images from realistic training data generated from raw images. Challenge formulation. In general, the SISR issue can be formulated as y = F(x) + z, where y and x represent the HR and LR images respectively. Here, F is the SR mapping in the hypothesis set space (e.g., neural networks), and z denotes additional irrelevant information, such as noise and blurred details. Normally, most SISR models are trained on standard datasets, where the input LR is directly downsampled from HR by a chosen method (e.g., bicubic). As a result, we assume that z equals zero in such datasets. In this paper, we investigate a specific situation where the LR input is a low-quality C-JPG image. This situation commonly exists on the Internet, due to storage reduction or copyright protection. Since LR inputs are first deteriorated to low-quality images, we redefine the above SISR formulation as y = F(x + w). Here, w refers to the information missing due to the compression. Based on the specifics of the low-quality JPG SR issue, our model can be separated into two functional stages, JPG recovering and SR generating; hence, our SISR formulation consists of two stages. As shown in Figure 2, the training data are: C-JPG LR images (LR(C-JPG)); normal LR images (LR(normal)), which provide the supervised information in the first stage; and HR images, which are the supervised information in the second stage. Stage I: JPG Recovering. For low-quality JPG images, the compression operation discards most of the useful details. As a result, rebuilding details toward the original LR image plays a key role in this stage. To this end, we design a specialized network to learn the mapping between LR(C-JPG) and LR(normal). The comparison of LR(C-JPG) and LR(normal) is shown in Figure 3. Given the original LR image, we first re-save it to a low-quality JPG version as our training material. In detail, JPG compression abandons information in the Cb/Cr channels on every 8 × 8 patch. As shown in Figure 3, the size of the original LR image is 82K bytes, while its C-JPG version (80 percent lower in quality) is only 12K bytes. (Figure 3: Comparison of a compressed JPG image and its LR counterpart. We choose a typical building image from the Urban100 dataset and leverage Pillow to compress it to 80 percent lower quality. Benefiting from the compression process, the C-JPG takes only 12K of storage, compared to the 82K LR image; however, it displays more unpleasant details than the LR image.) However, the quality of the image is greatly decreased. This phenomenon can be clearly seen from the visual contrast in Figure 3: mussy details are generated in the C-JPG image, such as the irregular shape of the windows and the blurry object edges. Normally, to achieve a better balance between storage cost and customer satisfaction, many websites provide partial-quality JPG images. When performing the SR operation on these images, the effect of these deuterogenic noises is enlarged, leading to an aggressively unpleasant visual impression.
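For reference, this compression step can be reproduced in a couple of lines with Pillow (file names are illustrative):

    from PIL import Image

    img = Image.open("lr_image.png")       # original LR image
    img.save("lr_cjpg.jpg", quality=20)    # save a low-quality C-JPG version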
Since this recovering task shares a similar goal with the SR task, that is, restoring accurate information at the pixel level, we employ an effective SR model, s-LWSR, to address the detail-recovery issue. As shown in stage I of Figure 2, LR(C-JPG) images first go through the recovering model. In detail, a simple convolutional layer transforms the three RGB channels into a fixed number of feature channels. The main body of the recovering stage shares the same structure as our SR generating stage: 26 stacked residual blocks with an additional information pool to intensively extract information. Experiments in s-LWSR have proven the powerful learning ability of this architecture. Finally, a convolutional layer inversely transforms the intermediate feature maps back to the final 3 RGB channels. Eventually, the recovered LR image achieves a certain degree of visual quality, and the artificial traces appear smooth, which greatly reduces the pixel inconsistency. Furthermore, SR generation will benefit from these better-quality LR inputs derived from C-JPG in stage I. Stage II: SR Generating. Because SR generating acts as a complementary task and is not the main contribution of our paper, we simply bring a state-of-the-art SR method, s-LWSR, into our model. As mentioned, s-LWSR leverages the combination of multi-level information from the front half of the model; more details are shown in Figure 4 (the s-LWSR architecture). Specifically, chosen layers are stacked to form the information pool, which transfers the combination of abundant low-level information to high-level layers through a series of activation operations. In the last part, a specified upsampling block is applied to scale the image up to the ideal size. Normally, 2×, 3×, 4×, and 8× are the most frequent SR tasks. Some models, such as MDSR and RCAN, can deal with all these scale-up tasks using only one block with shared parameters. In this paper, following most SR algorithms, we deal with the single scale factor 4×. As a result, the upsampling stage in our model contains two subpixel interpolation layers, as in ESPCN. In detail, each upsampling layer performs one 2× scale-up. Finally, we obtain SR generations with satisfactory visual quality.
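At inference time, the two stages simply compose; a minimal sketch, where the model handles rec_model and sr_model are placeholders for the trained stage-I and stage-II networks:

    import torch

    @torch.no_grad()
    def two_stage_sr(cjpg, rec_model, sr_model):
        # cjpg: (1, 3, H, W) tensor holding the compressed JPG input.
        lr_rec = rec_model(cjpg)   # stage I: recover a clean LR estimate
        sr = sr_model(lr_rec)      # stage II: 4x upscaling of the recovered LR
        return sr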
The framework of our approach includes two typical tasks, JPG recovering and SR generating, involving three similarity loss functions as follows. In stage I, each pixel in the C-JPG image corresponds to one counterpart in the non-JPG LR image, and the aim is to recover accurate pixel values. As a result, we choose the following L1 loss:

    L_1 = (1 / (W · H)) Σ_{x=1..W} Σ_{y=1..H} | G_Rec(I_C-JPG)_{x,y} − I_LR_{x,y} |,

where W and H refer to the width and height of the image respectively, and G_Rec is the transformation of stage I. On the basis of the pre-processed C-JPG, we further scale the intermediate result (LR_rec) up to the default size of the HR image through the SR operation. Since our SR model pursues more accurate SR generation, the loss function in stage II inherits from that of former outstanding SR models:

    L_2 = (1 / (s² · W · H)) Σ_{x=1..sW} Σ_{y=1..sH} | G_SR(LR_rec)_{x,y} − I_HR_{x,y} |,

where s is the scale factor and G_SR denotes the stage II transformation. Different from the normal SR task, C-JPG SR involves a larger difference between the input C-JPG and its final supervised HR image. Inspired by the marvelous development of unsupervised style-to-style learning, like CycleGAN, CinCGAN, and WESPE, we apply a cycle loss as the third component in our final loss function. Taking CycleGAN as an example, images in different domains are transferred with inconsistent contents; the cycle loss can keep the content of the original image, while changing the style features, such as colors and texture. In this paper, SR generations are downscaled to the input size to be further compared with the corresponding non-JPG LR images. Here, we also use an L1 loss, where Bic refers to bicubic interpolation downsampling:

    L_3 = (1 / (W · H)) Σ_{x=1..W} Σ_{y=1..H} | Bic(I_SR)_{x,y} − I_LR_{x,y} |.

The final loss function is the combination of the three L1 losses above with equal weight: L = L_1 + L_2 + L_3.
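A compact PyTorch sketch of this hybrid objective follows; the model handles are placeholders, and using F.interpolate for the bicubic downscaling is our choice of implementation:

    import torch
    import torch.nn.functional as F

    def total_loss(cjpg, lr, hr, rec_model, sr_model, scale=4):
        lr_rec = rec_model(cjpg)
        sr = sr_model(lr_rec)
        l1 = F.l1_loss(lr_rec, lr)   # stage I: recovery loss
        l2 = F.l1_loss(sr, hr)       # stage II: SR loss
        # Cycle loss: bicubic downscaling of the SR output back to LR size.
        sr_down = F.interpolate(sr, scale_factor=1 / scale, mode="bicubic",
                                align_corners=False)
        l3 = F.l1_loss(sr_down, lr)
        return l1 + l2 + l3          # equal weighting of the three terms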
In this section, we first describe the implementation details of our model. Then, we analyze the effects of all the proposed strategies through an ablation study. Finally, we compare our method with other leading SR models to validate its advantage. Dataset and pre-processing. As mentioned in the modeling part, the training process is based on compressed LR (C-JPG), LR, and HR images. Following other leading SR methods, we choose DIV2K as our training dataset. There are 800 HR-LR training image pairs and 100 validation images. To get the C-JPG LR images, all LR images are first processed with the Pillow package in Python. In order to exhibit the contrast between different compression levels, we save JPG images at 20% and 50% quality respectively. Moreover, we pre-process the C-JPG images with MatLab2018, which contains the default deblurring model. All images (C-JPG LR and LR) are processed by a data augmentation strategy which randomly rotates images by 90°, 180°, and 270° and flips them horizontally. Testing is performed on standard benchmarks, including BSD100 and Urban100. Moreover, in order to prove the practical effectiveness, we apply our approach to JPG images downloaded from the Internet. For both of the two stages, we apply the Adam optimizer with β1 = 0.9 and β2 = 0.999. The learning rate is initialized as 1 × 10⁻⁴ and halved every 2 × 10² iterations of back-propagation. Our model is trained for over 1 × 10³ iterations until reaching convergence. We implement our model in PyTorch with a Titan 2080Ti GPU. Evaluation standards. For accuracy-oriented SR models, the most common evaluation standards are PSNR and SSIM. Aligning with these algorithms, we evaluate our SR results on the Y channel (i.e., luminance) of the transformed YCbCr space instead of the direct RGB channels. To clarify the effectiveness of the recovering stage in our model, we apply an ablation study. Two strategies are employed: first, the recovering stage is removed from the model; second, we use denoising with Matlab2018 to replace the recovering stage. The recovering stage plays the crucial role in our model. In the first part, we remove it from the model. As a result, C-JPG inputs directly go through stage II: SR generating. Correspondingly, we train the model with only the stage II L1 loss L_2. The remaining architecture is the same. As shown in Fig. 5, there are a lot of undesired artifacts compared with the full C-JPG model, and both the PSNR and the SSIM decrease by a large margin. In fact, it is hard for a single model to solve SR and C-JPG recovery simultaneously: the huge difference in the supervised information leads to large variance among the middle layers, which represent the ideal details serving the final SR model. More comparisons are shown in the Appendix. To further analyze the learning ability of our recovering stage, we replace it with the denoising code in Matlab2018. The C-JPG images are first processed into a set of clean intermediate inputs I_inter. Then, we train the SR model with (I_inter, I_HR) pairs. The corresponding results are shown in Fig. 5. They clearly illustrate that the pre-denoising operation can remove many artifacts, but it is still obviously worse than our model. In our opinion, more details should be recovered instead of just removing noise. Denoising only makes these C-JPG inputs clearer, while recovering brings accurate information back to the C-JPG images. We evaluate on benchmark datasets, including BSD100 and Urban100. Because our research targets SR generation from compressed JPG images, we first process the images of the mentioned datasets into 20%-quality JPG ones. Then, we compare our model with state-of-the-art SR methods: EDSR, DBPN, and RCAN. We present all comparisons in Figure 5 and the accompanying tables. In this paper, we propose a lossless SISR model for low-quality C-JPG images, which are widely used on the Internet. Based on our redefined C-JPG SR pipeline, two functional stages are integrated to fulfill the SR task on C-JPG images. In addition, we employ a cycle loss to guarantee consistency across the two stages. The intensive experiments demonstrate that our model can learn capable representations of LR inputs for the C-JPG SR task and outperforms other cutting-edge SISR methods. More exploration of other CV tasks with C-JPG image inputs is left as future work.
We solve the specific SR issue of low-quality JPG images by functional sub-models.
1,463
scitldr
Keyword spotting—or wakeword detection—is an essential feature for hands-free operation of modern voice-controlled devices. With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword. In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection. The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording. Our method combines the generalization and interpretability of CTC-based keyword spotting with the user-adaptation and convenience of a conventional query-by-example system. DONUT has low computational requirements and is well-suited for both learning and inference on embedded systems without requiring private user data to be uploaded to the cloud. A more natural method for custom wakeword detection is "query-by-example" keyword spotting. In a query-by-example system, the user teaches the system the desired wakeword by recording a few training examples, and the keyword spotter uses some form of template matching to compare incoming audios with these training examples to detect the wakeword. In dynamic time warping (DTW)-based keyword spotting, for example, a variable-length sequence of feature vectors, such as Mel-filterbank cepstral coefficients (MFCCs) BID6 or phoneme posteriors [8; 9; 10], is extracted from the query audio and test audio, and the DTW alignment score between query and test is used as the detection score. Other template-matching approaches compare fixed-length feature vectors, such as the final hidden states of a pre-trained recurrent neural network (RNN) BID10 or the output of a Siamese network [12; 13], using the cosine distance. Systems that use template matching are difficult to interpret, and therefore difficult to debug and optimize. For instance, it is hard to say why a keyword is incorrectly detected or not detected in a system based on dynamic time warping (DTW) simply by inspecting the DTW matrix. Likewise, the hidden states of RNNs can sometimes be interpreted (c.f. BID13, BID14), but this is currently only possible with some luck and ingenuity. In contrast, a CTC-based model is easy to interpret. The wakeword model itself is interpretable: it consists simply of a human-readable string, like "ALEXA" or "AH L EH K S AH", rather than a vector of real numbers. Inference is interpretable because the neural network outputs are peaky and sparse (the "blank" symbol has probability ≈1 at almost all timesteps), so it is easy to determine what the network "hears" for any given audio and whether it hears the labels of the wakeword BID15. This is a useful property because it enables the system designer to take corrective action. For instance, one might identify that a particular label is not well-recognized and augment the training data with examples of this label. In this paper, we propose a new method for custom wakeword detection that combines the convenience and speaker-adaptive quality of query-by-example methods with the generalization power and interpretability of CTC-based keyword spotting. We call our method "DONUT", since detection requires O(N U T) operations given the neural network output, where N, U, and T are small numbers defined later in the paper. 
The method works as follows: the user records a few training examples of the keyword, and a beam search is used to estimate the labels of the keyword. The algorithm maintains an N-best list of label sequence hypotheses to minimize the error that may be incurred by incorrectly estimating the labels. At inference time, each hypothesis is scored using the forward algorithm, and the hypothesis scores are aggregated to obtain a single detection score. In the rest of the paper, we describe the proposed method and show that it achieves good performance compared with other query-by-example methods, yet generates easily interpretable models and matches the user's pronunciation better than when the label sequence is supplied through text. This section describes the model, learning, and inference for DONUT (see the system overview figure), as well as the memory, storage, and computational requirements. The proposed method uses a model composed of a wakeword model and a label model. Here we give more detail on these two components. We model the user's chosen wakeword as a sequence of labels y = {y_u ∈ A | u = 1, ..., U}, where A is the set of possible labels and U is the length of the sequence. The labels could be phonemes, graphemes, or other linguistic subunits; in this work, we use phonemes. It is generally not possible to perfectly estimate y from only a few training examples. Therefore, we maintain multiple hypotheses as to what the true sequence might be, along with a confidence for each hypothesis, and make use of all of these hypotheses during inference. A trained wakeword model thus consists of a set of label sequences and confidences. The label model φ is a neural network trained using CTC on a speech corpus where each audio has a transcript of labels from the label set A. The network accepts an audio in the form of a sequence of acoustic feature vectors x = {x_t ∈ R^d | t = 1, ..., T}, where d is the number of features per frame and T is the number of frames. The network outputs a posteriorgram π = f_φ(x) = {π_t ∈ R^(1+|A|) | t = 1, ..., T} representing the posterior probabilities of each of the labels and the CTC "blank" symbol at each timestep. Algorithm 1 describes the learning phase. The user records three examples of the wakephrase, here denoted by x^(train,1), x^(train,2), and x^(train,3). Once the user has recorded the audios, the label posteriors π^(train,i) for each audio are computed using the label model φ. The CTCBeamSearch function then runs a beam search of width B over the label posteriors and returns a list of B probable label sequences and their corresponding log probabilities. More details on the beam search algorithm for CTC models can be found in BID16. The top N hypotheses ŷ^(train,i)_(1,...,N) are kept, and their log probabilities are converted to "confidences", which are also stored. Since not every hypothesis is equally good, the confidences can be used to weight the hypotheses during inference. We use an "acoustic-only" approach, in the sense that we do not use any sort of language model or pronunciation dictionary to prune the N-best list.
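A minimal sketch of this learning phase is given below. It assumes a helper ctc_beam_search that returns label sequences and log probabilities sorted best-first, such as one provided by a CTC decoding library; the helper name and the direct use of log probabilities as confidences are illustrative assumptions.

    def learn_wakeword(label_model, train_audios, B=100, N=10):
        wake_model = []
        for x in train_audios:                  # typically three user recordings
            pi = label_model(x)                 # posteriorgram over labels + blank
            seqs, log_probs = ctc_beam_search(pi, beam_width=B)  # assumed helper
            for y, lp in zip(seqs[:N], log_probs[:N]):
                # Store each hypothesis with a confidence derived from its log prob.
                wake_model.append((y, lp))
        return wake_model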
Algorithm 2 describes how the wakeword is detected after the wakeword model has been learned. A voice activity detector (VAD) is used to determine which frames contain speech audio; only these frames are sent to the label model. The VAD thus reduces power consumption by reducing the amount of computation performed by the label model. After the label posteriors are computed by the network, the log probability of each hypothesis in the wakeword model is computed. The CTCForward function returns the log probability of a hypothetical label sequence given the audio by efficiently summing over all possible alignments of the label sequence to the audio BID2. The log probabilities are weighted by their respective confidences before they are summed to obtain a score. If the score is above a certain pre-determined threshold, the wakeword is detected. For clarity, we have written Algorithm 2 as though the posteriors are only computed after a complete audio x^test has been acquired; it is preferable to reduce latency by computing the posteriors and updating the hidden states as each speech frame becomes available from the VAD. Likewise, the forward algorithm can ingest a slice of π^test at each timestep to compute that timestep's forward probabilities. In outline, the learning loop (Algorithm 1) computes beam, beam_scores := CTCBeamSearch(π^(train,i), B), keeps the top N hypotheses for j = 1 to N, and returns wake_model; the detection loop (Algorithm 2) accumulates score := score + log p_φ(ŷ|x^test) · w over the stored hypotheses and returns score.
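The scoring step can be sketched with PyTorch's built-in CTC loss, whose negation (with sum reduction and batch size 1) equals the CTC forward log-probability of a label sequence; the names and shapes below are illustrative:

    import torch
    import torch.nn.functional as F

    def detection_score(log_posteriors, wake_model, blank=0):
        # log_posteriors: (T, 1 + |A|) log-probabilities from the label model
        # for the speech frames selected by the VAD.
        # wake_model: list of (label_sequence, confidence) pairs.
        T = log_posteriors.size(0)
        x = log_posteriors.unsqueeze(1)   # (T, batch=1, classes)
        score = 0.0
        for labels, conf in wake_model:
            y = torch.tensor([labels], dtype=torch.long)
            logp = -F.ctc_loss(x, y,
                               input_lengths=torch.tensor([T]),
                               target_lengths=torch.tensor([len(labels)]),
                               blank=blank, reduction="sum")
            score += conf * logp.item()   # confidence-weighted forward log prob
        return score                      # compare against a fixed threshold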
We trained a unidirectional GRU network with 3 layers and 96 hidden units per layer (168k parameters) on LibriSpeech with CTC using the phoneme-level transcripts.

Wakeword datasets We created two wakeword datasets: one based on the 500-hour subset of LibriSpeech (LibriSpeech-Fewshot) and one based on crowdsourced English recordings (English-Fewshot). Both datasets are composed of a number of few-shot learning "episodes". Each episode contains support examples and test examples: the support set contains three examples of the target phrase spoken by a single speaker, and the test set contains a number of positive and negative examples. An example of an episode is shown in FIG2. The episodes are split into one subset for hyperparameter tuning and another subset for reporting performance.

To create the LibriSpeech-Fewshot dataset, we split the LibriSpeech recordings into short phrases between 500 ms and 1,500 ms long, containing between one and four words. These short phrases were selected and grouped together to form 6,047 episodes. The test set contains eight positive examples by the same speaker and 24 negative examples by random speakers. Of the negative examples, twenty are phonetically similar ("confusing") and four are phonetically dissimilar ("non-confusing"). To produce the confusing examples, we generated a phoneme-level transcript for each example, calculated the phoneme edit distance between the target phrase and all other available phrases, and chose the 20 phrases with the lowest phoneme edit distance. The non-confusing examples were chosen at random from the remaining phrases.

To create the English-Fewshot dataset, we used crowdsourcing to record speakers saying phrases consisting of "Hello" followed by another word: for example, "Hello Computer". Like the LibriSpeech-Fewshot dataset, this dataset has positive examples from the same speaker and negative examples from different speakers; however, here there are also negative examples from the same speaker, so as to show that the models are not simply performing speaker verification. Due to data-gathering constraints, we were unable to obtain "imposter" examples in which a different speaker says the target phrase, but we plan to explore this in the future.

All wakeword models used beam width B = 100 and kept N = 10 hypotheses per training example. We use the receiver operating characteristic (ROC) curve to measure the performance of a wakeword detector; a single detection threshold is used across all episodes. Two performance metrics are reported: the equal error rate (EER; lower is better) and the area under the ROC curve (AUC; higher is better). An EER of 0% or an AUC of 1 indicates a perfect classifier.

In the first experiment, we compare the performance of DONUT with two other query-by-example keyword spotting methods: dynamic time warping (DTW) based on the raw FBANK input and DTW based on the posteriorgram (the output of the label model). We used the ℓ2 norm to compare FBANK features, and we used the distance-like metric suggested in BID7 to compare posteriorgram features:

d(π_t, π′_t) = −log( ((1 − λ)π_t + λu) · ((1 − λ)π′_t + λu) ),

where λ is a small positive number (we used 1e−5) and u is a uniform distribution (a vector with entries equal to 1/(1 + |A|)), used to prevent log 0 by smoothing the peaky output distribution. We also tried removing the softmax, using the ℓ2 norm as the distance metric, and using a label model trained with the framewise cross-entropy loss instead of the CTC loss.
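For concreteness, here is a sketch of the posteriorgram distance as reconstructed above, together with a plain DTW alignment; the length normalization in dtw_score is our assumption, not a detail given in the text.

import numpy as np

def posteriorgram_distance(p, q, lam=1e-5):
    # Smooth each frame's label distribution toward uniform, then take
    # -log of the inner product (the metric as reconstructed above).
    u = np.full_like(p, 1.0 / p.shape[-1])
    return -np.log(np.dot((1 - lam) * p + lam * u, (1 - lam) * q + lam * u))

def dtw_score(frames_a, frames_b, dist=posteriorgram_distance):
    # Plain dynamic time warping over two frame sequences.
    Ta, Tb = len(frames_a), len(frames_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            c = dist(frames_a[i - 1], frames_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb] / (Ta + Tb)  # length-normalized alignment cost (assumed)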
None of these modifications improved performance; we report the best results, obtained with the CTC model, here.

TAB0 shows the performance of the query-by-example methods on English-Fewshot. We report the performance for three separate cases, in decreasing order of difficulty: the cases when the negative examples are 1) confusing and taken from the same speaker, 2) non-confusing and taken from the same speaker, and 3) non-confusing and taken from different speakers. DONUT outperforms both DTW methods in all three cases.

In a second experiment, we compare the performance of our method with that of conventional CTC keyword spotting when the "true" label sequence is provided (e.g., by the user through a text interface). The phoneme sequence for each phrase in the LibriSpeech-Fewshot dataset was obtained using forced alignment and used as the wakeword model for each episode. TAB1 shows that for phonetically confusing examples, DONUT outperforms the text-based approach, and for non-confusing examples, the two approaches perform roughly the same, with the text-based approach performing very slightly better. This indicates that not only does DONUT provide a more convenient interface than query-by-string keyword spotting, it also has the same or even better performance.

Like conventional CTC keyword spotting, DONUT is interpretable, which makes it easy for a system designer to identify problems with the model and improve it. For example, FIG2 shows an example of a wakeword model learned for the phrase "of dress". In the first two training examples, the network hears an "N" sound where one would expect the "V" phoneme in "of". This information can be used to improve the model: one could retrain the label model with more examples of short words such as "of" and "on", to help the model distinguish short sounds more easily. Alternatively, it could become apparent after listening to the training examples that, for the speaker's personal accent, the phrase does indeed contain an "N" sound.

Debugging the inference phase is also made easier by the use of CTC. It is possible to decode phoneme sequences from the test audio using a beam search, although this is not necessary during inference. One could inspect the decoded sequences from an audio that causes a false accept to identify hypotheses that should be removed from the model to make the false accept less likely to occur. If a false reject occurs, one could check whether the wakeword model hypotheses are found in the decoded sequences or if the network hears something completely different.

DONUT has a few hyperparameters: the beam width B, the number of hypotheses N kept from the beam search, the label model φ, and the way in which the hypothesis scores are aggregated. Here we explore the impact of these hyperparameters on performance using the English-Fewshot dataset. Increasing the number of hypotheses generally improves performance (Table 3), though we have found that this may yield diminishing returns. Even a simple greedy search (B = 1, N = 1), which can be implemented by picking the top output at each timestep, works fairly well for our system. With respect to the choice of label model, we find that label models with a lower phoneme error rate (edit distance between the true label sequence and the model's prediction) on the original corpus they were trained on also have a lower error rate for wakeword detection (TAB2).
This suggests that making an improvement to the label model can be expected to translate directly to a decrease in EER and an increase in AUC.

In the inference algorithm described above (Algorithm 2), the hypotheses' scores are aggregated by taking a weighted sum, where each weight is the inverse of the log probability of that hypothesis given its corresponding training example. Without the weighting, performance was hurt because some hypotheses are a worse fit to the data than others. A more principled approach to aggregating the scores might be to treat the hypotheses' log probabilities from training as log priors and add them to the scores, since multiplying by a prior is equivalent to adding a log prior, and to take the logsumexp of the scores plus their log priors, since adding two probabilities is equivalent to taking the logsumexp of two log probabilities. However, we have found that this does not work as well as the weighted-sum approach, perhaps because the logsumexp function acts like max and tends to pick out a single hypothesis instead of smoothly blending the hypotheses.

In this paper, we proposed DONUT, an efficient algorithm for online query-by-example keyword spotting using CTC. The algorithm learns a list of hypothetical label sequences from the user's speech during enrollment and uses these hypotheses to score audios at test time. We showed that the model is interpretable, and thus easy to inspect, debug, and tweak, yet at the same time has high accuracy. Because training a wakeword model amounts to a simple beam search, it is possible to train a model on the user's device without uploading a user's private voice data to the cloud. Our technique is in principle applicable to any domain in which a user would like to teach a system to recognize a sequence of events, such as a melody (a sequence of musical notes) or a gesture (a sequence of hand movements). It would be interesting to see how well the proposed technique transfers to these other domains.
We propose an interpretable model for detecting user-chosen wakewords that learns from the user's examples.
To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed. One way of constructing such representations is by focusing on the important events in a sequence. In this paper, we propose a model that learns both to discover such key events (or keyframes) and to represent the sequence in terms of them. We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes. We propose a fully differentiable formulation for efficiently learning the keyframe placement. We show that KeyIn finds informative keyframes in several datasets with diverse dynamics. When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.

When thinking about the future, humans focus their thoughts on the important things that may happen (When will the plane depart?) without fretting about the minor details that fill each intervening moment (What is the last word I will say to the taxi driver?). Because the vast majority of elements in a temporal sequence contain redundant information, a temporal abstraction can make reasoning and planning both easier and more efficient. How can we build such an abstraction? Consider the example of a lead animator who wants to show what happens in the next scene of a cartoon. Before worrying about every low-level detail, the animator first sketches out the story by keyframing, drawing the moments in time when the important events occur. The scene can then be easily finished by other animators who fill in the rest of the sequence from the story laid out by the keyframes. In this paper, we argue that learning to discover such informative keyframes from raw sequences is an efficient and powerful way to learn to reason about the future.

Our goal is to learn such an abstraction for future image prediction. In contrast, much of the work on future image prediction has focused on frame-by-frame synthesis. This strategy puts an equal emphasis on each frame, irrespective of the redundant content it may contain or its usefulness for reasoning relative to the other predicted frames. Other recent work has considered predictions that "jump" more than one step into the future, but these approaches either used fixed-offset jumps or used heuristics to select the predicted frames. In this work, we propose a method that selects the keyframes that are most informative about the full sequence, so as to allow us to reason about the sequence holistically while only using a small subset of the frames.

One possible application for a model that discovers informative keyframes is in long-horizon planning. Recently, predictive models have been employed for model-based planning and control. However, they reason about every single future time step, limiting their applicability to short-horizon tasks. In contrast, we show that a model that reasons about the future using a small set of informative keyframes enables visual predictive planning for horizons much greater than previously possible by using keyframes as subgoals in a hierarchical planning framework.

Figure 1: Keyframing the future.
Instead of predicting one frame after the other, we propose to represent the sequence with the keyframes that depict the interesting moments of the sequence. The remaining frames can be inpainted given the keyframes.

To discover informative frames in raw sequence data, we formulate a hierarchical probabilistic model in which a sequence is represented by a subset of its frames (see Fig. 1). In this two-stage model, a keyframing module represents the keyframes as well as their temporal placement with stochastic latent variables. The images that occur at the timepoints between keyframes are then inferred by an inpainting module. We parametrize this model with a neural network and formulate a variational lower bound on the sequence log-likelihood. Optimizing the resulting objective leads to a model that discovers informative future keyframes that can be easily inpainted to predict the full future sequence.

Our contributions are as follows. We formulate a hierarchical approach for the discovery of informative keyframes using joint keyframing and inpainting (KEYIN). We propose a soft objective that allows us to train the model in a fully differentiable way. We first analyze our model on a simple dataset with stochastic dynamics in a controlled setting and show that it can reliably recover the underlying keyframe structure on visual data. We then show that our model discovers hierarchical temporal structure on more complex datasets of demonstrations: an egocentric gridworld environment and a simulated robotic pushing dataset, which is challenging for current approaches to visual planning. We demonstrate that the hierarchy discovered by KEYIN is useful for planning, and that the resulting approach outperforms other proposed hierarchical and non-hierarchical planning schemes on the pushing task. Specifically, we show that keyframes predicted by KEYIN can serve as useful subgoals that can be reached by a low-level planner, enabling long-horizon, hierarchical control.

Hierarchical temporal structure. Hierarchical neural models for efficiently modeling sequences have been proposed and further extended to predict with an adaptive step size so as to leverage the natural hierarchical structure in language data (Kádár et al., 2018). However, these models rely on autoregressive techniques for text generation, and applying them to structured data such as videos might be impractical. The video processing community has used keyframe representations as early as 1991 in the MPEG codec, and this algorithm was later adapted in the context of neural compression; however, these approaches use constant offsets between keyframes and thus do not fully reflect the temporal structure of the data. Recently, several neural methods were proposed to leverage such temporal structure, including models that find and predict the least uncertain "bottleneck" frames and representations that can be used to predict any number of frames into the future. In contrast, we propose an approach for hierarchical video representation that discovers the keyframes that best describe a certain sequence. In parallel to our work, a related method for video segmentation via generative modeling was proposed; that work focuses on using the discovered task boundaries for training hierarchical RL agents, while we show that our model can be used to perform efficient hierarchical planning by representing the sequence with only a small set of keyframes. Also concurrently, a similar method to KEYIN for learning temporal abstractions was proposed; while that work
focuses on learning hierarchical state-space models, we propose a model that operates directly in the observation space and performs joint keyframing and inpainting.

Video modeling. Early approaches to probabilistic video modeling include autoregressive models that factorize the distribution by considering pixels sequentially. To reason about the images in the video holistically, latent variable approaches were developed based on variational inference, including large-scale models; a recently proposed approach uses exact inference based on normalizing flows. We build on existing video modeling approaches and show how they can be used to learn temporal abstractions with a novel keyframe-based generative model.

Visual planning and model predictive control. We build on recent work that explored applications of learned visual predictive models to planning and control. Several groups have proposed models that predict the consequences of actions taken by an agent given its control output, and recent work has shown that visual model predictive control based on such models can be applied to a variety of different settings. In this work, we show that the hierarchical representation of a sequence in terms of keyframes improves performance in the hierarchical planning setting.

Figure 2: A probabilistic model for jointly keyframing and inpainting a future sequence. First, a sequence of keyframes K_{1:N} is generated, as well as corresponding temporal indices τ_{1:N}, defining the structure of the underlying sequence. In the second stage, for each pair of keyframes K_n and K_{n+1}, the frames I_{τ_n:τ_{n+1}−1} are inpainted.

Our goal is to develop a model that generates sequences by first predicting key observations and the time steps when they occur and then filling in the remaining observations in between. To achieve this goal, in the following we (i) define a probabilistic model for joint keyframing and inpainting, and (ii) show how a maximum likelihood objective leads to the discovery of keyframe structure.

We first describe a probabilistic model for joint keyframing and inpainting of a sequence I_{1:T}. The model consists of two parts: the keyframe predictor and the sequence inpainter (see Fig. 2). The keyframe predictor takes in C conditioning frames I_co and produces N keyframes K_{1:N} as well as the corresponding time indices τ_{1:N}:

K_{1:N}, τ_{1:N} ∼ p(K_{1:N}, τ_{1:N} | I_co).    (1)

From each pair of keyframes, the sequence inpainter generates the sequence of frames in between:

I_{τ_n:τ_{n+1}−1} ∼ p(I_{τ_n:τ_{n+1}−1} | K_n, K_{n+1}, τ_{n+1} − τ_n),    (2)

which completes the generation of the full sequence. The inpainter additionally observes the number of frames it needs to generate, τ_{n+1} − τ_n.

The temporal spacing of the most informative keyframes is data-dependent: shorter keyframe intervals might be required in cases of rapidly fluctuating motion, while longer intervals can be sufficient for steadier motion. Our model handles this by predicting the keyframe indices τ and inpainting τ_{n+1} − τ_n frames between each pair of keyframes. We parametrize the prediction of τ_n in relative terms by predicting offsets δ_n: τ_n = τ_{n−1} + δ_n. To produce a complex multimodal distribution over K, we use a per-keyframe latent variable z with prior distribution p(z) and approximate posterior q(z | I, I_co). We construct a variational lower bound on the likelihood of both I and K as follows:

ln p(I, K | I_co) ≥ E_{q(z|I,I_co)} [ ln p(I | K, τ) + ln p(K, τ | z, I_co) ] − KL( q(z | I, I_co) ‖ p(z) ).    (3)

In practice, we use a weight β on the KL-divergence term, as is common in amortized variational inference.
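A sketch of how the bound in Eq. 3 might be computed for a batch, assuming the two reconstruction log-likelihoods have already been evaluated (Gaussian or BCE terms, per the implementation details later in the paper) and that q is a diagonal Gaussian with a unit-Gaussian prior; the β weight anticipates the practice mentioned above.

import torch

def keyin_elbo(log_lik_seq, log_lik_key, q_mu, q_logvar, beta=1.0):
    # Eq. 3: reconstruction terms minus a (beta-weighted) KL between the
    # diagonal-Gaussian posterior q(z | I, I_co) and the unit-Gaussian prior.
    # q_mu, q_logvar: [B, N, Z], one latent per keyframe.
    kl = 0.5 * (q_mu.pow(2) + q_logvar.exp() - q_logvar - 1.0).sum(dim=-1)
    return log_lik_seq + log_lik_key - beta * kl.sum(dim=-1)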
If a simple model is used for inpainting, most of the representational power of the model has to come from the keyframe predictor. We use a relatively powerful latent variable model for the keyframe predictor and a simpler Gaussian distribution produced with a neural network for inpainting. Because of this structure, the keyframe predictor has to predict keyframes that describe the underlying sequence well enough to allow the simpler inpainting process to maximize the likelihood. We will show that pairing a more flexible keyframe predictor with a simpler inpainter allows our model to discover semantically meaningful keyframes in video data.

Figure 3: For each predicted keyframe K̂_n we compute a target image K̃_n as the sum of the ground truth images weighted with the corresponding distribution over the index τ_n. Finally, we compute the reconstruction loss between the estimated image K̂_n and the soft target K̃_n.

Our model can dynamically predict the keyframe placement τ_n. However, learning a distribution over the discrete variable τ_n is challenging due to the expensive evaluation of the expectation over p(τ_n | z_{1:n}, I_co) in the objective in Eq. 3. To evaluate this term efficiently and in a differentiable manner while still learning the keyframe placement, we propose a continuous relaxation of the objective. The placement distribution τ_n defines a probability for each predicted frame to match a certain frame in the ground truth sequence. Instead of sampling from this distribution to pick a target frame, we produce a soft target for each predicted frame by computing the expected target frame, i.e., the weighted sum of all frames in the true sequence, each multiplied with the probability of matching the predicted frame. When the entropy of τ_n converges to zero, the continuous relaxation is equivalent to the original, discrete objective.

Keyframe targets. To produce a keyframe target K̃_n, we linearly interpolate between the ground truth images according to the predicted distribution over the keyframe's temporal placement τ_n: K̃_n = Σ_t τ_{n,t} I_t, where τ_{n,t} is the probability that the n-th keyframe occurs at timestep t. This process is depicted in Fig. 3.

Figure 4: Sequences generated by KEYIN and a method with a constant temporal keyframe offset (Jumpy) on Brownian Motion data. Generation is conditioned on the first five frames; the first half of the sequence is shown. Movement direction changes are marked red in the ground truth sequence and predicted keyframes are marked blue. We see that KEYIN can correctly reconstruct the motion as it selects an informative set of keyframes. The sequence generated by the Jumpy method does not reproduce the direction changes since they cannot be inferred from the selected keyframes.

We parametrize temporal placement prediction in terms of offsets δ with a maximum offset of J, so the maximum possible length of the predicted sequence is NJ. It is desirable for J to be large enough to capture the distribution of keyframes in the data, but this may lead to the generation of sequences longer than the target, NJ > T. To correctly compute the value of the relaxed objective in this case, we discard predicted frames at times > T and normalize the placement probability output by the network so that it sums to one over the first T steps. Specifically, for each keyframe we compute the probability of falling within the predicted sequence as c_n = Σ_{t≤T} τ_{n,t}. The losses corresponding to the reconstruction terms of Eq. 3 then become soft reconstruction losses, with each keyframe's term weighted by its in-range probability c_n.
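The keyframe-target computation and the in-range probabilities c_n can be sketched in a few lines (PyTorch-style; the tensor shapes are our assumptions):

import torch

def soft_keyframe_targets(frames, tau):
    # K~_n = sum_t tau[n, t] * I_t, as in the relaxation above.
    # frames: [T, C, H, W]; tau: [N, T], already truncated/normalized to the
    # first T steps as described in the text.
    targets = torch.einsum('nt,tchw->nchw', tau, frames)
    c = tau.sum(dim=1)  # c_n = sum_{t<=T} tau_{n,t}, used to weight the loss
    return targets, c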
Inpainting targets. To complete our relaxed objective, for each ground truth frame we produce a target image composed from the inpainted frames. We note that since the offsets δ have a maximum range of J and in general place non-zero probability on each timestep, the inpainting network needs to produce J frames Î^n_{1:J} between each pair of keyframes K_n, K_{n+1}. As in the previous section, the targets for the ground truth images are given as an interpolation between generated images weighted by the probability of the predicted frame Î^n_j being matched to ground truth frame I_t:

Ĩ_t = ( Σ_{n,j} m^n_{j,t} Î^n_j ) / ( Σ_{n,j} m^n_{j,t} ).

Here, m^n_{j,t} is the probability that the j-th predicted image in segment n has an offset of t from the beginning of the predicted sequence, which can be computed from τ_n; to obtain a probability distribution over produced frames, we normalize the result with Σ_{n,j} m^n_{j,t}. The full loss for our model is the sum of these soft keyframe and sequence reconstruction losses and the β-weighted KL-divergence term from Eq. 3.

We show how to instantiate KEYIN with deep neural networks and train it on high-dimensional observations, such as images, and further describe an effective training procedure for KEYIN. We use a common encoder-recurrent-decoder architecture. Video frames are first processed with a convolutional encoder module to produce image embeddings ι_t = CNN_enc(I_t), and inferred frame embeddings ι̂_t are decoded with a convolutional decoder, Î_t = CNN_dec(ι̂_t). The keyframe predictor p(K_{1:N}, τ_{1:N} | z_{1:N}, I_co) is parametrized with a Long Short-Term Memory (LSTM) network. To condition the keyframe predictor on past frames, we initialize its state with the final state of another LSTM that processes the conditioning frames. Similarly, we parametrize the sequence inpainter p(I_{τ_n:τ_{n+1}} | K_n, K_{n+1}, τ_{n+1} − τ_n) with an LSTM. We condition the inpainting on both keyframe embeddings, κ̂_{n−1} and κ̂_n, as well as the temporal offset between the two, δ_n, by passing these inputs through a multi-layer perceptron that produces the initial state of the inpainting LSTM.

Figure 5: The generation is conditioned on a single ground truth frame; twelve of the 30 predicted frames are shown. We observe that for each transition between pushes and each action of the Gridworld agent our network predicts a keyframe either exactly at the timestep of the event or one timestep apart. Note that although the agent position is randomized, objects not visible in the first image can be predicted in Gridworld because the maze is fixed across episodes.

We use a Gaussian distribution with identity variance as the output distribution for both the keyframe predictor and the inpainting model, and a multinomial distribution for δ_n. We parametrize the inference network q(z_{1:N} | I_{−C+1:T}) with an LSTM with attention over the entire input sequence; the inference distribution is a diagonal-covariance Gaussian, and the prior p(z_{1:N}) is a unit Gaussian. Further details of the inference procedure are given in Sec. B and Fig. 8 of the Appendix.

We train our model in two stages. First, we train the sequence inpainter to inpaint between ground truth frames sampled with random offsets, thus learning interpolation strategies for a variety of different inputs. In the second stage, we train the keyframe predictor using the loss from Eq. 5 by feeding the predicted keyframe embeddings to the inpainter. In this stage, the weights of the inpainter are frozen and are only used to backpropagate errors to the rest of the model. We found that this simple two-stage procedure improves optimization of the model. We use L1 reconstruction losses to train the keyframe predictor.
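Analogously to the keyframe targets, the inpainting targets Ĩ_t defined earlier in this section can be sketched as follows; m is assumed to be precomputed from τ as described above.

import torch

def inpainting_targets(pred_frames, m):
    # I~_t = sum_{n,j} m[n, j, t] * Ihat[n, j] / sum_{n,j} m[n, j, t].
    # pred_frames: [N, J, C, H, W]; m: [N, J, T] matching probabilities.
    num = torch.einsum('njt,njchw->tchw', m, pred_frames)
    den = m.sum(dim=(0, 1)).clamp(min=1e-8)  # per-timestep normalizer
    return num / den.view(-1, 1, 1, 1)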
We found that these L1 losses, together with an additional reconstruction loss on the predicted keyframe embeddings (weighted with a factor β_κ), improved the ability of the model to produce informative keyframes. Target embeddings are computed using the same soft relaxation used for the target keyframes. More details of the loss computation are given in Sec. E and Algorithm 1 of the Appendix.

We evaluate the quality of KEYIN's representation for future sequences by addressing the following questions: (i) Can it discover and predict informative keyframes? (ii) Can it model complex data distributions? (iii) Is the discovered hierarchy useful for long-horizon hierarchical planning?

Datasets. We evaluate our model on three datasets containing structured long-term behavior. The Structured Brownian motion (SBM) dataset consists of binary image sequences of size 32 × 32 pixels in which a ball randomly changes direction after periods of straight movement of six to eight frames. The Gridworld dataset consists of 20k sequences of an agent traversing a maze with different objects. The agent sequentially navigates to objects and interacts with them following a task sketch. We use the same maze for all episodes and randomize the initial position of the agent and the task sketch. We use 64 × 64 pixel image observations and further increase visual complexity by constraining the field of view to a 5 × 5-cell egocentric window. The Pushing dataset consists of 50k sequences of a robot arm pushing a puck towards a goal on the opposite side of a wall. Each sequence consists of six consecutive pushes, and we vary the start and target position of the puck as well as the placement of the wall. The demonstrations were generated with the MuJoCo simulator at a resolution of 64 × 64 pixels. For more details on the data generation process, see Sec. D of the Appendix. Further details about the experimental setup are given in Sec. C of the Appendix.

6.1 KEYFRAME DISCOVERY
To evaluate KEYIN's ability to discover keyframes, we train KEYIN on all three datasets with N = 6, which can be interpreted as selecting the N most informative frames from a sequence. We show qualitative examples of keyframe discovery for the SBM dataset in Fig. 4 and for the Gridworld and Pushing datasets in Fig. 5. On all datasets the model discovers meaningful keyframes which mark direction changes of the ball, transitions between pushes, or interactions with objects, adapting its keyframe prediction patterns to the data. Consequently, the inpainter network is able to produce frames of high visual quality. Misplaced keyframes yield blurry interpolations, as can be seen for the jumpy prediction in Fig. 4. This suggests that the keyframes found by KEYIN describe the overall sequences better.

Figure 6: Ground truth and generated sequences. We see that our model covers both modes of the distribution, producing trajectories that go both to the right and to the left of the obstacle.

To show that KEYIN discovers informative keyframes, we compare its keyframe predictions against an alternative approach that measures the surprise associated with observing a frame given the previous frames. This approach selects keyframes as the N frames with the largest peaks in "surprise", as measured by the KL-divergence D_KL[q(z_t | I_{1:t}) ‖ p(z_t)] between the prior and the posterior of a stochastic predictor (see Sec. F and Algorithm 2 of the Appendix for details). We provide comparisons to alternative formulations of surprise in Appendix Sec. F, Tab. 3.
For quantitative analysis, we define approximate ground truth keyframes to be the points of direction change for the SBM dataset, the moments when the robot lifts its arm to transition between pushes, or when the agent interacts with objects in the gridworld. We report F1 scores that capture both the precision and recall of keyframe discovery. We additionally compare to random keyframe placement, and to a learned but static baseline that is the same for all sequences. The evaluation in Tab. 1 shows that KEYIN discovers better keyframes than the alternative methods. The difference is especially large on the more complex Pushing and Gridworld datasets. The surprise-based method does not reason about which frames are most helpful to reconstruct the entire trajectory and thus is unable to discover the correct structure on the more complex datasets. In addition to the F1 scores, we report the temporal distance between predicted and annotated keyframes in Appendix Tab. 4, also indicating that KEYIN is better able to discover the temporal structure in both datasets.

Even though the focus of this work is on discovering temporal structure via keyframing and not on improving video prediction quality, we verify that KEYIN can represent complex data distributions in terms of discovered keyframes and attains high diversity and visual quality. We show sample generations from our model on the Pushing and Gridworld datasets on the supplementary website. We see that KEYIN is able to faithfully model complex distributions of video sequences. We further visualize multiple sampled Pushing sequences from our model conditioned on the same start position in Fig. 6, showing that KEYIN is able to cover both modes of the demonstration distribution. We further show that KEYIN compares favorably to prior work on video prediction metrics for sequence modeling in Tab. 5 of the Appendix, and outperforms prior approaches in terms of keyframe modeling in Appendix Tab. 6.

6.3 ROBUSTNESS OF KEYFRAME DETECTION
In the previous sections, we showed that when the sequence can indeed be summarized with N keyframes, KEYIN predicts the keyframes that correspond to our notion of salient frames. However, what happens if we train KEYIN to select a larger or a smaller number of keyframes? To evaluate this, we measure KEYIN's recall with extra keyframes and its precision with fewer available keyframes. We note that high precision is unachievable in the first case and high recall is unachievable in the second case, since these problems are misspecified; as these numbers are not informative, we do not report them. In Tab. 2, we see that KEYIN is able to find informative keyframes even when N does not exactly match the structure of the data. We further qualitatively show that KEYIN selects a superset or a subset of the original keyframes, respectively, in Sec. G. This underlines that our method's ability to discover keyframe structure is robust to the choice of the number of predicted keyframes.

As a first step towards analyzing the robustness of KEYIN under more realistic conditions, we report keyframe discovery when trained and tested on sequences with additive Gaussian noise, a noise characteristic commonly found in real-world camera sensors. We find that KEYIN is still able to discover the temporal structure on both the Pushing and the Gridworld dataset. For qualitative and quantitative results, see Appendix Fig. 11 and Tab. 7. We have seen that KEYIN can find frames that correspond to an intuitive notion of keyframes.
This demonstrates that the keyframes discovered by KEYIN do indeed capture an abstraction that compactly describes the sequence. In light of this, we hypothesize that an informative set of keyframes contains sufficient information about a sequence to effectively follow the trajectory it shows. To test this, we use the inferred keyframes as subgoals for hierarchical planning in the pushing environment. During task execution, we first plan a sequence of keyframes that reaches the target using our learned keyframe predictor. Specifically, we generate keyframe trajectories from our model by sampling latent variables z from the prior and using them to roll out the keyframe prediction model. We optimize for a sequence of latent variables z that results in a keyframe trajectory that reaches the goal, using the Cross-Entropy Method (CEM). We then execute the plan by using the keyframes as subgoals for a low-level planner. This planner reaches each subgoal via model predictive control using ground truth dynamics, again employing CEM for optimization of the action trajectory. This planning procedure is illustrated in Fig. 7 (left). For more details, see Sec. I and Algs. 3 and 4 of the Appendix.

We find that KEYIN is able to plan coherent subgoal paths towards the final goal that often lead to successful task execution (executions are shown on the supplementary website). To quantitatively evaluate the discovered keyframes, we compare to alternative subgoal selection schemes: a fixed time offset (Jumpy), a method that determines points of peak surprise (Surprise, see Sec. 6.1), and a bottleneck-based subgoal predictor (time-agnostic prediction, or TAP). We additionally compare to an approach that plans directly towards the final goal using the low-level planner (Flat). We evaluate all methods with the shortest path between the target and the actual position of the object after the plan is executed. All compared methods use the same low-level planner, as we only want to measure the quality of the predicted subgoals.

As shown in Fig. 7 (right), our method outperforms all prior approaches. TAP shows only a moderate increase in performance over the Flat planner, which we attribute to the fact that it fails to predict good subgoals and often simply predicts the final image as the bottleneck. This is likely due to the relatively large stochasticity of our dataset and the absence of the clear bottlenecks that TAP is designed to find. Our method also outperforms the planners that use Jumpy and Surprise subgoals. This further confirms that KEYIN is able to produce keyframes that are informative about the underlying trajectory, such that planning toward these keyframes makes it easier to follow the trajectory.

We presented KEYIN, a method for representing a sequence by its informative keyframes by jointly keyframing and inpainting. KEYIN first generates the keyframes of a sequence and their temporal placement and then produces the full sequence by inpainting between keyframes. We showed that KEYIN discovers informative keyframes on several datasets with stochastic dynamics. Furthermore, by using the keyframes for planning, we showed our method outperforms several other hierarchical planning schemes. Our method opens several avenues for future work. First, an improved training procedure that allows end-to-end training is desirable. Second, more powerful hierarchical planning approaches can be designed using the keyframe representation to scale to long-term real-world tasks.
Finally, the proposed keyframing method can be applied to a variety of applications, including video summarization, video understanding, and multi-stage hierarchical video prediction.

We include video results on the supplementary website at https://sites.google.com/view/keyin. The website includes inference samples, prior samples, and executed trajectories for all methods.

We found simple attention over LSTM outputs to be an effective inference procedure. Our approximate inference network LSTM_inf outputs (κ^inf_t, ζ_t)_{t≤T}, where κ^inf is an embedding used to compute an attention weight and the ζ_t are values to be attended over. We compute the posterior distribution over z_n using a key-value attention mechanism: attention weights a_{n,t} are computed from the key embeddings κ^inf_t with a distance metric d, which is the standard inner product, and the posterior parameters are the attention-weighted averages µ_n, σ_n = (Σ_t a_{n,t} ζ_t) / (Σ_t a_{n,t}). The architecture used for keyframe inference, including the attention mechanism, is depicted in Supplemental Fig. 8.

We set the prediction horizon to T = 30 frames and predict N = 6 segments, with J = 10 frames each for the SBM dataset and 6 frames each for the Pushing dataset. We pre-train the interpolator on segments of two to eight frames for Structured Brownian motion data, and two to six frames for Pushing data. The weight on the KL-divergence term for the interpolator VAE is 1e−3. For training the keyframe predictor, we set β_K = 0, β_κ = 1, and β = 5e−2; the hyperparameters were hand-tuned. We activate the generated images with a sigmoid and use BCE losses on each color channel to avoid saturation. The convolutional encoder and decoder both have three layers for the Structured Brownian motion dataset and four layers for the Pushing dataset. We use a simple two-layer LSTM with a 256-dimensional state in each layer for all recurrent modules; each LSTM has a linear projection layer before and after it that projects the observations to and from the correct dimensions. We use the Adam optimizer with β_1 = 0.9 and β_2 = 0.999, a batch size of 30, and a learning rate of 2e−4. For more details please refer to the appendix. Each network was trained on a single high-end NVIDIA GPU; we trained the interpolator for 100K iterations and the keyframe predictor for 200K iterations, and the total training time was about a day.

In the Pushing environment, we use a held-out test set of 120 sequences. The Structured Brownian Motion dataset is generated automatically and is potentially infinite; we used 1000 testing samples generated using a different random seed.

The data collection for our pushing dataset was performed in an environment simulated in MuJoCo. In the environment, a robot arm initialized at the center of the table pushes an object to a goal position located on the other side of a wall-shaped obstacle. The demonstrations followed a rule-based algorithm that first samples subgoals between the initial position of the object and the goal and then runs a deterministic pushing procedure to the subgoals in order. The ground truth keyframes of the demonstrations were defined by the frames at which subgoals were completed. We subsampled the demonstration videos by a factor of two when saving them to the dataset, dropping every other frame in the trajectory and averaging the actions of every two consecutive frames. For all datasets generated for this environment following a rule-based algorithm, we only kept successful demonstrations and dropped the ones that failed to push the object to the goal position within a predefined horizon.
We describe the details of the continuous relaxation loss computation in Algorithm 1. Note that we implement the computation of the cumulative distributions τ efficiently as a convolution, which allows us to vectorize much of the computation (sketched in code at the end of this section). The computational complexity of the proposed implementation scales linearly with the number of keyframes N, the number of allowed frames per segment J, and the number of ground truth frames T. The final complexity is O(NTJ), which we find in practice to be negligible compared to the time needed for the forward and backward pass.

Algorithm 1 Continuous relaxation loss
Parameters: number of ground truth frames T, number of keyframes N.
Input: ground truth frames I_{1:T}, generated frames Î, generated offset distributions δ_n.
1: Convert the distributions of interframe offsets δ_n to keyframe timestep distributions τ_n: for the first keyframe, τ_1 = δ_1; further τ_n are computed with the chain rule as the convolution of τ_{n−1} and δ_n.
2: Compute the probabilities of keyframes being within the predicted sequence: c_n = Σ_{t≤T} τ_{n,t}.
3: Compute the soft keyframe targets K̃_n = Σ_t τ_{n,t} I_t and the keyframe loss.
4: Compute the probabilities of segments ending after particular frames, and the sequence loss.

Standard stochastic video prediction methods do not attempt to estimate keyframes, as they are designed to densely estimate future videos frame-by-frame. Accordingly, they cannot be used directly as baselines for keyframe prediction methods such as KEYIN. It has been observed that the variance of the learned prior of a stochastic video prediction model tends to spike before an uncertain event happens. We exploit this observation to find the points of high uncertainty for our strong Surprise baseline. We use the KL divergence between the prior and the approximate posterior, KL[q(z_t | I_{1:t}) ‖ p(z_t)], to measure the surprise. This quantity can be interpreted as the number of bits needed to encode the latent variable describing the next state; it will be larger if the next state is more stochastic. We train a stochastic video prediction network with a fixed prior and with the same encoder, decoder, and LSTM architectures as our model. We found that selecting the peaks of surprise works best for finding true keyframes. The procedure we use to select the keyframes is described in Algorithm 2; to find the keyframes in a sequence sampled from the prior, we run the inference network on the generated sequence.

Algorithm 2 Surprise-based keyframe selection
Compute the surprise measure s_t. Find the set of peak surprise points S where s_t > s_{t−1} ∧ s_t > s_{t+1}. If |S| < M, add the M − |S| maximum-surprise points to S. Return the M keyframes from S with maximum surprise.

We show qualitative results of training KEYIN with N = 4, 6 (the optimal number), and 8 on the SBM dataset in Fig. 9. We observe that if we train KEYIN to select a smaller or a larger number of keyframes than needed, it learns to predict a subset or a superset of the true keyframes, respectively. This property follows from the structure of the model, which encourages the model to predict the keyframes that allow the full sequence to be inpainted. When too few keyframes are available, the model will be unable to put keyframes at all important times, but those it picks must still be good for inpainting. When more keyframes are available than necessary, the model can place the additional keyframes at any time, as only a subset of the keyframes is needed to ensure good inpainting.
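Step 1 of Algorithm 1, the offset-to-timestep conversion via convolution, can be sketched as follows (NumPy; the subsequent normalization of τ over the first T steps, described in the main text, is left out):

import numpy as np

def offsets_to_keyframe_times(delta, T):
    # delta: [N, J]; delta[n, j] = p(offset_n = j + 1), each row a pmf.
    # Returns tau [N, T] truncated to the first T steps and c [N], the
    # probability mass of each keyframe landing inside the sequence.
    N, J = delta.shape
    tau = np.zeros((N, T))
    pmf = None      # pmf over the running sum of offsets
    v_min = 0       # value represented by pmf[0]
    for n in range(N):
        # tau_n = tau_{n-1} + delta_n, so its pmf is the convolution
        pmf = delta[n] if pmf is None else np.convolve(pmf, delta[n])
        v_min += 1  # each offset is at least 1
        for k, p in enumerate(pmf):
            t = v_min + k           # 1-indexed timestep of keyframe n
            if t <= T:
                tau[n, t - 1] = p
    c = tau.sum(axis=1)             # c_n, used to weight the keyframe loss
    return tau, c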
Figure 12: The top row shows the planned subgoals using KEYIN. The bottom row shows snapshots from a successful trajectory between the start state on the left and the goal state (depicted transparently in each frame). The low-level execution closely follows the given subgoals and successfully reaches the goal.

To apply the KEYIN model for planning, we use the approach for visual planning outlined in Algs. 3 and 4 of the Appendix. At the initial timestep, we use the cross-entropy method (CEM) to select subgoals for the task. To do so, we sample M latent sequences z^0 from the prior N(0, I) and use the keyframe model to retrieve M corresponding keyframe sequences, each with L frames. We define the cost of an image trajectory as the distance between the target image and the final image of each keyframe sequence under a domain-specific distance function (see below). In the update step of the CEM algorithm, we rank the trajectories based on their cost and fit a diagonal Gaussian distribution to the latents z that generated the M̃ = rM best sequences, where r is the elite ratio. We repeat the procedure above for a total of N iterations.

Figure 11: Example keyframe detections on noisy sequences. Red frames mark annotated keyframes. Top: Pushing dataset. Bottom: Gridworld dataset. Each triplet depicts, top: the ground truth sequence with additive Gaussian noise; middle: predicted keyframes at the predicted time steps; bottom: the predicted full sequence. KEYIN is reliably able to detect keyframes and reconstruct the full sequence.

Table 7: F1 score and distance to the closest annotated/predicted keyframe when trained and tested on sequences with additive Gaussian noise. KEYIN is able to reliably find keyframes on both datasets even when trained and tested on noisy sequences. Even though the F1 score is lower on the Pushing dataset, the distances indicate that the discovered keyframes are well aligned with the annotated keyframes even under noise.

We define the cost between two frames used during planning as the Euclidean distance between the center pixels of the object in both frames, where we recover the center pixel via color-based segmentation of the object. While this cost function is designed for the particular planning environment we are testing on, our algorithm can be easily extended to use alternative, more domain-agnostic cost formulations that have been proposed in the literature (e.g., Ebert et al., 2017).

After subgoals are selected, we use a CEM-based planner to produce rollout trajectories. Similar to the subgoal generation procedure, at each time step we initially sample M action sequences u^0 from the prior N(0, I) and use the ground truth dynamics of the simulator to retrieve M corresponding image sequences, each with l frames. We define the cost of an image trajectory as the distance between the target image and the final image of each trajectory. In the update step, we rank the trajectories based on their cost and fit a diagonal Gaussian distribution to the actions u that generated the M̃ = rM best sequences. After sampling a new set of actions u^{n+1} from the fitted Gaussian distributions, we repeat the procedure above for a total of N iterations. Finally, we execute the first action in the action sequence corresponding to the best rollout of the final CEM iteration. The action at the next time step is chosen using the same procedure with the next observation as input and reinitialized action distributions. The algorithm terminates when the specified maximal number of planning steps T_max has been executed or the distance to the goal is below a set threshold.
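Both the subgoal selection and the low-level planner are instances of the same generic CEM loop, which can be sketched as follows; sample_trajectory and cost_fn stand in for the keyframe model / simulator dynamics and the segmentation-based cost, and all names here are illustrative rather than the authors' implementation.

import numpy as np

def cem_plan(sample_trajectory, cost_fn, dim, horizon,
             n_samples=100, elite_ratio=0.1, n_iters=3):
    # sample_trajectory(z) maps a latent/action sequence [horizon, dim] to a
    # trajectory whose final frame is scored by cost_fn.
    mu = np.zeros((horizon, dim))
    sigma = np.ones((horizon, dim))
    for _ in range(n_iters):
        z = mu + sigma * np.random.randn(n_samples, horizon, dim)
        costs = np.array([cost_fn(sample_trajectory(zi)) for zi in z])
        # Refit a diagonal Gaussian to the elite (lowest-cost) samples
        elites = z[np.argsort(costs)[: max(1, int(elite_ratio * n_samples))]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu  # best latent/action sequence estimate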
We propose a model that learns to discover informative frames in a future video sequence and represent the video via its keyframes.
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation-learning objectives for specific end-goals.

One core objective of deep learning is to discover useful representations, and the simple idea explored here is to train a representation-learning function, i.e. an encoder, to maximize the mutual information (MI) between its inputs and outputs. MI is notoriously difficult to compute, particularly in continuous and high-dimensional settings. Fortunately, recent advances enable effective computation of MI between high-dimensional input/output pairs of deep neural networks. We leverage MI estimation for representation learning and show that, depending on the downstream task, maximizing MI between the complete input and the encoder output (i.e., global MI) is often insufficient for learning useful representations. Rather, structure matters: maximizing the average MI between the representation and local regions of the input (e.g. patches rather than the complete image) can greatly improve the representation's quality for, e.g., classification tasks, while global MI plays a stronger role in the ability to reconstruct the full input given the representation.

Usefulness of a representation is not just a matter of information content: representational characteristics like independence also play an important role. We combine MI maximization with prior matching in a manner similar to adversarial autoencoders to constrain representations according to desired statistical properties. This approach is closely related to the infomax optimization principle, so we call our method Deep InfoMax (DIM). Our main contributions are the following:

• We formalize Deep InfoMax (DIM), which simultaneously estimates and maximizes the mutual information between input data and learned high-level representations.
• Our mutual information maximization procedure can prioritize global or local information, which we show can be used to tune the suitability of learned representations for classification or reconstruction-style tasks.
• We use adversarial learning to constrain the representation to have desired statistical characteristics specific to a prior.
• We introduce two new measures of representation quality, one based on Mutual Information Neural Estimation (MINE) and a neural dependency measure (NDM), and we use these to bolster our comparison of DIM to different unsupervised methods.
More recent approaches include deep volume-preserving maps (;, deep clustering BID8), noise as targets , and self-supervised or co-learning (; ;).Generative models are also commonly used for building representations BID5;;; ), and mutual information (MI) plays an important role in the quality of the representations they learn. In generative models that rely on reconstruction (e.g., denoising, variational, and adversarial autoencoders, ; ; ;), the reconstruction error can be related to the MI as follows:I e (X, Y) = H e (X) − H e (X|Y) ≥ H e (X) − R e,d (X|Y),where X and Y denote the input and output of an encoder which is applied to inputs sampled from some source distribution. R e,d (X|Y) denotes the expected reconstruction error of X given the codes Y. H e (X) and H e (X|Y) denote the marginal and conditional entropy of X in the distribution formed by applying the encoder to inputs sampled from the source distribution. Thus, in typical settings, models with reconstruction-type objectives provide some guarantees on the amount of information encoded in their intermediate representations. Similar guarantees exist for bi-directional adversarial models , which adversarially train an encoder / decoder to match their respective joint distributions or to minimize the reconstruction error .Mutual-information estimation Methods based on mutual information have a long history in unsupervised feature learning. The infomax principle , as prescribed for neural networks, advocates maximizing MI between the input and output. This is the basis of numerous ICA algorithms, which can be nonlinear (Hyvärinen & ; BID1 but are often hard to adapt for use with deep networks. Mutual Information Neural Estimation learns an estimate of the MI of continuous variables, is strongly consistent, and can be used to learn better implicit bi-directional generative models. Deep InfoMax (DIM) follows MINE in this regard, though we find that the generator is unnecessary. We also find it unnecessary to use the exact KL-based formulation of MI. For example, a simple alternative based on the Jensen-Shannon divergence (JSD) is more stable and provides better . We will show that DIM can work with various MI estimators. Most significantly, DIM can leverage local structure in the input to improve the suitability of representations for classification. Leveraging known structure in the input when designing objectives based on MI maximization is nothing new BID3 BID4 BID7, and some very recent works also follow this intuition. It has been shown in the case of discrete MI that data augmentations and other transformations can be used to avoid degenerate solutions . Unsupervised clustering and segmentation is attainable by maximizing the MI between images associated by transforms or spatial proximity . Our work investigates the suitability of representations learned across two different MI objectives that focus on local or global structure, a flexibility we believe is necessary for training representations intended for different applications. Proposed independently of DIM, Contrastive Predictive Coding is a MIbased approach that, like DIM, maximizes MI between global and local representation pairs. CPC shares some motivations and computations with DIM, but there are important ways in which CPC and DIM differ. CPC processes local features sequentially to build partial "summary features", which are used to make predictions about specific local features in the "future" of each summary feature. 
This equates to ordered autoregression over the local features, and requires training separate estimators for each temporal offset at which one would like to predict the future. In contrast, the basic version of DIM uses a single summary feature that is a function of all local features, and this "global" feature predicts all local features simultaneously in a single step using a single estimator. Note that, when using occlusions during training (see Section 4.3 for details), DIM performs both "self" predictions and orderless autoregression.

Figure 1: The base encoder model in the context of image data. An image (in this case) is encoded using a convnet until reaching a feature map of M × M feature vectors corresponding to M × M input patches. These vectors are summarized into a single feature vector, Y. Our goal is to train this network such that useful information about the input is easily extracted from the high-level features.

Figure 2: Deep InfoMax (DIM) with a global MI(X; Y) objective. Here, we pass both the high-level feature vector, Y, and the lower-level M × M feature map (see Figure 1) through a discriminator to get the score. Fake samples are drawn by combining the same feature vector with an M × M feature map from another image.

Here we outline the general setting of training an encoder to maximize mutual information between its input and output. Let X and Y be the domain and range of a continuous and (almost everywhere) differentiable parametric function, E_ψ: X → Y, with parameters ψ (e.g., a neural network). These parameters define a family of encoders, E_Ψ = {E_ψ}_{ψ∈Ψ}. Assume that we are given a set of training examples on an input space X,

X := {x^(i) ∈ X}_{i=1}^{N},

with empirical probability distribution P. We define U_{ψ,P} to be the marginal distribution induced by pushing samples from P through E_ψ; i.e., U_{ψ,P} is the distribution over encodings y ∈ Y produced by sampling observations x ∼ P and then sampling y ∼ E_ψ(x). An example encoder for image data is given in Figure 1, which will be used in the following sections, but this approach can easily be adapted for temporal data. Similar to the infomax optimization principle, we assert our encoder should be trained according to the following criteria:

• Mutual information maximization: Find the set of parameters, ψ, such that the mutual information, I(X; E_ψ(X)), is maximized. Depending on the end-goal, this maximization can be done over the complete input, X, or some structured or "local" subset.
• Statistical constraints: Depending on the end-goal for the representation, the marginal U_{ψ,P} should match a prior distribution, V. Roughly speaking, this can be used to encourage the output of the encoder to have desired characteristics (e.g., independence).

The formulation of these two objectives covered below we call Deep InfoMax (DIM). Our basic mutual information maximization framework is presented in Figure 2.

FIG0 (local DIM): First we encode the image to a feature map that reflects some structural aspect of the data, e.g. spatial locality, and we further summarize this feature map into a global feature vector (see Figure 1). We then concatenate this feature vector with the lower-level feature map at every location.
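A minimal sketch of the encoder in Figure 1, producing an M × M local feature map and a global summary vector; the layer sizes here are placeholders, not the architecture used in the paper.

import torch
import torch.nn as nn

class DIMEncoder(nn.Module):
    # Convnet trunk yields an M x M map of local feature vectors C_psi(x);
    # a small head summarizes it into the global feature E_psi(x) = Y.
    def __init__(self, in_ch=3, feat_ch=64, global_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(global_dim))

    def forward(self, x):
        local = self.trunk(x)        # [B, feat_ch, M, M] local feature map
        global_y = self.head(local)  # [B, global_dim] summary feature Y
        return local, global_y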
A score is produced for each local-global pair through an additional function (see Appendix A.2 for details).

MINE uses a lower-bound to the MI based on the Donsker-Varadhan (DV) representation of the KL-divergence,

I(X; Y) ≥ Î_ω^{(DV)}(X; Y) := E_J[T_ω(x, y)] − log E_M[e^{T_ω(x, y)}],   (2)

where T_ω: X × Y → R is a discriminator function modeled by a neural network with parameters ω. At a high level, we optimize E_ψ by simultaneously estimating and maximizing I(X, E_ψ(X)),

(ω̂, ψ̂)_G = arg max_{ω,ψ} Î_ω(X; E_ψ(X)),   (3)

where the subscript G denotes "global" for reasons that will be clear later. However, there are some important differences that distinguish our approach from MINE. First, because the encoder and mutual information estimator are optimizing the same objective and require similar computations, we share layers between these functions, so that

T_{ψ,ω}(x, y) = D_ω ∘ g(C_ψ(x), E_ψ(x)),

where g is a function that combines the encoder output with the lower layer. Second, as we are primarily interested in maximizing MI, and not concerned with its precise value, we can rely on non-KL divergences which may offer favourable trade-offs. For example, one could define a Jensen-Shannon MI estimator (following the f-GAN formulation),

Î_{ω,ψ}^{(JSD)}(X; E_ψ(X)) := E_P[−sp(−T_{ψ,ω}(x, E_ψ(x)))] − E_{P×P̃}[sp(T_{ψ,ω}(x′, E_ψ(x)))],   (4)

where x is an input sample, x′ is an input sampled from P̃ = P, and sp(z) = log(1 + e^z) is the softplus function. A similar estimator appeared in the context of minimizing the total correlation, and it amounts to the familiar binary cross-entropy. This is well-understood in terms of neural network optimization, and we find it works better in practice (e.g., is more stable) than the DV-based objective (see App. A.3). Intuitively, the Jensen-Shannon-based estimator should behave similarly to the DV-based estimator in Eq. 2, since both act like classifiers whose objectives maximize the expected log-ratio of the joint over the product of marginals. We show in App. A.1 the relationship between the JSD estimator and the formal definition of mutual information. Noise-Contrastive Estimation (NCE, Gutmann & Hyvärinen, 2010; 2012) was first used as a bound on MI (and called "infoNCE") in Oord et al. (2018), and this loss can also be used with DIM by maximizing:

Î_{ω,ψ}^{(infoNCE)}(X; E_ψ(X)) := E_P[ T_{ψ,ω}(x, E_ψ(x)) − E_{P̃}[ log Σ_{x′} e^{T_{ψ,ω}(x′, E_ψ(x))} ] ].   (5)

For DIM, a key difference between the DV, JSD, and infoNCE formulations is whether an expectation over P/P̃ appears inside or outside of a log. In fact, the JSD-based objective mirrors the original NCE formulation in Gutmann & Hyvärinen, which phrased unnormalized density estimation as binary classification between the data distribution and a noise distribution. DIM sets the noise distribution to the product of marginals over X/Y, and the data distribution to the true joint. The infoNCE formulation in Eq. 5 follows a softmax-based version of NCE, similar to ones used in the language modeling community, and which has strong connections to the binary cross-entropy in the context of noise-contrastive learning. In practice, implementations of these estimators appear quite similar and can reuse most of the same code. We investigate JSD and infoNCE in our experiments, and find that using infoNCE often outperforms JSD on downstream tasks, though this effect diminishes with more challenging data. However, as we show in App. A.3, infoNCE and DV require a large number of negative samples (samples from P̃) to be competitive. We generate negative samples using all combinations of global and local features at all locations of the relevant feature map, across all images in a batch.
For a batch of size B, that gives O(B × M²) negative samples per positive example, which quickly becomes cumbersome with increasing batch size. We found that DIM with the JSD loss is insensitive to the number of negative samples, and in fact outperforms infoNCE as the number of negative samples becomes smaller.

The objective in Eq. 3 can be used to maximize MI between input and output, but ultimately this may be undesirable depending on the task. For example, trivial pixel-level noise is useless for image classification, so a representation may not benefit from encoding this information (e.g., in zero-shot learning, transfer learning, etc.). In order to obtain a representation more suitable for classification, we can instead maximize the average MI between the high-level representation and local patches of the image. Because the same representation is encouraged to have high MI with all the patches, this favours encoding aspects of the data that are shared across patches.

Suppose the feature vector is of limited capacity (number of units and range) and assume the encoder does not support infinite output configurations. For maximizing the MI between the whole input and the representation, the encoder can pick and choose what type of information in the input is passed through the encoder, such as noise specific to local patches or pixels. However, if the encoder passes information specific to only some parts of the input, this does not increase the MI with any of the other patches that do not contain said noise. This encourages the encoder to prefer information that is shared across the input, and this hypothesis is supported in our experiments below.

Our local DIM framework is presented in FIG0. First we encode the input to a feature map, C_ψ(x) := {C_ψ^{(i)}(x)}_{i=1}^{M×M}, that reflects useful structure in the data (e.g., spatial locality), indexed in this case by i. Next, we summarize this local feature map into a global feature, E_ψ(x) = f_ψ ∘ C_ψ(x). We then define our MI estimator on global/local pairs, maximizing the average estimated MI:

(ω̂, ψ̂)_L = arg max_{ω,ψ} (1 / M²) Σ_{i=1}^{M²} Î_{ω,ψ}(C_ψ^{(i)}(x); E_ψ(x)).   (6)

We found success optimizing this "local" objective with multiple easy-to-implement architectures, and further implementation details are provided in App. A.2.

Absolute magnitude of information is only one desirable property of a representation; depending on the application, good representations can be compact, independent, disentangled, or independently controllable. DIM imposes statistical constraints onto learned representations by implicitly training the encoder so that the push-forward distribution, U_{ψ,P}, matches a prior, V. This is done (see Figure 7 in App. A.2) by training a discriminator, D_φ: Y → R, to estimate the divergence, D(V || U_{ψ,P}), then training the encoder to minimize this estimate:

(ω̂, ψ̂)_P = arg min_ψ arg max_φ D̂_φ(V || U_{ψ,P}) = E_V[log D_φ(y)] + E_P[log(1 − D_φ(E_ψ(x)))].   (7)

This approach is similar to what is done in adversarial autoencoders, but without a generator. It is also similar to noise as targets, but trains the encoder to match the noise implicitly rather than using a priori noise samples as targets.

All three objectives (global and local MI maximization and prior matching) can be used together, and doing so we arrive at our complete objective for Deep InfoMax (DIM):

arg max_{ω_1, ω_2, ψ} ( α Î_{ω_1,ψ}(X; E_ψ(X)) + (β / M²) Σ_{i=1}^{M²} Î_{ω_2,ψ}(X^{(i)}; E_ψ(X)) ) + arg min_ψ arg max_φ γ D̂_φ(V || U_{ψ,P}),   (8)

where ω_1 and ω_2 are the discriminator parameters for the global and local objectives, respectively, and α, β, and γ are hyperparameters. We will show below that choices in these hyperparameters affect the learned representations in meaningful ways.
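To make the local objective concrete, the following is a minimal PyTorch sketch of the JSD-based estimator under the dot-product scoring style described in App. A.2. The function name, the tensor shapes, and the exact normalization are our illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def local_jsd_mi(local_feats, global_feats):
    """JSD-based MI lower bound between a global feature and all local features.

    local_feats:  (B, C, M, M) local feature map C_psi(x) for a batch of B > 1 images,
                  already embedded to the same dimensionality C as the global features.
    global_feats: (B, C) global features E_psi(x).
    Matching global/local pairs are "joint" samples; every mismatched pairing
    across the batch is a "product of marginals" (negative) sample.
    """
    B, C, M, _ = local_feats.shape
    locals_flat = local_feats.reshape(B, C, M * M)
    # scores[b, d, l]: dot-product score between global feature b and local feature l of image d.
    scores = torch.einsum('bc,dcl->bdl', global_feats, locals_flat)

    pos_mask = torch.eye(B, device=scores.device).unsqueeze(-1)  # (B, B, 1), broadcasts over l
    e_joint = (-F.softplus(-scores) * pos_mask).sum() / (B * M * M)
    e_marg = (F.softplus(scores) * (1.0 - pos_mask)).sum() / (B * (B - 1) * M * M)
    return e_joint - e_marg  # maximize this; use its negation as the loss
```

Note how the negatives come for free here: every mismatched global/local pairing in the batch is scored by the same einsum, which is what makes the batchwise O(B × M²) negative sampling cheap.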
As an interesting aside, we also show in the App. (A.8) that this prior matching can be used alone to train a generator of image data. We test Deep InfoMax (DIM) on four imaging datasets to evaluate its representational properties. Note that we take CPC to mean ordered autoregression using summary features to predict "future" local features, independent of the contrastive loss used to evaluate the predictions (JSD, infoNCE, or DV). See App. A.2 for details of the neural net architectures used in the experiments.

Evaluation of representations is case-driven and relies on various proxies. Linear separability is commonly used as a proxy for disentanglement and mutual information (MI) between representations and class labels. Unfortunately, this will not show whether the representation has high MI with the class labels when the representation is not disentangled. Other works have looked at transfer learning classification tasks by freezing the weights of the encoder and training a small fully-connected neural network classifier using the representation as input. Others still have more directly measured the MI between the labels and the representation, which can also reveal the representation's degree of entanglement. Class labels have limited use in evaluating representations, as we are often interested in information encoded in the representation that is unknown to us. However, we can use mutual information neural estimation to more directly measure the MI between the input and output of the encoder. Next, we can directly measure the independence of the representation using a discriminator. Given a batch of representations, we generate a factor-wise independent distribution with the same per-factor marginals by randomly shuffling each factor along the batch dimension. A similar trick has been used for learning maximally independent representations for sequential data. We can train a discriminator to estimate the KL-divergence between the original representations (joint distribution of the factors) and the shuffled representations (product of the marginals, see Figure 12). The higher the KL divergence, the more dependent the factors. We call this evaluation method Neural Dependency Measure (NDM) and show that it is sensible and empirically consistent in App. A.6.

To summarize, we use the following metrics for evaluating representations. For each of these, the encoder is held fixed unless noted otherwise:

• Linear classification using a support vector machine (SVM). This is simultaneously a proxy for MI of the representation with the labels and a measure of linear separability.

• Non-linear classification using a single hidden layer neural network (200 units) with dropout. This is a proxy for MI of the representation with the labels, separate from linear separability as measured with the SVM above.

• Semi-supervised learning (STL-10 here), that is, fine-tuning the complete encoder by adding a small neural network on top of the last convolutional layer (matching architectures with a standard fully-supervised classifier).

• MS-SSIM (BID6), using a decoder trained on the L2 reconstruction loss.
This is a proxy for the total MI between the input and the representation and can indicate the amount of encoded pixel-level information.

• Mutual information neural estimate (MINE), Î_ρ(X, E_ψ(x)), between the input, X, and the output representation, E_ψ(x), obtained by training a discriminator with parameters ρ to maximize the DV estimator of the KL-divergence.

• Neural dependency measure (NDM) using a second discriminator that measures the KL between E_ψ(x) and a batch-wise shuffled version of E_ψ(x).

For the neural network classification evaluation above, we performed experiments on all datasets except CelebA, while for other measures we only looked at CIFAR10. For all classification tasks, we built separate classifiers on the high-level vector representation (Y), the output of the previous fully-connected layer (fc), and the last convolutional layer (conv). Model selection for the classifiers was done by averaging the last 100 epochs of optimization, and the dropout rate and decaying learning rate schedule were set uniformly to alleviate over-fitting on the test set across all models. In the following experiments, DIM(G) refers to DIM with a global-only objective (α = 1, β = 0, γ = 1) and DIM(L) refers to DIM with a local-only objective (α = 0, β = 1, γ = 0.1), the latter chosen from the results of an ablation study presented in App. A.5. For the prior, we chose a compact uniform distribution on [0, 1]^64, which worked better in practice than other priors, such as Gaussian, unit ball, or unit sphere.

Classification results are presented in TAB0 and TAB3. In general, DIM with the local objective, DIM(L), outperformed all models presented here by a significant margin on all datasets, regardless of which layer the representation was drawn from, with the exception of CPC. For the specific settings presented (architectures, no data augmentation for datasets except for STL-10), DIM(L) performs as well as or outperforms a fully-supervised classifier without fine-tuning, which indicates that the representations are nearly as good as or better than the raw pixels given the model constraints in this setting. Note, however, that a fully supervised classifier can perform much better on all of these benchmarks, especially when specialized architectures and carefully-chosen data augmentations are used. Competitive or better results on CIFAR10 also exist (albeit in different settings), but to our knowledge our STL-10 results are state-of-the-art for unsupervised learning. The results in this setting support the hypothesis that our local DIM objective is suitable for extracting class information. Our results show that infoNCE tends to perform best, but differences between infoNCE and JSD diminish with larger datasets. DV can compete with JSD with smaller datasets, but DV performs much worse with larger datasets.

For CPC, we were only able to achieve marginally better performance than BiGAN with the settings above. However, when we adopted the strided-crop architecture of the CPC work, both CPC and DIM performance improved considerably. We chose a crop size of 25% of the image size in width and height, with a stride of 12.5% of the image size (e.g., 8 × 8 crops with 4 × 4 strides for CIFAR10, 16 × 16 crops with 8 × 8 strides for STL-10), so that there were a total of 7 × 7 local features. For both DIM(L) and CPC, we used infoNCE as well as the same "encode-and-dot-product" architecture (tantamount to a deep bilinear model), rather than the shallow bilinear model used in that work.
For CPC, we used a total of 3 such networks, where each network is used for a separate prediction task: predicting the local feature maps in the next 3 rows from a summary predictor feature within each column. For simplicity, we omitted the prior term, γ, from DIM. Without data augmentation on CIFAR10, CPC performs worse than DIM(L) with a ResNet-50 type architecture. For experiments we ran on STL-10 with data augmentation (using the same encoder architecture as TAB1), CPC and DIM were competitive, with CPC performing slightly better. CPC makes predictions based on multiple summary features, each of which contains different amounts of information about the full input. We can add similar behavior to DIM by computing less-global features which condition on 3 × 3 blocks of local features sampled at random from the full 7 × 7 sets of local features. We then maximize mutual information between these less-global features and the full sets of local features. We share a single MI estimator across all possible 3 × 3 blocks of local features when using this version of DIM. This represents a particular instance of the occlusion technique described in Section 4.3. The resulting model gave a significant performance boost to DIM for STL-10. These experiments used a strided-crop architecture similar to the one used in CPC. For CIFAR10 we used a ResNet-50 encoder, and for STL-10 we used the same architecture as for TAB1. We also tested a version of DIM that computes the global representation from a 3 × 3 block of local features randomly selected from the full 7 × 7 set of local features; this is a particular instance of the occlusions described in Section 4.3. DIM(L) is competitive with CPC in these settings. Surprisingly, this same architecture performed worse than using the fully global representation with CIFAR10. Overall DIM only slightly outperforms CPC in this setting, which suggests that the strictly ordered autoregression of CPC may be unnecessary for some tasks.

TAB4 presents results on linear separability, reconstruction (MS-SSIM), mutual information, and dependence (NDM) with the CIFAR10 dataset. We did not compare to CPC due to the divergence of architectures. For the linear classifier (SVC), we trained five support vector machines with a simple hinge loss for each model, averaging the test accuracy. For MINE, we used a decaying learning rate schedule, which helped reduce variance in estimates and provided faster convergence. MS-SSIM correlated well with the MI estimate provided by MINE, indicating that these models encoded pixel-wise information well. Overall, all models showed much lower dependence than BiGAN, indicating that the marginal of the encoder output does not match the generator's spherical Gaussian input prior, though the mixed local/global version of DIM is close. For MI, reconstruction-based models like VAE and AAE have high scores, and we found that combining local and global DIM objectives yielded very high scores (α = 0.5, β = 0.1 is presented here as DIM(L+G)). For more in-depth analyses, please see the ablation studies and the nearest-neighbor analysis in the App. (A.4, A.5).

Maximizing MI between global and local features is not the only way to leverage image structure. We consider augmenting DIM by adding input occlusion when computing global features and by adding auxiliary tasks which maximize MI between local features and absolute or relative spatial coordinates given a global feature.
These additions improve classification results (see TAB5).

For occlusion, we randomly occlude part of the input when computing the global features, but compute local features using the full input. Maximizing MI between occluded global features and unoccluded local features aggressively encourages the global features to encode information which is shared across the entire image. For coordinate prediction, we maximize the model's ability to predict the coordinates (i, j) of a local feature c^{(i,j)} = C_ψ^{(i,j)}(x) after computing the global features y. To accomplish this, we maximize E[log p_θ((i, j) | y, c^{(i,j)})] (i.e., minimize the cross-entropy). We can extend the task to maximize conditional MI given global features y between pairs of local features (c^{(i,j)}, c^{(i′,j′)}) and their relative coordinates (i − i′, j − j′). This objective can be written as E[log p_θ((i − i′, j − j′) | y, c^{(i,j)}, c^{(i′,j′)})]. We use both these objectives in our experiments. Additional implementation details can be found in App. A.7.

Roughly speaking, our input occlusions and coordinate prediction tasks can be interpreted as generalizations of inpainting and context prediction tasks which have previously been proposed for self-supervised feature learning. Augmenting DIM with these tasks helps move our method further towards learning representations which encode images (or other types of inputs) not just in terms of compressing their low-level (e.g. pixel) content, but in terms of distributions over relations among higher-level features extracted from their lower-level content.

In this work, we introduced Deep InfoMax (DIM), a new method for learning unsupervised representations by maximizing mutual information, allowing for representations that contain locally-consistent information across structural "locations" (e.g., patches in an image). This provides a straightforward and flexible way to learn representations that perform well on a variety of tasks. We believe that this is an important direction in learning higher-level representations.

Here we show the relationship between the Jensen-Shannon divergence (JSD) between the joint and the product of marginals and the pointwise mutual information (PMI). Let p(x) and p(y) be two marginal densities, and define p(y|x) and p(x, y) = p(y|x)p(x) as the conditional and joint distribution, respectively. Construct a probability mixture density, m(x, y) = 1/2 (p(x)p(y) + p(x, y)). It follows that m(x) = p(x), m(y) = p(y), and m(y|x) = 1/2 (p(y) + p(y|x)). Note that:

JSD(p(x, y) || p(x)p(y)) = (1/2) E_{p(x,y)}[log (p(x, y) / m(x, y))] + (1/2) E_{p(x)p(y)}[log (p(x)p(y) / m(x, y))].

Discarding some constants:

JSD(p(x, y) || p(x)p(y)) ∝ E_{p(x)}[ E_{p(y|x)}[log (p(y|x) / (p(y) + p(y|x)))] + E_{p(y)}[log (p(y) / (p(y) + p(y|x)))] ].   (10)

The quantity inside the expectation of Eqn. 10 is a concave, monotonically increasing function of the ratio p(y|x)/p(y), which is exactly e^{PMI(x,y)}. Note this relationship does not hold for the JSD of arbitrary distributions, as the joint and product of marginals are intimately coupled. We can verify our theoretical observation by plotting the JSD and KL divergences between the joint and the product of marginals, the latter of which is the formal definition of mutual information (MI). As computing the continuous MI is difficult, we assume a discrete input with uniform probability, p(x) (e.g., these could be one-hot variables indicating one of N i.i.d. random samples), and a randomly initialized N × M joint distribution, p(x, y), such that Σ_{j=1}^{M} p(x_i, y_j) = 1 ∀i. For this joint distribution, we sample from a uniform distribution, then apply dropout to encourage sparsity to simulate the situation when there is no bijective function between x and y, then apply a softmax.
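A small NumPy sketch of this discrete check follows, purely as an illustration: the dropout-style masking used to induce sparsity, the sizes, and the renormalization (in place of the softmax above) are our assumptions.

```python
import numpy as np

def kl(p, q):
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

def jsd(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
N = M = 16
# Random sparse joint p(x, y) with uniform marginal p(x) = 1/N.
joint = rng.uniform(size=(N, M)) * (rng.uniform(size=(N, M)) > 0.5) + 1e-12
joint /= joint.sum(axis=1, keepdims=True) * N          # each row sums to 1/N
prod = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)  # p(x)p(y)
print("MI (KL):", kl(joint.ravel(), prod.ravel()), "JSD:", jsd(joint.ravel(), prod.ravel()))
```

Repeating this over many random joints traces out the KL-versus-JSD relationship plotted in Figure A.1.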
As the distributions are discrete, we can compute the KL and JSD between p(x, y) and p(x)p(y). We ran these experiments with matched input / output dimensions of 8, 16, 32, 64, and 128, randomly drawing 1000 joint distributions, and computed the KL and JSD divergences directly. Our results (Figure A.1) indicate that the KL (the traditional definition of mutual information) and the JSD have an approximately monotonic relationship. Overall, the distributions with the highest mutual information also have the highest JSD.

Here we provide architectural details for our experiments. Example code for running Deep InfoMax (DIM) can be found at https://github.com/rdevon/DIM.

Encoder. We used an encoder similar to a deep convolutional GAN (DCGAN) discriminator for CIFAR10 and CIFAR100, and for all other datasets we used an Alexnet-style architecture. ReLU activations and batch norm were used on every hidden layer. For the DCGAN architecture, a single hidden layer with 1024 units was used after the final convolutional layer, and for the Alexnet architecture it was two hidden layers with 4096 units. For all experiments, the output of all encoders was a 64-dimensional vector.

Figure A.1 (caption): KL and JSD divergences with discrete inputs and a given randomized and sparse joint distribution, p(x, y). 8 × 8 indicates a square joint distribution with 8 rows and 8 columns. Our experiments indicate a strong monotonic relationship between MI(x; y) and JSD(p(x, y)||p(x)p(y)). Overall, the distributions with the highest MI(x; y) have the highest JSD(p(x, y)||p(x)p(y)).

Mutual information discriminators. For the global mutual information objective, we first encode the input into a feature map, C_ψ(x), which in this case is the output of the last convolutional layer. We then encode this representation further using linear layers as detailed above to get E_ψ(x). C_ψ(x) is then flattened, then concatenated with E_ψ(x). We then pass this to a fully-connected network with two 512-unit hidden layers (see TAB7).

We tested two different architectures for the local objective. The first (Figure 5) concatenated the global feature vector with the feature map at every location, i.e., {[C_ψ^{(i)}(x), E_ψ(x)]}_{i=1}^{M×M}. A 1 × 1 convolutional discriminator is then used to score the (feature map, feature vector) pair,

T_{ψ,ω}^{(i)}(x, E_ψ(x)) = D_ω([C_ψ^{(i)}(x), E_ψ(x)]).

Fake samples are generated by combining global feature vectors with local feature maps coming from different images, x′:

T_{ψ,ω}^{(i)}(x′, E_ψ(x)) = D_ω([C_ψ^{(i)}(x′), E_ψ(x)]).

This architecture is featured in the results of TAB4, as well as the ablation and nearest-neighbor studies below. We used a 1 × 1 convnet with two 512-unit hidden layers as the discriminator (Table 7: local DIM concat-and-convolve network architecture).

The other architecture we tested (Figure 6) is based on non-linearly embedding the global and local features in a (much) higher-dimensional space, and then computing pair-wise scores using dot products between their high-dimensional embeddings. This enables efficient evaluation of a large number of pair-wise scores, thus allowing us to use large numbers of positive/negative samples. Given a sufficiently high-dimensional embedding space, this approach can represent (almost) arbitrary classes of pair-wise functions that are non-linear in the original, lower-dimensional features. For more information, refer to Reproducing Kernel Hilbert Spaces.

Figure 5 (caption): Concat-and-convolve architecture. The global feature vector is concatenated with the lower-level feature map at every location. A 1 × 1 convolutional discriminator is then used to score the "real" feature map / feature vector pair, while the "fake" pair is produced by pairing the feature vector with a feature map from another image.
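As an illustration of the concat-and-convolve scorer in Figure 5, here is a minimal PyTorch sketch; the two 512-unit hidden layers follow the description above, while the module name and remaining details are our assumptions.

```python
import torch
import torch.nn as nn

class ConcatAndConvolveScorer(nn.Module):
    """The global feature vector is tiled over all M x M locations, concatenated
    with the local feature map, and scored per location by a 1x1 convnet with
    two 512-unit hidden layers."""
    def __init__(self, local_ch, global_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(local_ch + global_dim, 512, kernel_size=1), nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(),
            nn.Conv2d(512, 1, kernel_size=1),
        )

    def forward(self, local_map, global_vec):
        # local_map: (B, local_ch, M, M); global_vec: (B, global_dim)
        B, _, M, _ = local_map.shape
        tiled = global_vec[:, :, None, None].expand(-1, -1, M, M)
        return self.net(torch.cat([local_map, tiled], dim=1))  # (B, 1, M, M) scores
```

Scoring global_vec against a local_map taken from a different image yields the "fake" scores described in the caption.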
Figure 6 (caption): Encode-and-dot-product architecture. The global feature vector is encoded using a fully-connected network, and the lower-level feature map is encoded using 1 × 1 convolutions, but with the same number of output features. We then take the dot-product between the feature at each location of the feature map encoding and the encoded global vector for scores.

Figure 7 (caption): Matching the output of the encoder to a prior. "Real" samples are drawn from a prior while "fake" samples from the encoder output are sent to a discriminator. The discriminator is trained to distinguish between (classify) these sets of samples. The encoder is trained to "fool" the discriminator.

We pass the global feature through a fully-connected neural network to get the encoded global feature, S_ω(E_ψ(x)). In our experiments, we used a single hidden layer network with a linear shortcut (see TAB8). We embed each local feature in the local feature map C_ψ(x) using an architecture which matches the one for global feature embedding. We apply it via 1 × 1 convolutions. Details are in TAB9. Finally, the outputs of these two networks are combined by matrix multiplication, summing over the feature dimension (2048 in the example above). As this is computed over a batch, this allows us to efficiently compute both positive and negative examples simultaneously. This architecture is featured in our main classification results in TAB0 and TAB5.

For the local objective, the feature map, C_ψ(x), can be taken from any level of the encoder, E_ψ. For the global objective, this is the last convolutional layer, and this objective was insensitive to which layer we used. For the local objectives, we found that using the next-to-last layer worked best for CIFAR10 and CIFAR100, while for the other larger datasets it was the previous layer. This sensitivity is likely due to the relative size of the receptive fields, and further analysis is necessary to better understand this effect. Note that all feature maps used for DIM included the final batch normalization and ReLU activation.

Figure 7 shows a high-level overview of the prior matching architecture. The discriminator used to match the prior in DIM was a fully-connected network with two hidden layers of 1000 and 200 units (TAB0).

Generative models. For the generators / decoders, we used a DCGAN generator in all experiments. All models were trained using Adam with a learning rate of 1 × 10^{-4} for 1000 epochs for CIFAR10 and CIFAR100 and for 200 epochs for all other datasets.

Contrastive Predictive Coding. For Contrastive Predictive Coding, we used a simple GRU-based PixelRNN with the same number of hidden units as the feature map depth. All experiments with CPC had the global state dimension matched with the size of these recurrent hidden units. We found both infoNCE and the DV-based estimators were sensitive to negative sampling strategies, while the JSD-based estimator was insensitive. JSD worked better (1-2% accuracy improvement) when excluding positive samples from the product of marginals, so we exclude them in our implementation. It is quite likely that this is because our batchwise sampling strategy overestimates the frequency of positive examples as measured across the complete dataset.
infoNCE was highly sensitive to the number of negative samples for estimating the log-expectation term (see FIG4). With a high sample size, infoNCE outperformed JSD on many tasks, but performance drops quickly as we reduce the number of images used for this estimation. This may become more problematic for larger datasets and networks where available memory is an issue. DV was outperformed by JSD even with the maximum number of negative samples used in these experiments, and, even worse, was highly unstable as the number of negative samples dropped.

FIG4 (caption): Accuracies shown are averaged over the last 100 epochs, averaged over 3 runs, for the infoNCE, JSD, and DV DIM losses. The x-axis is log base-2 of the number of negative samples (0 means one negative sample per positive sample). JSD is insensitive to the number of negative samples, while infoNCE shows a decline as the number of negative samples decreases. DV also declines, but becomes unstable as the number of negative samples becomes too low.

In order to better understand the metric structure of DIM's representations, we did a nearest-neighbor analysis, randomly choosing a sample from each class in the test set, ordering the test set in terms of L1 distance in the representation space (to reflect the uniform prior), then selecting the four with the lowest distance. Our results in FIG3 show that DIM with a local-only objective, DIM(L), learns a representation with a much more interpretable structure across the image. However, our results potentially highlight an issue with using only consistent information across patches, as many of the nearest neighbors share patterns (colors, shapes, texture) but not class.

FIG5 (caption): Values calculated are points on the grid, and the heatmaps were derived by bilinear interpolation. Heatmaps were thresholded at the minimum value (or maximum for NDM) for visual clarity. The highest (or lowest) value is marked on the grid. NDM here was measured without the sigmoid function.

Figure 11: Ablation study on CelebA over the global and local parameters, α and β. The classification task is multinomial, so provided is the average, minimum, and maximum class accuracies across attributes. While the local objective is crucial, the global objective plays a stronger role here than with other datasets.

To better understand the effects of hyperparameters α, β, and γ on the representational characteristics of the encoder, we performed several ablation studies. These illuminate the relative importance of global versus local mutual information objectives as well as the role of the prior. The results of our ablation study for DIM on CIFAR10 are presented in FIG5. In general, good classification performance is highly dependent on the local term, β, while good reconstruction is highly dependent on the global term, α. However, a small amount of α helps in classification accuracy and a small amount of β improves reconstruction. For mutual information, we found that having a combination of α and β yielded higher MINE estimates. Finally, for CelebA (Figure 11), where the classification task is more fine-grained (composed of potentially locally-specified labels, such as "lipstick" or "smiling"), the global objective plays a stronger role than with classification on other datasets (e.g., CIFAR10).

Figure 12: A schematic of learning the Neural Dependency Measure. For a given batch of inputs, we encode this into a set of representations. We then shuffle each feature (dimension of the feature vector) across the batch axis.
The original version is sent to the discriminator and given the label "real", while the shuffled version is labeled as "fake". The easier this task, the more dependent the components of the representation.

Neural Dependency Measures (NDMs) for various β-VAE (BID0) models (β = 0.1, 0.5, 1.0, 1.5, 2.0, 4.0). Error bars are provided over five runs of each VAE, estimating the NDM with 10 different networks. We find that there is a strong trend as we increase the value of β and that the estimates are relatively consistent and informative w.r.t. independence, as expected. β = 1.0 gives similar numbers. In addition, the variance over estimates and models is relatively low, meaning the estimator is empirically consistent in this setting.

Here we present experimental details on the occlusion and coordinate prediction tasks.

FIG2 (caption): Training with occlusion. With occluded inputs, this loss tends to be highest for local features with receptive fields that overlap the occluded region.

Occlusions. For the occlusion experiments, the sampling distribution for patches to occlude was ad-hoc. Roughly, we randomly occlude the input image under the constraint that at least one 10 × 10 block of pixels remains visible and at least one 10 × 10 block of pixels is fully occluded. We chose 10 × 10 based on the receptive fields of local features in our encoder, since it guarantees that occlusion leaves at least one local feature fully observed and at least one local feature fully unobserved. FIG2 shows the distribution of occlusions used in our tests.

Absolute coordinate prediction. For absolute coordinate prediction, the global features y and local features c^{(i,j)} are sampled by 1) feeding an image from the data distribution through the feature encoder, and 2) sampling a random spatial location (i, j) from which to take the local features c^{(i,j)}. Given y and c^{(i,j)}, we treat the coordinates i and j as independent categorical variables and measure the required log probability using a sum of categorical cross-entropies. In practice, we implement the prediction function p_θ as an MLP with two hidden layers, each with 512 units, ReLU activations, and batchnorm. We marginalize this objective over all local features associated with a given global feature when computing gradients.

Relative coordinate prediction. For relative coordinate prediction, the global features y and local features c^{(i,j)} / c^{(i′,j′)} are sampled by 1) feeding an image from the data distribution through the feature encoder, 2) sampling a random spatial location (i, j) from which to take source local features c^{(i,j)}, and 3) sampling another random location (i′, j′) from which to take target local features c^{(i′,j′)}. In practice, our predictive model for this task uses the same architecture as for the task described previously. For each global feature y we select one source feature c^{(i,j)} and marginalize over all possible target features c^{(i′,j′)} when computing gradients.

We show here, and in our experiments below, that we can use the prior objective in DIM (Equation 7) to train a high-quality generator of images by training U_{ψ,P} to map to a one-dimensional mixture of two Gaussians implicitly. One component of this mixture will be a target for the push-forward distribution of P through the encoder, while the other will be a target for the push-forward distribution of the generator, Q_θ, through the same encoder. Let G_θ: Z → X be a generator function, where the input z ∈ Z is drawn from a simple prior, p(z) (such as a spherical Gaussian).
Let Q_θ be the generated distribution and P be the empirical distribution of the training set. Like in GANs, we will pass the samples of the generator or the training data through another function, E_ψ, in order to get gradients to find the parameters, θ. However, unlike GANs, we will not play the minimax game between the generator and this function. Rather, E_ψ will be trained to generate a mixture of Gaussians conditioned on whether the input sample came from P or Q_θ:

V_P = N(μ_P, 1), V_Q = N(μ_Q, 1), U_{ψ,P} = P # E_ψ, U_{ψ,Q} = Q_θ # E_ψ,

where N(μ_P, 1) and N(μ_Q, 1) are normal distributions with unit variances and means μ_P and μ_Q respectively. In order to find the parameters ψ, we introduce two discriminators, T_{φ_P}, T_{φ_Q}: Y → R, and use the lower bounds defined by the JSD f-GAN:

(ψ̂, φ̂_P, φ̂_Q) = arg min_ψ arg max_{φ_P, φ_Q} [ D̂_{φ_P}(V_P || U_{ψ,P}) + D̂_{φ_Q}(V_Q || U_{ψ,Q}) ].

The generator is trained to move the first-order moment, E_{U_{ψ,Q}}[y] = E_{p(z)}[E_ψ(G_θ(z))], to μ_P:

θ̂ = arg min_θ ( E_{p(z)}[E_ψ(G_θ(z))] − μ_P )².

Some intuition might help understand why this might work. As discussed in BID2, if P and Q_θ have support on low-dimensional manifolds in X, then unless they are perfectly aligned, there exists a discriminator that will be able to perfectly distinguish between samples coming from P and Q_θ, which means that U_{ψ,P} and U_{ψ,Q} must also be disjoint. However, to train the generator, U_{ψ,P} and U_{ψ,Q} need to share support on Y in order to ensure stable and non-zero gradients for the generator. Our own experiments overtraining the discriminator (FIG8) confirm that lack of overlap between the two modes of the discriminator is symptomatic of poor training. Suppose we start with the assumption that the encoder targets, V_P and V_Q, should overlap. Unless P and Q_θ are perfectly aligned (which, according to BID2, is almost guaranteed not to happen with natural images), the discriminator can always accomplish this task by discarding information about P or Q_θ. This means that, by choosing the overlap, we fix the strength of the encoder.

For the generator and encoder, we use a ResNet architecture identical to the one found in prior work. We used the contractive penalty (first introduced in contractive autoencoders) on the encoder, gradient clipping on the discriminators, and no regularization on the generator. Batch norm was used on the generator, but not on the discriminator. We trained on 64 × 64 dimensional LSUN (BID10), CelebA, and Tiny Imagenet datasets.

A.10 IMAGE GENERATION

Figure 16: Samples generated by a) NS-GAN-CP, b) WGAN-GP, and c) mapping to two Gaussians, used to get the scores in TAB0. For every method, the samples are generated after 100 epochs and the models are the same. Samples from these three methods show no qualitative difference.

Here, we train a generator mapping to two Gaussians implicitly as described in Section A.8. Our results (Figure 16) show highly realistic images qualitatively competitive with other methods. In order to quantitatively compare our method to GANs, we trained a non-saturating GAN with contractive penalty (NS-GAN-CP) and a WGAN-GP with identical architectures and training procedures. Our results in TAB0 show that, while our method did not surpass NS-GAN-CP or WGAN-GP in our experiments, it came reasonably close.
We learn deep representation by maximizing mutual information, leveraging structure in the objective, and are able to compute with fully supervised classifiers with comparable architectures
1,466
scitldr
Estimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering. The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory (DFT), and then to interpret it using domain knowledge of human experts. However, this conventional approach is impractical for large molecular databases because DFT calculation requires huge computation, specifically O(n^4) time complexity w.r.t. the number of electrons in a molecule. Furthermore, the calculation must be interpreted by human experts to estimate the atomic importance in terms of the target molecular property. To tackle this problem, we present the first machine learning-based approach for atomic importance estimation. To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description. Our method provides an efficient, automated, and target-directed way to estimate the atomic importance without any domain knowledge of chemistry and physics.

In molecules, each atom plays its own role in manifesting the entire molecular properties, and estimating such atomic importance plays a key role in interpreting molecular systems. For these reasons, atomic importance estimation has been consistently studied in the scientific communities. However, estimating the atomic importance is one of the most challenging tasks in chemistry and quantum mechanics because the importance of each atom is comprehensively determined based on atomic properties, neighbor atoms, bonding types, and the target molecular property. The most common approach for estimating the atomic importance is to interpret the electronic structure using density-functional theory (DFT). In this approach, the atomic importance is estimated through three steps: 1) A human expert selects appropriate functional and basis sets for a given molecule to apply DFT; 2) The electronic structure of the molecule is calculated based on the DFT calculation; 3) The human expert estimates the atomic importance by interpreting the calculated electronic structure in terms of the target molecular property. Although some methods have been developed to estimate the relative contributions of atoms in molecules, their generality is typically limited to particular molecular properties. For this reason, DFT, which can generate a general description of the molecule, has been most widely used to interpret molecular systems and to reveal important atoms for a target molecular property. However, the conventional approach based on DFT has three fundamental limitations in efficiency, automation, and generality.

• Efficiency: As an example of the electronic structure computations, DFT calculation requires O(n^4) time complexity to compute the electronic structure, where n is the number of basis functions that describe electrons in the molecule. Generally, molecules have more electrons than atoms.

• Automation: DFT cannot automatically generate all target-specified physical properties in principle, so a human expert must manually select an additional calculation method to compute the target molecular property from the electronic distributions. That is, domain knowledge of human experts is necessarily required to estimate the atomic importance in terms of the target molecular property.

• Generality: For some molecular properties, the relationship between them and the electronic distributions is not clear.
Moreover, sometimes the estimation is impossible because the relationships between the molecular property and the molecular structure are not interpretable. Because of these limitations, estimating the atomic importance remains one of the most challenging problems in science and engineering fields such as physics, chemistry, pharmacy, and material engineering.

To overcome the limitations of the conventional approach in estimating the atomic importance, we propose the first machine learning-based approach. To this end, we propose a new concept of reverse self-attention and integrate it with graph neural networks. The self-attention mechanism was originally designed to determine important elements within the input data to accurately predict its corresponding target or label in natural language processing. Similarly, in graph neural networks, self-attention is used to determine important neighbor nodes within the input graph to generate more accurate node or graph embeddings. Our reverse self-attention is defined as the inverse of the self-attention to calculate how important a selected node is considered in the graph. For a given molecule and target property, the proposed estimation method selects the atom that has the largest reverse self-attention score as the most important atom. The proposed method estimates the target-directed atomic importance through two steps: 1) For the given molecular graphs and their corresponding target properties, a self-attention-based graph neural network is trained; 2) After the training, the reverse self-attention scores are calculated, and then the atomic importance is estimated according to the reverse self-attention scores. As shown in this estimation process, neither huge computation nor human experts in chemistry and physics are required. Thus, the proposed method provides an efficient and fully-automated way to estimate the atomic importance in terms of the target molecular property via target-aware training of the graph self-attention.

To validate the effectiveness of the proposed method, we conducted comprehensive experiments and evaluated the estimation performance using both quantitative and qualitative analyses. The contributions of this paper are summarized as:

• This paper first proposes a machine learning-based approach to estimate the atomic importance in the molecule.

• The proposed method drastically reduces the computational cost of atomic importance estimation from O(n^4) time complexity to the practical time complexity of graph-based deep learning.

• The proposed method provides a fully-automated and target-directed way to estimate the atomic importance.

• We comprehensively validated the effectiveness of the proposed method using both quantitative and qualitative evaluations. However, since neither a labeled dataset for atomic importance estimation nor a systematic way to quantitatively evaluate the estimation accuracy exists, we devised a systematic quantitative evaluation method and validated the effectiveness of the proposed method using it.

Before describing the atomic importance estimation based on the reverse self-attention, we briefly review two essential concepts for understanding the proposed method in this section: 1) graph-based molecular analysis; and 2) graph self-attention and the graph attention network. Graph-based molecular machine learning has attracted significant attention from both the scientific and machine learning communities.
It has shown state-of-the-art performance in various scientific applications such as molecular or crystal property prediction (Lu et al., 2019), molecular generation, and atomic reaction analysis.

Figure 1: Overall process of the graph-based molecular machine learning. In the molecular graph, three and two-dimensional vectors mean atom-features and bond-features, respectively.

In graph-based molecular machine learning, a molecule is represented as an undirected feature graph G = (V, A, X, U), where V is a set of atoms (nodes); A ∈ {0, 1}^{|V|×|V|} is an adjacency matrix indicating the existence of the atomic bonds (edges); X ∈ R^{|V|×d} is a d-dimensional atom-feature matrix; U ∈ R^{|B|×m} is an m-dimensional bond-feature matrix; and B is a set of atomic bonds. For a given labeled molecular dataset D = {(G_1, y_1), ..., (G_{|D|}, y_{|D|})}, the goal of graph-based molecular machine learning is to build a model f: G → y, where y can be a target value, a class label, or another molecular graph. In this paper, we will refer to atoms and bonds as nodes and edges, respectively.

Fig. 1 shows the overall process of graph-based molecular machine learning. First, molecular or crystal structures formatted as .xyz or .cif are converted into the molecular graph with the adjacency matrix A, node-feature matrix X, and edge-feature matrix U. Then, a graph neural network generates the graph-level embedding of the input molecular graph through the aggregation layers and predicts the corresponding target y. In an aggregation layer of the graph neural network, each node in the graph is converted into a node-embedding vector, and these vectors are stacked column-wise into a node-embedding matrix. The output node-embedding matrix of the k-th aggregation layer, H^{(k)}, is calculated by:

H^{(k)} = ψ(A, H^{(k−1)}; W^{(k)}),   (1)

where ψ is an aggregation function, and W^{(k)} is a trainable weight matrix of the k-th aggregation layer. Note that H^{(0)} is the input node-feature matrix X. For example, in a graph convolutional network (GCN), H^{(k)} is calculated by:

H^{(k)} = σ(D̃^{−1/2} Ã D̃^{−1/2} H^{(k−1)} W^{(k)}),   (2)

where Ã is an adjacency matrix containing self-loops, calculated by Ã = A + I, and D̃ is its diagonal degree matrix. After the node-embedding, the node-embedding matrix of the last aggregation layer is converted into the graph-level embedding vector by a readout function. For the implementation of the readout function, mean or max-based operations are commonly used. Finally, the target corresponding to the input graph is predicted by interpreting the output graph-level embedding vector through the fully-connected layers.

The attention and self-attention mechanisms were originally introduced in natural language processing to refer to other data and elements to improve prediction and classification accuracies, respectively. In graph neural networks based on the neighbor-node aggregation approach, such as GCN, the self-attention mechanism is used to calculate the importance of neighbor nodes in generating the node-embedding vectors. Graph attention network (GAT) is the first neural network to apply the self-attention mechanism in graph-based deep learning. By exploiting the self-attention mechanism, GAT showed highly improved prediction and classification accuracies. In the k-th aggregation layer of GAT, the attention coefficient between node i and its neighbor node j is defined by:

e_{ij}^{(k)} = f(V^{(k)} h_i^{(k−1)} ⊕ V^{(k)} h_j^{(k−1)}),   (3)

where f is a feedforward neural network; V^{(k)} is a trainable weight matrix; h_i^{(k−1)} is the embedding of node i from the previous layer; and ⊕ is a vector concatenation. Based on the attention coefficient, the attention score, α_{ij}^{(k)}, is calculated by:

α_{ij}^{(k)} = exp(e_{ij}^{(k)}) / Σ_{l∈N(i)} exp(e_{il}^{(k)}),   (4)

where N(i) is a set of neighbor nodes of node i in the graph.
Finally, in the k-th aggregation layer of GAT, the node-embedding vector of node i is calculated by:

h_i^{(k)} = σ( Σ_{j∈N(i)} α_{ij}^{(k)} V^{(k)} h_j^{(k−1)} ).   (5)

In GAT, the aggregation layer that generates the node-embeddings based on the graph self-attention is called a graph attention layer.

In this section, we explain our machine learning-based approach for estimating the atomic importance. To devise the machine learning-based importance estimator, we define a new concept of reverse self-attention and integrate it with GAT. The self-attention mechanism in the molecular graph provides the numerical importance of each neighbor atom in terms of the target molecular property. To exploit the concept of the self-attention for estimating the atomic importance, the reverse self-attention of node i in the k-th graph attention layer is defined as the inverse of the self-attention:

ρ_i^{(k)} = Σ_{j∈N(i)} α_{ji}^{(k)}.   (6)

That is, the reverse self-attention of node i means how much attention node i receives from its neighbor nodes. Fig. 2 shows an example of the reverse self-attention in a graph with five nodes. In this example, the reverse self-attention of n_1 is calculated as the sum α_21 + α_31 + α_41. As shown in the definition of the reverse self-attention, its additional time and space complexities are negligible because calculating the reverse self-attention is just the sum of pre-computed self-attention scores at prediction or classification time.

In GAT, the self-attention scores of the k-th graph attention layer are calculated based on the node-embeddings of the (k − 1)-th graph attention layer. That is, the self-attention scores of the first layer are calculated by considering 1-hop neighbor nodes, and the second layer's self-attention scores are determined by considering 2-hop neighbor nodes. Thus, the reverse self-attention in the first graph attention layer indicates the importance of each atom itself, and the second layer's reverse self-attention means the importance of each atom group that consists of 1-hop neighbor atoms, not an atom itself. These interpretations can be extended to the atomic importance of the group of atoms that consists of k-hop neighbor atoms by the reverse self-attention in the (k + 1)-th graph attention layer.

Figure 3: Overall process of building a machine learning-based atomic importance estimator with the reverse self-attention mechanism and selecting an important atom or group of atoms using the estimator.

This section explains the way to build a fully-automated and target-directed atomic importance estimator based on the reverse self-attention and GAT. We call this estimator Machine Intelligence-based Atomic Importance Estimator (MIAIE). Fig. 3 shows the overall process of building MIAIE and estimating the target-directed atomic importance via MIAIE. The atomic importance estimation using MIAIE consists of four steps (a code sketch of Step 3 follows this list):

• Step 1: Train GAT for regression or classification on the molecular dataset with the target molecular property. In this training process, GAT automatically learns the self-attention scores in terms of predicting the target molecular property.

• Step 2: Predict the target property of the interesting molecule and extract the self-attention scores for each graph attention layer.

• Step 3: Calculate the reverse self-attention scores using Eq. 6 and sort them for each graph attention layer.

• Step 4: Select the atom or group of atoms that has the largest reverse self-attention score as the most important element in the molecule in terms of the target property.
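Below is a minimal PyTorch sketch of Step 3, under the assumption that per-edge attention scores are available from the trained model (for example, DGL's GATConv can return them when called with get_attention=True); the helper name and tensor handling are illustrative, not the authors' released code.

```python
import torch

def reverse_self_attention(graph, edge_att):
    """rho_i = sum_j alpha_ji: total attention node i receives from its neighbors.

    graph:    a DGL graph in which edge (j -> i) carries alpha_ij, the attention
              that destination node i pays to source node j.
    edge_att: (num_edges,) per-edge attention scores from one graph attention layer.
    """
    src, _ = graph.edges()
    rho = torch.zeros(graph.num_nodes(), device=edge_att.device)
    rho.index_add_(0, src, edge_att)  # credit each edge's attention to its source node
    return rho  # rho.argmax() gives the most important atom for this layer

# Hypothetical usage with DGL's GATConv:
#   h, att = gat_layer(g, feats, get_attention=True)   # att: (num_edges, num_heads, 1)
#   rho = reverse_self_attention(g, att.mean(dim=1).squeeze(-1))
```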
As shown in the overall process of estimating the atomic importance using MIAIE, the estimation process is fully-automated and does not require any domain knowledge of human experts in chemistry and physics. MIAIE is also incomparably more efficient than the conventional approach using DFT calculations with O(n^4) time complexity, because it uses only graph neural networks to describe the molecules, and the time complexity of the estimation process (steps 3 and 4) is negligible.

To accurately validate the effectiveness of MIAIE, we conducted both quantitative and qualitative evaluations on two well-known molecular datasets. However, to the best of our knowledge, neither a labeled dataset for atomic importance estimation nor a systematic way to evaluate the performance of an atomic importance estimator exists. For this reason, we devised a validation method to quantitatively evaluate the performance of atomic importance estimators. We will explain this validation method in Section 4.3. We used MolView 1 to visualize the estimation results of MIAIE.

For the experiments, we used two well-known molecular datasets: Quantum Mechanics 9 (QM9) and Estimated SOLubility (ESOL). These datasets were randomly split into 90% train and 10% test subsets because test datasets for them are not provided. In the experiments, we validated the effectiveness of MIAIE by splitting train and test datasets to evaluate the generalization capability as well as the atomic importance estimation accuracy. However, we can use MIAIE by fitting it to a given dataset only, without considering the generalization capability (for example, training without L2-regularization, dropout, or batch normalization), if our analysis is only focused on specific molecular datasets containing the interesting molecules.

To quantitatively evaluate the effectiveness of MIAIE, the QM9 dataset is used. It is a subset of the GDB-17 database, in which the structural information of the molecules is given in Cartesian coordinates. The QM9 dataset contains 133,886 organic molecules and several target molecular properties such as the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and their gap (HOMO-LUMO gap). In the experiments, we used the HOMO-LUMO gap as the target molecular property because it is an essential property describing molecular systems and one of the properties that is difficult to interpret from the molecular structure. Molecules in the QM9 dataset have HOMO-LUMO gaps of 0.0245∼0.6221.

For the qualitative analysis, we used the ESOL dataset, which contains aqueous solubility as a target property, because the important atoms in terms of aqueous solubility can be easily determined by a human expert. The ESOL dataset was originally published with the aqueous solubility of 2,874 compounds, but a smaller subset of 1,128 compounds is commonly used in recent chemistry work, and we also used this subset of ESOL for the experiment. Unlike the QM9 dataset, the structural information of the molecules in ESOL is provided by the SMILES representation, which does not explicitly represent hydrogen. For this reason, hydrogen in the molecule is ignored in estimating the atomic importance.

We implemented MIAIE using PyTorch 2 and the PyTorch-based Deep Graph Library (DGL) 3. GAT in MIAIE is also implemented based on the neural network modules of DGL. In the experiments, we used a GAT with two graph attention layers and two fully-connected layers. To generate the graph-level embedding, a mean-based readout function is applied (a sketch of this architecture follows).
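A minimal sketch of the described architecture with DGL's building blocks; the hidden sizes, the single attention head, and the module name are our illustrative assumptions, not the authors' exact configuration.

```python
import dgl
import torch
import torch.nn as nn
from dgl.nn import GATConv

class GATRegressor(nn.Module):
    """Two graph attention layers, a mean-based readout, and two fully-connected layers."""
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, num_heads=1)
        self.gat2 = GATConv(hidden_dim, hidden_dim, num_heads=1)
        self.fc = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                nn.Linear(hidden_dim, 1))

    def forward(self, g, atom_feats):
        h = torch.relu(self.gat1(g, atom_feats).flatten(1))  # (num_atoms, hidden_dim)
        h = torch.relu(self.gat2(g, h).flatten(1))
        g.ndata['h'] = h
        hg = dgl.mean_nodes(g, 'h')                          # mean readout per molecule
        return self.fc(hg).squeeze(-1)                       # e.g., predicted HOMO-LUMO gap
```

The paper additionally concatenates molecular weight and ring-count features to the graph-level embedding before the fully-connected layers, which would only change the input dimension of the first linear layer here.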
Mean squared error (MSE) was used as the objective function to train GAT, and L2-regularization with a 0.001 regularization coefficient was applied to improve the generalization capability of the model. As a training algorithm, Adam SGD was used with an initial learning rate of 0.001 to fit the model parameters of GAT. To accelerate the training and improve the prediction performance of GAT, we concatenated additional molecular features about molecular weight and the number of rings to the graph-level embedding vector. The source code of MIAIE and the experiment scripts are available at http://---------------------- (open after the review).

To quantitatively evaluate the effectiveness of MIAIE, we assume that if the selected atom or group of atoms is truly important in terms of the target property, then the gap between the target properties of the original molecule and its selected sub-molecule (atom or group of atoms) will be small.

Figure 4: An original molecule made of the carbon ring and its selected sub-molecule that consists of 1-hop neighbor atoms. Note that hydrogens are automatically attached to make a valid molecular structure. The group of atoms that is marked by a red mask means the most important atomic group estimated by MIAIE. White node: hydrogen (H); Gray node: carbon (C).

Based on this assumption, we define the atomic importance estimation error (AIEE) as:

AIEE = (1 / |D|) Σ_{G_i ∈ D} |G_i^{DFT} − G_i′^{DFT}| / size(G_i),   (7)

where G_i′ is a sub-molecule of G_i selected by an atomic importance estimator; G^{DFT} means the value of the target property of G computed by DFT calculations; and size(G_i) is the number of atoms in G_i. Since DFT can accurately calculate the value of molecular properties in most cases, even though it requires huge computation, we used DFT to calculate the value of the target molecular property of the unknown G_i′. In AIEE, the error of each molecule is divided by the number of atoms in the molecule. This is an essential part of AIEE for accurately measuring the target property gaps, because the larger the size of the original molecule, the greater the probability of increasing the gap between the properties of the original molecule and its smaller sub-molecule with a fixed size. A code sketch of this metric follows below.

In this experiment, we quantitatively evaluated the effectiveness of MIAIE using AIEE in Eq. 7 on the QM9 dataset. We estimated the atomic importance using MIAIE with the reverse self-attention of the second graph attention layer for 13,389 test molecules in terms of the HOMO-LUMO gap and observed an AIEE of 0.00544. This error is relatively small when considering the fact that the molecular properties of the original molecule cannot be preserved completely in sub-molecules. However, we observed a relatively large error of 0.01007, compared to the mean error of 0.00544, on the molecule made of the carbon ring shown in Fig. 4. This large error is caused by a chemical characteristic of the HOMO-LUMO gap: in ring-shaped molecules with the same atoms, such as Fig. 4-(a), it is determined by the overall electronic distribution. Thus, this large error in the HOMO-LUMO gap is inevitable when splitting a ring-shaped molecule with the same atoms into sub-molecules, because each atom has similar importance. It is a limitation of our validation method for the quantitative evaluation. One possible future work is to modify the estimation step of MIAIE to answer "each atom has similar importance" instead of selecting a single important atom for molecules like Fig. 4-(a).
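A minimal sketch of the AIEE computation, assuming the DFT-computed property values for each original molecule and its selected sub-molecule are already available as parallel lists:

```python
def aiee(dft_full, dft_sub, n_atoms):
    """AIEE: per-molecule absolute property gap between each original molecule and
    its selected sub-molecule, normalized by molecule size, averaged over the set."""
    errors = [abs(y - y_sub) / n for y, y_sub, n in zip(dft_full, dft_sub, n_atoms)]
    return sum(errors) / len(errors)

# e.g., aiee(dft_full=[0.31, 0.28], dft_sub=[0.29, 0.25], n_atoms=[12, 9])
```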
For the ESOL dataset, we conducted a qualitative evaluation, because the important atom or group of atoms in terms of aqueous solubility can be easily determined by human experts. Furthermore, we report the normalized atomic importance of the most important group of atoms selected by MIAIE. Due to space limitations, we present only three natural and two interesting estimation results in this paper. The three natural results can be easily justified: 1) since nitrogen and oxygen exist only in the selected sub-molecule, the estimation result for 2-Nitropropane is natural; 2) similar to the result for 2-Nitropropane, MIAIE accurately selected the group of atoms containing the largest number of nitrogen and oxygen atoms in Metronidazole; 3) in Indoline, only three sub-molecules consisting of 1-hop neighbor atoms can contain the nitrogen; among these sub-molecules, the selected group of atoms has the largest electronic charge, which improves its reactivity to water, so the estimation result of MIAIE is also chemically reasonable.

The two interesting results are (d) and (e). Benfluralin in Fig. 5-(d) has a low aqueous solubility of -5.53 in log scale because a C-CF_3 group is attached. Interestingly, although MIAIE was never given this chemical observation about the relationship between the C-CF_3 group and aqueous solubility, it selected exactly the C-CF_3 group as the most important group of atoms in terms of aqueous solubility. Another interesting result is Coumaphos in Fig. 5-(e). In this molecule, the C-CO_2 group was selected as the most important group of atoms rather than the PO_3S group, which contains the largest number of oxygen atoms. However, this estimation is chemically reasonable because the carbons surrounding the PO_3S group prevent it from reacting with water. In these estimation results, we observed that MIAIE can estimate the atomic importance by considering complex chemical rules related to the functional properties of the atomic groups and the overall environment of the molecules, even though it is trained without any domain knowledge of chemistry and quantum mechanics.

To further check the correctness of MIAIE's estimation, we also measured the atomic importance on molecules that have a symmetric structure, because structurally equivalent groups of atoms must have the same atomic importance. Fig. 6 shows the estimation results of MIAIE for molecules containing symmetric structures. Benfluralin has a locally symmetric structure in its two C-NO_2 groups, and MIAIE correctly estimated the atomic importance of the two C-NO_2 groups to be equal. Similarly, the C_24O_4 molecule is globally symmetric throughout its molecular structure; for it, MIAIE correctly estimated the two C-CO_2 groups as the most important group of atoms, with the same score.

(Figure 6: two original molecules with symmetric structures and their selected sub-molecules in terms of aqueous solubility. The group of atoms marked by a blue mask is the second most important atomic group estimated by MIAIE.)

A sketch of the sub-molecule selection step used throughout these experiments is given below.
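For concreteness, here is a hedged sketch of the selection step: take the atom with the highest reverse self-attention score together with its k-hop neighborhood. The exact selection rule is our assumption (the figures show 1-hop groups); hydrogens would then be attached to make the selected group a valid molecule.

```python
import torch

def select_important_group(g, scores, k=1):
    """Return atom indices of the k-hop neighborhood of the top-scoring atom.

    g is a DGL molecular graph; scores holds per-atom reverse self-attention.
    """
    center = int(torch.argmax(scores))
    group, frontier = {center}, {center}
    for _ in range(k):                     # breadth-first k-hop expansion
        nxt = set()
        for u in frontier:
            nxt.update(g.successors(u).tolist())
            nxt.update(g.predecessors(u).tolist())
        group |= nxt
        frontier = nxt
    return sorted(group)
```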
This paper is the first to exploit a machine learning approach to estimating atomic importance in molecules. To this end, reverse self-attention was proposed and integrated with a graph attention network. The proposed method is efficient and fully automated. Furthermore, it can estimate atomic importance in terms of a given target molecular property without human experts. However, the proposed method can only estimate the importance of groups of atoms consisting of k-hop neighbors, even though some important groups of atoms may have arbitrary shapes, such as rings and bars. As future work, it is necessary to modify the proposed method to estimate the importance of atomic groups with arbitrary shapes.

Fig. 7 shows original molecules and their selected sub-molecules with extremely small errors. As the results show, even though the molecules contain carbon rings, the important group of atoms was correctly captured, because these molecules are characterized by nitrogen or oxygen. On the other hand, Fig. 8 shows original molecules and their sub-molecules with extremely large errors. Chemically, double bonds play an important role in determining the HOMO-LUMO gap. However, as the results in Fig. 8 show, sub-structures that do not contain the double bonds were selected as the important group of atoms. Thus, we need to develop a molecular descriptor that emphasizes bond features more strongly.
We first propose a fully-automated and target-directed atomic importance estimator based on graph neural networks and a new concept of reverse self-attention.
1,467
scitldr
Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special-purpose output layer, allow us to keep the network output orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from unlabeled data using a Siamese network. Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported for both the MNIST and Reuters datasets.

Discovering clusters in unlabeled data is a task of significant scientific and practical value. With technological progress, images, texts, and other types of data are acquired in large numbers. Their labeling, however, is often expensive, tedious, or requires expert knowledge. Clustering techniques provide useful tools to analyze such data and to reveal its underlying structure. Spectral clustering (BID20; BID16; BID22) is a leading and highly popular clustering algorithm. It works by embedding the data in the eigenspace of the Laplacian matrix, derived from the pairwise similarities between data points, and applying k-means to this representation to obtain the clusters. Several properties make spectral clustering appealing: First, its embedding optimizes a natural cost function, minimizing pairwise distances between similar data points; moreover, this optimal embedding can be found analytically. Second, spectral clustering variants arise as relaxations of graph balanced-cut problems (BID22). Third, spectral clustering was shown to outperform other popular clustering algorithms such as k-means (BID22), arguably due to its ability to handle non-convex clusters. Finally, it has a solid probabilistic interpretation, since the Euclidean distance in the embedding space is equal to a diffusion distance, which, informally, measures the time it takes probability mass to transfer between points, via all the other points in the dataset (BID15; BID5).

(Figure 1: results of our approach (top) and of DCN, VaDE, DEPICT and IMSAT (bottom) on simulated datasets in 2D and 3D. Our approach successfully finds these non-convex clusters, whereas the competing algorithms fail on all five examples. The full set of results for these algorithms is shown in FIG4 in Appendix A.)

While the spectral embedding of data points can be achieved by a simple eigendecomposition of their graph Laplacian matrix, with large datasets direct computation of eigenvectors may be prohibitive.
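For reference, the following is a minimal sketch of this classical pipeline: eigendecomposition of the unnormalized Laplacian L = D − W followed by k-means on the embedding. The direct eigendecomposition is exactly the step that becomes prohibitive for large n. We write the code sketches in this paper in Python; all names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Classical spectral clustering given a symmetric affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # unnormalized graph Laplacian
    # Eigenvectors corresponding to the k smallest eigenvalues.
    _, Y = eigh(L, subset_by_index=[0, k - 1])  # the step that does not scale
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)
```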
Moreover, generalizing a spectral embedding to unseen data points, a task commonly referred to as out-of-sample extension (OOSE), is non-trivial; see, for example, (BID1; BID2; BID9; BID6).

In this work we introduce SpectralNet, a deep learning approach to spectral clustering that addresses the scalability and OOSE problems pointed out above. Specifically, SpectralNet is trained in a stochastic fashion, which allows it to scale. Moreover, once trained, it provides a function, implemented as a feed-forward network, that maps each input data point to its spectral embedding coordinates. This map can easily be applied to new test data. Unlike optimization of standard deep learning models, SpectralNet is trained using constrained optimization, where the constraint (orthogonality of the net outputs) is enforced by adding a linear layer whose weights are set by the QR decomposition of its inputs. In addition, as good affinity functions are crucial for the success of spectral clustering, rather than using the common Euclidean distance to compute Gaussian affinities, we show how Siamese networks can be trained on the given unlabeled data to learn more informative pairwise distances and consequently significantly improve the quality of the clustering. Further improvement can be achieved if our network is applied to transformed data obtained by an autoencoder (AE). On the theoretical front, we utilize VC-dimension theory to derive a lower bound on the size of neural networks that compute spectral clustering. Our experiments indicate that our network indeed approximates the Laplacian eigenvectors well, allowing it to cluster challenging non-convex point sets, which recent deep-network-based methods fail to handle; see examples in Figure 1. Finally, SpectralNet achieves competitive performance on the MNIST handwritten digit dataset and state-of-the-art results on the Reuters document dataset, whose size makes standard spectral clustering inapplicable.

Recent deep learning approaches to clustering largely attempt to learn a code for the input that is amenable to clustering according to either the k-means or mixture-of-Gaussians clustering models. DCN (BID24) directly optimizes a loss composed of a reconstruction term (for the code) and the k-means functional. DEC (BID23) iteratively updates a target distribution to sharpen cluster associations. DEPICT adds a regularization term that prefers balanced clusters. All three methods are pre-trained as autoencoders, while DEPICT also initializes its target distribution using k-means or other standard clustering algorithms. Several other recent approaches rely on a variational autoencoder that utilizes a Gaussian mixture prior; see, for example, VaDE (BID27) and GMVAE (BID7). IMSAT (BID11) is based on data augmentation, where the net is trained to maximize the mutual information between inputs and predicted clusters, while regularizing the net so that the cluster assignment of original data points will be consistent with the assignment of augmented points. Different approaches are proposed by BID4, who use a deep belief net followed by non-parametric maximum margin clustering (NMMC), and by BID25, who introduce a recurrent-agglomerative framework for image clustering. While these approaches achieve accurate clustering on standard datasets (such as MNIST and Reuters), the use of the k-means criterion, as well as the Gaussian mixture prior, seems to introduce an implicit bias towards the formation of clusters with convex shapes.
This limitation seems to hold even in code space. This bias is demonstrated in FIG0 (bottom), which shows the failure of several of the above approaches on relatively simple clustering tasks. In contrast, as indicated in FIG0 (top), our SpectralNet approach appears to be less vulnerable to such bias. The full set of runs can be found in Appendix A.

In the context of spectral clustering, BID21 learn an autoencoder that maps the rows of a graph Laplacian matrix onto the corresponding spectral embedding, and then use k-means in code space to cluster the underlying data. Unlike our work, which learns to map an input data point to its spectral embedding, Tian et al.'s network takes as input an entire row of the graph Laplacian, and therefore OOSE is impractical, as it requires computing the affinities of each new data point to all the training data. Also of interest is the kernel spectral method by BID0, which allows for out-of-sample extension and handles large datasets through smart sampling (but does not use a neural network). BID26 address the problem of 3D shape segmentation. Their work, which focuses on learning graph convolutions, uses a graph spectral embedding through eigenvector decomposition, which is not learned. In addition, we enforce orthogonalization stochastically through a constraint layer, while they attempt to learn orthogonalized functional maps by adding an orthogonalization term to the loss function, which involves non-trivial balancing between two loss components. Other deep learning works use a spectral approach in the context of supervised learning. BID12 apply supervised metric learning, showing that their method approximates the eigenvectors of a 0-1 affinity matrix constructed from the true labels. BID13 trained a network to compute graph Laplacian eigenvectors using supervised regression. Their approach, however, requires the true eigenvectors for training, and hence does not easily scale to large datasets. Finally, a number of papers showed that stochastic gradient descent can be used effectively to compute the principal components of covariance matrices; see, e.g., BID19 and references therein. The setup in these papers assumes that in each iteration a noisy estimate of the entire input matrix is provided. In contrast, in our work we use in each iteration only a small submatrix of the affinity matrix, corresponding to a small minibatch. In future work, we plan to examine how these algorithms can be adapted to improve the convergence rate of our proposed network.

In this section we present our proposed approach, describe its key components, and explain its connection to spectral clustering. Consider the following standard clustering setup: let X = {x_1, ..., x_n} ⊆ R^d denote a collection of unlabeled data points drawn from some unknown distribution D; given a target number of clusters k and a distance measure between points, the goal is to learn a similarity measure between points in X and use it to learn a map that assigns each of x_1, ..., x_n to one of k possible clusters, so that similar points tend to be grouped in the same cluster. As in classification tasks, we further aim to use the learned map to determine the cluster assignments of new, yet unseen, points drawn from D. Such out-of-sample extension is based solely on the learned map, and requires neither computation of similarities between the new points and the training points nor re-clustering of combined data. In this work we propose SpectralNet, a neural network approach for spectral clustering.
Once trained, SpectralNet computes a map F_θ : R^d → R^k and a cluster assignment function c : R^k → {1, ..., k}. It maps each input point x to an output y = F_θ(x) and provides its cluster assignment c(y). The spectral map F_θ is implemented using a neural network, and the parameter vector θ denotes the network weights. The training of SpectralNet consists of three components: (i) unsupervised learning of an affinity, given the input distance measure, via a Siamese network (see Section 3.2); (ii) unsupervised learning of the map F_θ by optimizing a spectral clustering objective while enforcing orthogonality (see Section 3.1); (iii) learning the cluster assignments, by k-means clustering in the embedded space.

In this section we describe the main learning step in SpectralNet, component (ii) above. To this end, let w : R^d × R^d → [0, ∞) be a symmetric affinity function, such that w(x, x') expresses the similarity between x and x'. Given w, we would like points x, x' which are similar to each other (i.e., with large w(x, x')) to be embedded close to each other. Hence, we define the loss

L_SpectralNet(θ) = E[ w(x, x') ||y − y'||^2 ],

where y, y' ∈ R^k, the expectation is taken with respect to pairs of i.i.d. elements (x, x') drawn from D, and θ denotes the parameters of the map y = F_θ(x). Clearly, the loss L_SpectralNet(θ) can be minimized by mapping all points to the same output vector (F_θ(x) = y_0 for all x). To prevent this, we require that the outputs be orthonormal in expectation with respect to D, i.e.,

E[ y y^T ] = I_{k×k}.

As the distribution D is unknown, we replace the expectations in the loss and in the constraint by their empirical analogues. Furthermore, we perform the optimization in a stochastic fashion. Specifically, at each iteration we randomly sample a minibatch of m samples, which without loss of generality we denote x_1, ..., x_m ∈ X, and organize them in an m × d matrix X whose ith row contains x_i^T. We then minimize the loss

L_SpectralNet(θ) = (1/m^2) Σ_{i,j=1}^{m} W_{ij} ||y_i − y_j||^2,

where y_i = F_θ(x_i) and W is an m × m matrix such that W_{ij} = w(x_i, x_j). The analogue of the orthonormality constraint for a small minibatch is

(1/m) Y^T Y = I_{k×k},

where Y is the m × k matrix of outputs whose ith row is y_i^T. We implement the map F_θ as a general neural network whose last layer enforces the orthogonality constraint. This layer gets its input from k units and acts as a linear layer with k outputs, where the weights are set to orthogonalize the output Y for the minibatch X. Let Ỹ denote the m × k matrix containing the inputs to this layer for X (i.e., the outputs of F_θ over the minibatch before orthogonalization). A linear map that orthogonalizes the columns of Ỹ is computed through its QR decomposition. Specifically, for any matrix A such that A^T A is full rank, one can obtain the QR decomposition via the Cholesky decomposition A^T A = L L^T, where L is a lower triangular matrix, and then set Q = A (L^{-1})^T. This is verified in Appendix B. Therefore, in order to orthogonalize Ỹ, the last layer multiplies Ỹ from the right by √m (L̃^{-1})^T, where L̃ is obtained from the Cholesky decomposition of Ỹ^T Ỹ, and the √m factor is needed to satisfy the minibatch constraint above.

We train this spectral map in a coordinate descent fashion, where we alternate between orthogonalization and gradient steps. Each of these steps uses a different minibatch (possibly of different sizes), sampled uniformly from the training set X. In each orthogonalization step we use the QR decomposition to tune the weights of the last layer. In each gradient step we tune the remaining weights using standard backpropagation.
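The following sketch shows these two computations, the minibatch spectral loss and the Cholesky-based orthogonalization weights, in PyTorch (the implementation language and the function names are our own assumptions).

```python
import torch

def orthonorm_weights(Y_tilde):
    """Weights sqrt(m) * (L^-1)^T of the orthogonalization layer, from the
    Cholesky factorization Y_tilde^T Y_tilde = L L^T of the pre-output
    activations Y_tilde (an m x k matrix)."""
    m = Y_tilde.shape[0]
    L = torch.linalg.cholesky(Y_tilde.T @ Y_tilde)
    return (m ** 0.5) * torch.linalg.inv(L).T

def spectralnet_loss(Y, W):
    """(1/m^2) * sum_ij W_ij * ||y_i - y_j||^2 over a minibatch."""
    m = Y.shape[0]
    sq_dists = torch.cdist(Y, Y) ** 2
    return (W * sq_dists).sum() / m ** 2
```

In a training step, orthonorm_weights would be assigned (without gradient tracking) to the last linear layer, after which a fresh minibatch is used for the gradient step on spectralnet_loss.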
Once SpectralNet is trained, all the weights are frozen, including those of the last layer, which simply acts as a linear layer. Finally, to obtain the cluster assignments c_1, ..., c_n, we propagate x_1, ..., x_n through it to obtain the embeddings y_1, ..., y_n ∈ R^k, and perform k-means on them, obtaining k cluster centers, as in standard spectral clustering. These algorithmic steps are summarized below in Algorithm 1 in Section 3.3.

The loss can also be written as

L_SpectralNet(θ) = (2/m^2) trace(Y^T (D − W) Y),

where D is the diagonal matrix with D_{ii} = Σ_j W_{ij}. The symmetric, positive semidefinite matrix D − W forms the (unnormalized) graph Laplacian of the minibatch x_1, ..., x_m. For k = 1 the loss is minimized when y is the eigenvector of D − W corresponding to the smallest eigenvalue. Similarly, for general k, under the orthonormality constraint, the minimum is attained when the column space of Y is the subspace spanned by the eigenvectors corresponding to the smallest k eigenvalues of D − W. Note that this subspace includes the constant vector, whose inclusion does not affect the final cluster assignment. Hence, SpectralNet approximates spectral clustering, where the main differences are that the training is done in a stochastic fashion, and that the orthogonality constraint with respect to the full dataset X holds only approximately. SpectralNet therefore trades accuracy for scalability and generalization ability. Specifically, while its outputs are an approximation of the true eigenvectors, the stochastic training enables its scalability and thus allows one to cluster large datasets that are prohibitive for standard spectral clustering. Moreover, once trained, SpectralNet provides a parametric function whose image for the training points is (approximately) the set of eigenvectors of the graph Laplacian. This function can now naturally embed new test points, which were not present at training time. Our experiments with the MNIST dataset (Section 5) indicate that the outputs of SpectralNet closely approximate the true eigenvectors. Finally, as in common spectral clustering applications, cluster assignments are determined by applying k-means to the embeddings y_1, ..., y_n. We note that the k-means step can be replaced by other clustering algorithms. Our preference for k-means is based on the interpretation (for normalized Laplacian matrices) of the Euclidean distance in the embedding space as a diffusion distance in the input space (BID15; BID5). In spectral clustering, the symmetric normalized graph Laplacian L_sym = I − D^{-1/2} W D^{-1/2} can be used as an alternative to the unnormalized Laplacian D − W; in order to train SpectralNet with the normalized graph Laplacian, the loss function above should be replaced by its normalized analogue.

Batch size considerations: Typically, in classification or regression, the loss is a sum over the losses of individual examples. In contrast, the SpectralNet loss is summed over pairs of points, and each summand describes relationships between data points. This relation is encoded by the full n × n affinity matrix W_full (which we never compute explicitly). The minibatch size m should therefore be sufficiently large to capture the structure of the data. For this reason, it is also highly important that minibatches be sampled at random from the entire dataset at each step, and not be fixed across epochs. When the minibatches are fixed, the knowledge of W_full is reduced to a (possibly permuted) diagonal sequence of m × m blocks, thus ignoring many of the entries of W_full.
In addition, while the output layer orthogonalizes Ỹ, we do not have any guarantees on how well it orthogonalizes other random minibatches. However, in our experiments we observed that if m is large enough, it approximately orthogonalizes other batches as well, and its weights stabilize as training progresses. Therefore, to train SpectralNet, we use larger minibatches compared to common choices made by practitioners in the context of classification. In our experiments we use minibatches of size 1024 for MNIST and 2048 for Reuters, re-sampled randomly at every step.

Choosing a good affinity measure is crucial to the success of spectral clustering. In many applications, practitioners use an affinity measure that is positive for a set of nearest-neighbor pairs, combined with a Gaussian kernel with some scale σ > 0, e.g.,

W_{ij} = exp(−||x_i − x_j||^2 / (2σ^2)) if x_j is among the nearest neighbors of x_i, and 0 otherwise,

where one typically symmetrizes W, for example, by setting W ← (W + W^T)/2. Euclidean distance may be an overly simplistic measure of similarity; seeking methods that can capture more complex similarity relations might turn out advantageous. Siamese nets (BID10; BID17) are trained to learn affinity relations between data points; we empirically found that the unsupervised application of a Siamese net to determine the distances often improves the quality of the clustering.

Siamese nets are typically trained on a collection of similar (positive) and dissimilar (negative) pairs of data points. When labeled data are available, such pairs can be chosen based on label information (i.e., pairs of points with the same label are considered positive, while pairs of points with different labels are considered negative). Here we focus on datasets that are unlabeled. In this case we can learn the affinities directly from Euclidean proximity or from graph distance, e.g., by "labeling" points x_i, x_j positive if ||x_i − x_j|| is small and negative otherwise. In our experiments, we construct positive pairs from the nearest neighbors of each point. Negative pairs are constructed from points with larger distances. This Siamese network, therefore, is trained to learn an adaptive nearest-neighbor metric. A Siamese net maps every data point x_i into an embedding z_i = G_{θ_siamese}(x_i) in some space. The net is typically trained to minimize the contrastive loss, defined for a pair (x_i, x_j) as

L_siamese = ||z_i − z_j||^2 for a positive pair, and max(c − ||z_i − z_j||, 0)^2 for a negative pair,

where c is a margin (typically set to 1). Once the Siamese net is trained, we use it to define a batch affinity matrix W for the training of SpectralNet, by replacing the Euclidean distance ||x_i − x_j|| in the Gaussian affinity above with ||z_i − z_j||. Remarkably, despite being trained in an unsupervised fashion on a training set constructed from relatively naive nearest-neighbor relations, in Section 5 we show that affinities that use the Siamese distances yield dramatically improved clustering quality over affinities that use Euclidean distances. This implies that unsupervised training of Siamese nets can lead to learning useful and rich affinity relations; a sketch of the contrastive loss is given below.
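A minimal sketch of this contrastive loss, assuming batched embedding pairs and binary pair labels, follows.

```python
import torch

def contrastive_loss(z_i, z_j, is_positive, margin=1.0):
    """is_positive: 1 for neighbor (positive) pairs, 0 for negative pairs."""
    dist = torch.norm(z_i - z_j, dim=1)
    pos = dist ** 2                                  # pull positives together
    neg = torch.clamp(margin - dist, min=0.0) ** 2   # push negatives apart
    return torch.where(is_positive.bool(), pos, neg).mean()
```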
Our end-to-end training approach is summarized in Algorithm 1:

Algorithm 1: SpectralNet training
Input: X ⊆ R^d, number of clusters k, batch size m
Output: embeddings y_1, ..., y_n ∈ R^k, cluster assignments c_1, ..., c_n ∈ {1, ..., k}
  Construct a training set of positive and negative pairs for the Siamese network;
  Train the Siamese network;
  Randomly initialize the network weights θ;
  while L_SpectralNet(θ) not converged do
    Orthogonalization step:
      Sample a random minibatch X of size m;
      Forward propagate X and compute the inputs Ỹ to the orthogonalization layer;
      Compute the Cholesky factorization L L^T = Ỹ^T Ỹ;
      Set the weights of the orthogonalization layer to √m (L^{-1})^T;
    Gradient step:
      Sample a random minibatch x_1, ..., x_m;
      Compute the m × m affinity matrix W using the Siamese net;
      Forward propagate x_1, ..., x_m to get y_1, ..., y_m;
      Compute the loss (unnormalized or normalized);
      Use the gradient of L_SpectralNet(θ) to tune all weights of F_θ, except those of the output layer;
  end
  Forward propagate x_1, ..., x_n and obtain the F_θ outputs y_1, ..., y_n;
  Run k-means on y_1, ..., y_n to determine cluster centers;

Once SpectralNet is trained, computing the embeddings of new test points (i.e., out-of-sample extension) and their cluster assignments is straightforward: we simply propagate each test point x_i through the network F_θ to obtain its embedding y_i, and assign the point to its nearest centroid, where the centroids were computed using k-means on the training data in the last line of Algorithm 1.

Given a dataset X, one can either apply SpectralNet in the original input space or in a code space (obtained, for example, by an autoencoder). A code space representation is typically lower-dimensional and often contains less nuisance information (i.e., information on which an appropriate similarity measure should not depend). Following BID24, BID23, BID27 and others, we empirically observed that SpectralNet performs best in code space. Unlike these works, which use an autoencoder as an initialization for their clustering networks, we use the code as our data representation and apply SpectralNet directly in that space (i.e., we do not change the code space while training SpectralNet). In our experiments, we use code spaces obtained from publicly available autoencoders trained by BID27 on the MNIST and Reuters datasets.

Our proposed SpectralNet not only determines cluster assignments in training, as clustering algorithms commonly do, but also produces a map that can generalize to unseen data points at test time. Given a training set with n points, it is thus natural to ask how large such a network should be to represent this spectral map. The theory of VC dimension can provide useful worst-case bounds for this size. In this section, we use VC dimension theory to study the minimal size a neural network should have in order to compute spectral clustering for k = 2. Specifically, we consider the class of functions that map all training points to binary values, determined by thresholding at zero the eigenvector of the graph Laplacian with the second smallest eigenvalue. We denote this class of binary classifiers F_n^{spectral clustering}. Note that with k = 2, k-means can be replaced by thresholding of the second smallest eigenvector, albeit not necessarily at zero. We are interested in the minimal number of weights and neurons required to allow the net to compute such functions, assuming the affinities decay exponentially with the Euclidean distance. We do so by studying the VC dimension of function classes obtained by performing spectral clustering on n points in arbitrary Euclidean spaces R^d, with d ≥ 3. We will make no assumption on the underlying distribution of the points.
In the main result of this section, we prove a lower bound on the VC dimension of spectral clustering, which is linear in the number of points n. In contrast, the VC dimension of k-means, for example, depends solely on the dimension d, but not on n, hence making k-means significantly less expressive than spectral clustering. (For two clusters in R^d, k-means clustering partitions the data using a linear separation. It is well known that the VC dimension of the class of linear classifiers in R^d is d + 1. Hence, k-means can shatter at most d + 1 points in R^d, regardless of the size n of the dataset.) As a consequence of our main theorem, we bound from below the number of weights and neurons in any net that is required to compute Laplacian eigenvectors. The reader might find the analysis in this section interesting in its own right.

Our main result shows that for data in R^d with d ≥ 3, the VC dimension of F_n^{spectral clustering} is linear in the number n of points, making spectral clustering almost as rich as arbitrary clustering of the n points. The formal proof of Theorem 4.1 is deferred to Appendix C. Below we informally sketch its principles. We show that for any integer n (assuming for simplicity that n is divisible by 10), there exists a set of m = n/10 points in R^d that is shattered by F_n^{spectral clustering}. In particular, we show this for a set of m points placed on a 2-dimensional grid in R^d. We then show that for any arbitrary dichotomy of these m points, we can augment the set of points to a larger set X, containing n = 10m points, with a balanced partition of X into two disjoint sets S and T that respects the dichotomy of the original m points. The larger set has the following special properties: within S (resp. T), there is a path between any two points such that the distances between all pairs of consecutive points along the path are small, and all pairs (s, t) ∈ S × T are far apart. We complete the proof by constructing a Gaussian affinity W with a suitable value of σ and showing that the minimizer of the spectral clustering loss for (X, W) (i.e., the second eigenvector of the Laplacian), when thresholded at 0, separates S from T, and hence respects the original dichotomy.

By connecting Theorem 4.1 with known results regarding the VC dimension of neural nets (see, e.g., BID18), we can bound from below the size (in terms of the number of weights and neurons) of any neural net that computes spectral clustering. This is formalized in the following corollary.

Corollary 4.2. 1. For the class of neural nets with |v| sigmoid nodes and |w| weights to represent all functions realizable by spectral clustering (i.e., the second eigenvector of the Laplacian, thresholded at 0) on n points, it is necessary to have |w|^2 |v|^2 ≥ O(n). 2. For the class of neural nets with |w| weights from a finite family (e.g., single-precision weights) to represent all functions realizable by spectral clustering, it is necessary to have |w| ≥ O(n).

Proof. 1. The VC dimension of the class of neural nets with |v| sigmoid units and |w| weights is at most O(|w|^2 |v|^2). Hence, if |w|^2 |v|^2 < O(n), such a net cannot shatter any collection of points of size O(n). From Theorem 4.1, F_n^{spectral clustering} shatters at least O(n) points. Therefore, in order for a class of networks to be able to express any function that can be computed using spectral clustering, it is a necessary (but not sufficient) condition to satisfy |w|^2 |v|^2 ≥ O(n). 2. The VC dimension of the class of neural nets with |w| weights from a finite family is O(|w|).
The arguments above imply that |w| ≥ O(n). Corollary 4.2 implies that in the general case (i.e., without assuming any structure on the n data points), to perform spectral clustering, the size of the net has to grow with n. However, when the data has some geometric structure, the net size can be much smaller. Indeed, in a related result, the ability of neural networks to learn eigenvectors of Laplacian matrices was demonstrated both empirically and theoretically by BID13. They proved that there exist networks which approximate the eigenfunctions of manifold Laplacians arbitrarily well (where the size of the network depends on the desired error and the parameters of the manifold, but not on n).

To numerically evaluate the accuracy of the clustering, we use two common measures: the unsupervised clustering accuracy (ACC) and the normalized mutual information (NMI). For completeness, we define ACC and NMI below and refer the reader to BID3 for more details. For a data point x_i, let l_i and c_i denote its true label and predicted cluster, respectively, and let l = (l_1, ..., l_n) and similarly c = (c_1, ..., c_n). ACC is defined as

ACC(l, c) = max_{π ∈ Π} (1/n) Σ_{i=1}^{n} 1{l_i = π(c_i)},

where Π is the collection of all permutations of {1, ..., k}. The optimal permutation π can be computed using the Kuhn-Munkres algorithm (BID14; BID23). NMI is defined as

NMI(l, c) = I(l; c) / max{H(l), H(c)},

where I(l; c) denotes the mutual information between l and c, and H(·) denotes entropy. Both ACC and NMI lie in [0, 1], with higher values indicating better correspondence between the clusters and the true labels; a sketch of both measures follows.
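Both measures are easy to compute with standard tooling; the sketch below uses scipy's Hungarian solver for the optimal permutation and scikit-learn's NMI (our choice of tooling, not necessarily the authors').

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(labels, clusters, k):
    """ACC: best label-cluster matching via the Kuhn-Munkres algorithm."""
    cost = np.zeros((k, k), dtype=np.int64)
    for l, c in zip(labels, clusters):
        cost[c, l] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # maximize total agreement
    return cost[rows, cols].sum() / len(labels)

def nmi(labels, clusters):
    return normalized_mutual_info_score(labels, clusters)
```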
We compare SpectralNet to several deep-learning-based clustering approaches on two real-world datasets. In all runs we assume the number of clusters is given (k = 10 for MNIST and k = 4 for Reuters). As a reference, we also report the performance of k-means and (standard) spectral clustering. Specifically, we compare SpectralNet to DEC (BID23), DCN (BID24), VaDE (BID27), JULE (BID25), DEPICT, and IMSAT (BID11). The results for these six methods are those reported in the corresponding papers. Technical details regarding the application of k-means and spectral clustering appear in Appendix D. We considered two variants of Gaussian affinity functions, using Euclidean distances and Siamese distances; the latter case follows Algorithm 1. In all experiments we used the unnormalized loss. In addition, we report results of SpectralNet (and the Siamese net) in both input space and code space. The code spaces are obtained using the publicly available autoencoders that are used to pre-train the weights of VaDE, and are 10-dimensional. We refer the reader to Appendix D for technical details about the architectures and training procedures.

MNIST is a collection of 70,000 28 × 28 gray-scale images of handwritten digits, divided into training and test sets. To construct positive pairs for the Siamese net, we paired each instance with its two nearest neighbors. An equal number of negative pairs were chosen randomly from non-neighboring points. Table 1 shows the performance of the various clustering algorithms on the MNIST dataset, using all 70,000 images for training. (Table 1 notes: (**) reported in BID24; (†) reported in BID27; (‡) reported in BID8; (††) reported in BID25; (‡‡) reported in BID11. The IMSAT result on Reuters was obtained on a subset of 10,000 samples from the full dataset.) As can be seen, the performance of SpectralNet is significantly improved when using the Siamese distance instead of the Euclidean distance, and when the data is represented in code space rather than in pixel space. With these two components, SpectralNet outperforms DEC, DCN, VaDE, DEPICT and JULE, and is competitive with IMSAT.

To evaluate how well the outputs of SpectralNet approximate the true eigenvectors of the graph Laplacian, we compute the Grassmann distance between the subspace of SpectralNet outputs and that of the true eigenvectors. The squared Grassmann distance measures the sum of squared sines of the angles between two k-dimensional subspaces; the distance is in [0, k]. FIG2 shows the Grassmann distance on the MNIST dataset as a function of the training time (expressed as the number of parameter updates). It can be seen that the distance decreases rapidly at the beginning of training and stabilizes around 0.026 as training progresses. To check the generalization ability of SpectralNet to new test points, we repeated the experiment, this time training SpectralNet only on the training set, and predicting the labels of the test examples by passing them through the net and associating each test example with the nearest centroid from the k-means that was performed on the embedding of the training examples. The accuracy on test examples was .970, implying that SpectralNet generalizes well to unseen test data in this case. We similarly evaluated the generalization performance of k-means. The accuracy of k-means on the test set is .546 when using the input space and .776 when using the code space, both significantly inferior to SpectralNet.

The Reuters dataset is a collection of English news articles, labeled by category. Like DEC and VaDE, we used the categories corporate/industrial, government/social, markets, and economics as labels and discarded all documents with multiple labels. Each article is represented by a tf-idf vector, using the 2000 most frequent words. The dataset contains n = 685,071 documents. Performing vanilla spectral clustering on a dataset of this size in a standard way is prohibitive. The AE used to map the data to code space was trained on a random subset of 10,000 samples from the full dataset. To construct positive pairs for the Siamese net, we randomly sampled 300,000 examples from the entire dataset and paired each one with a random neighbor from its 3000 nearest neighbors. An equal number of negative pairs was obtained by randomly pairing each point with one of the remaining points. Table 1 shows the performance of the various algorithms on the Reuters dataset. Overall, we see behavior similar to what we observed on MNIST: SpectralNet outperforms all other methods, and performs best in code space with the Siamese affinity. Our SpectralNet implementation took less than 20 minutes to learn the spectral map on this dataset, using a GeForce GTX 1080 GPU. For comparison, computing the top four eigenvectors of the Laplacian matrix of the complete data, needed for spectral clustering, took over 100 minutes using ARPACK. Note that both SpectralNet and spectral clustering require a pre-computed nearest-neighbor graph. Moreover, spectral clustering using the ARPACK eigenvectors failed to produce reasonable clustering. This illustrates the robustness of our method, in contrast to the well-known instability of spectral clustering to outliers. To evaluate the generalization ability of SpectralNet, we divided the data randomly into a 90%-10% split, re-trained the Siamese net and SpectralNet on the larger subset, and predicted the labels of the smaller subset. The test accuracy was 0.798, implying that, as on MNIST, SpectralNet generalizes well to new examples.
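The prediction rule used in both generalization experiments reduces to the following sketch (names assumed): embed the unseen points with the trained map and assign each to the nearest of the training k-means centroids.

```python
import torch

def predict(model, x_test, centroids):
    with torch.no_grad():
        y = model(x_test)                # spectral embedding of unseen points
    return torch.cdist(y, centroids).argmin(dim=1)   # nearest-centroid label
```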
We have introduced SpectralNet, a deep learning approach for approximate spectral clustering. The stochastic training of SpectralNet allows us to scale to larger datasets than vanilla spectral clustering can handle, and the parametric map obtained from the net enables straightforward out-of-sample extension. In addition, we propose to use unsupervised Siamese networks to compute distances, and empirically show that this results in better performance compared to standard Euclidean distances. Further improvements are achieved by applying our network to code representations produced with a standard stacked autoencoder. We present a novel analysis of the VC dimension of spectral clustering and derive a lower bound on the size of neural nets that compute it. In addition, we report state-of-the-art results on two benchmark datasets, and show that SpectralNet outperforms existing methods when the clusters cannot be contained in non-overlapping convex shapes. We believe the integration of spectral clustering with deep learning provides a useful tool for unsupervised deep learning.

To compare SpectralNet to spectral clustering, we consider a simple dataset of 1500 points in two dimensions, containing two nested 'C'-shaped clusters. We applied spectral clustering to the dataset by computing the eigenvectors of the unnormalized graph Laplacian L = D − W corresponding to the two smallest eigenvalues, and then applying k-means (with k = 2) to these eigenvector embeddings. The affinity matrix W was computed using the Gaussian nearest-neighbor affinity above, where the scale σ was set to the median distance between a point and its 3rd neighbor, a standard practice in diffusion applications. FIG3 shows the clustering of the data obtained by SpectralNet, standard spectral clustering, and k-means. It can be seen that both SpectralNet and spectral clustering identify the correct cluster structure, while k-means fails to do so. Moreover, despite the stochastic training, the net outputs closely approximate the two true eigenvectors of the graph Laplacian with smallest eigenvalues. Indeed, the Grassmann distance between the net outputs and the true eigenvectors approaches zero as the loss decreases.

In the next experiment, we trained DCN, VaDE, DEPICT (using agglomerative clustering initialization) and IMSAT (using adversarial perturbations for data augmentation) on the 2D datasets of FIG0. The experiments were performed using the code published by the authors of each paper. For each method, we tested various network architectures and hyper-parameter settings. Unfortunately, we were unable to find a setting that would yield an appropriate clustering on any of the datasets for DCN, VaDE and DEPICT. IMSAT worked on two out of the five datasets, but failed to yield an appropriate clustering in fairly simple cases. Plots with typical results of each of the methods on each of the five 2D datasets are shown in FIG4. To further investigate why these methods fail, we performed a sequence of experiments with the two nested 'C's data, while changing the distance between the two clusters. The results are shown in FIG5. We can see that all three methods fail to cluster the points correctly once the clusters cannot be linearly separated. Interestingly, although the target distribution of DEPICT was initialized with agglomerative clustering, which successfully clusters the nested 'C's, its target distribution becomes corrupted throughout the training, although its loss is considerably reduced; see FIG6.

To prove Theorem 4.1, we begin with the following definition and lemmas.
Definition C.1. A graph G = (V, W) is (α, β)-separated if V has an even number of vertices and a balanced partition V = S ∪ T, |S| = |T|, and W is an affinity matrix such that:
• for any v_i, v_j ∈ S (resp. T), there is a path v_i = v_{k_1}, v_{k_2}, ..., v_{k_l} = v_j in S, so that for every two consecutive points v_{k_l}, v_{k_{l+1}} along the path, W_{k_l, k_{l+1}} ≥ α;
• for any v_i ∈ S, v_j ∈ T, W_{i,j} ≤ β.

Lemma C.2. For any integer m > 0 there exists a set X̃ = {x̃_1, ..., x̃_m} ⊂ R^d, so that for any binary partition X̃ = S̃ ∪ T̃, we can construct a set X of n = 10m points, X̃ ⊂ X, and a balanced binary partition X = S ∪ T, |S| = |T|, of it, such that S̃ ⊆ S, T̃ ⊆ T, and:
• for any x_i, x_j ∈ S (resp. T), there is a path x_i, x_{k_1}, x_{k_2}, ..., x_{k_l}, x_j in S, so that for every two consecutive points x_{k_l}, x_{k_{l+1}} along the path, ||x_{k_l} − x_{k_{l+1}}|| < 1 (property a);
• for any x_i ∈ S, x_j ∈ T, ||x_i − x_j|| ≥ 1 (property b).

Proof. We will prove this for the case d = 3; the proof holds for any d ≥ 3. Let m > 0 be an integer. We choose the set X̃ to lie on a 2-dimensional unit grid inside a square of minimal diameter, placed in the Z = 0 plane. Each point x̃_i is at distance 1 from its neighbors. Next, given a partition of x̃_1, ..., x̃_m into two subsets, S̃ and T̃, we will construct a set X ⊃ X̃ with n = 10m points and a partition S ∪ T that satisfy the conditions of the lemma (an illustration can be seen in Figure 7). First, we add points to obtain a balanced partition. We do so by adding m new points x̃_{m+1}, ..., x̃_{2m}, assigning each of them arbitrarily to either S̃ or T̃ until |S̃| = |T̃| = m. We place all these points also on grid points in the Z = 0 plane so that all 2m points lie inside a square of minimal diameter.

(Figure 7: illustration of the construction of Lemma C.2. We select the set X̃ to lie on a grid in the Z = 0 plane. Given an arbitrary dichotomy X̃ = S̃ ∪ T̃ (points are marked with filled circles, colored red and blue respectively), we first add points to make the sets balanced (not shown). Next, we make a copy of the S̃-points at Z = 1 and of the T̃-points at Z = −1 (filled squares). We then add midpoints between each point and its copy (empty circles), and finally add more points along the minimal-length spanning trees (empty squares). Together, all the red points form the set S, the blue points form the set T, and X = S ∪ T.)

In the next step, we prepare a copy of the S̃-points at Z = 1 (with the same X, Y coordinates) and a copy of the T̃-points at Z = −1. We denote these copies by x'_1, ..., x'_{2m} and refer to the lifted points at Z = 1 as S' and at Z = −1 as T'. Next, we add 6m more points to make the full set of n = 10m points satisfy properties a and b. First, we add the midpoint between every point and its copy, i.e., x''_i = (x̃_i + x'_i)/2. We assign each such midpoint to S (resp. T) if it is placed between x̃_i ∈ S̃ and x'_i ∈ S' (resp. T̃ and T'). Then we connect the points in S' (resp. T') by a minimal-length spanning tree and add 4m more points along the edges of these two spanning trees, so that the added points are equally spaced along every edge. We assign the new points on the spanning tree of S' to S and of T' to T.

We argue that the obtained point set X of size 10m satisfies the conditions of the lemma. Clearly, S̃ ⊂ S and T̃ ⊂ T. To show that property a is satisfied, note that the length of each spanning tree cannot exceed 2m, since the full 2m grid points X̃ can be connected with a tree of length 2m − 1. It is evident therefore that every two points x_i, x_j ∈ S (resp.
T) are connected by a path in which the distance between each two consecutive points is strictly less than 1 (property a). Property b too is satisfied, because the grid points in X̃ are at least distance 1 apart; each midpoint x''_i is at distance 1/2 from x̃_i and x'_i (and they all belong to the same set, either S or T), but its distance to the rest of the points in X exceeds 1, and the rest of the points in S (resp. T) are on the Z = 1 (resp. Z = −1) plane, and so they are at least distance 1 away from members of the opposite set, which all lie in the Z ≤ 0 (resp. Z ≥ 0) half-space.

Lemma C.3. Let f(·) be the spectral clustering loss f(y) = Σ_{i,j} W_{ij} (y_i − y_j)^2. Let G = (X, W) be an (α, β)-separated graph, such that |X| = n ≥ 4. Let y* be a minimizer of f(y) w.r.t. W, subject to 1^T y = 0, ||y|| = 1. Let Δ_S = max{|y*_i − y*_j| : x_i, x_j ∈ S}, similarly Δ_T = max{|y*_i − y*_j| : x_i, x_j ∈ T}, and let Δ = max{Δ_S, Δ_T}.

For k-means we used Python's sklearn.cluster; we used the default configuration (in particular, 300 iterations of the algorithm and 10 restarts from different centroid seeds; the final results are from the run with the best objective). To perform spectral clustering, we computed an affinity matrix W using the Gaussian nearest-neighbor affinity above, with the number of neighbors set to 25 and the scale σ set to the median distance from each point to its 25th neighbor. Once W was computed, we took the k eigenvectors of D − W corresponding to the smallest eigenvalues, and then applied k-means to that embedding. The k-means configuration was as above. In our experiments, the loss was computed with a factor of 1/m rather than 1/m^2, for numerical stability. The architectures of the Siamese net and SpectralNet are described in TAB2; additional technical details are shown in TAB3.

The learning rate policy for all nets was determined by monitoring the loss on a validation set (a random subset of the training set); once the validation loss did not improve for a specified number of epochs (see patience epochs in TAB3), we divided the learning rate by 10 (see LR decay in TAB3). Training stopped once the learning rate reached 10^{-8}. Typical training took about 100 epochs for a Siamese net and fewer than 20,000 parameter updates for SpectralNet, on both MNIST and Reuters. In the MNIST experiments, the training set for the Siamese net was obtained by pairing each data point with its two nearest neighbors (in Euclidean distance). During the training of the spectral map, we construct the batch affinity matrix W by connecting each point to its two nearest neighbors in the Siamese distance. The scale σ was set to the median of the distances from each point to its nearest neighbor. In the Reuters experiment, we obtained the training set for the Siamese net by pairing each point with a random point from its 100 nearest neighbors, found by an approximate nearest-neighbor algorithm. To evaluate the generalization performance, the Siamese nets were trained using training data only. The scale σ was set globally to the median (across all points in the dataset) distance from any point to its 10th neighbor. Finally, we used the validation loss to determine the hyper-parameters. To demonstrate that the validation loss is indeed correlated with clustering accuracy, we conducted a series of experiments with the MNIST dataset, where we varied the net architectures and learning rate policies; the Siamese net and the Gaussian scale parameter σ were held fixed throughout all experiments.
In each experiment, we measured the loss on a validation set and the clustering accuracy (over the entire data). The correlation between loss and accuracy across these experiments was -0.771. This implies that hyper-parameter settings for the spectral map learning can be chosen based on the validation loss, and a setup that yields a smaller validation loss should be preferred. We remark that we also use the convergence of the validation loss to determine our learning rate schedule and stopping criterion.
Unsupervised spectral clustering using deep neural networks
1,468
scitldr
Meta-learning methods, most notably Model-Agnostic Meta-Learning (MAML), have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks. The mechanism behind their success, however, is poorly understood. We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task. Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks. Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model. We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.

Meta-learning, or learning to learn, is an appealing notion due to its potential in addressing important challenges when applying machine learning to real-world problems. In particular, learning from prior tasks while being able to adapt quickly to new tasks improves learning efficiency, model robustness, etc. A promising set of techniques, Model-Agnostic Meta-Learning (MAML) and its variants, has received a lot of interest. However, despite several efforts, understanding of how MAML works, either theoretically or in practice, has been lacking. For a model that meta-learns, its parameters need to encode not only the common knowledge extracted from the tasks it has seen, which forms a task-general inductive bias, but also the capability to adapt to new test tasks (similar to those it has seen) with task-specific knowledge. This begs the question: how are these two sets of capabilities represented in a single model, and how do they work together?

In the case of deep learning models, one natural hypothesis is that while knowledge is represented distributedly in parameters, it can be localized: for instance, lower layers encode a task-general inductive bias and the higher layers encode an adaptable task-specific inductive bias. This hypothesis is consistent with one of deep learning's advantages in learning representations (or feature extractors) using its bottom layers. Then we must ask: in order for a deep learning model to meta-learn, does it need more depth than it needs for solving the target tasks? In other words, is having a large capacity to encode knowledge that is unnecessary post-adaptation the price one has to pay in order to be adaptable? Is there a way to have a smaller (say, less deep) meta-learnable model which still adapts well? This question is of both scientific interest and practical importance: a smaller model has a smaller memory footprint, faster inference, and consumes fewer resources. In this work, through empirical studies on both synthetic datasets and benchmarks used in the literature, we investigate these questions by analyzing how well different learning models can meta-learn and adapt. We choose to focus on MAML due to its popularity. Our observations suggest that depth is indeed necessary for meta-learning, despite the tasks being solvable using a shallower model.
Thus, applying MAML to shallower models does not result in successful meta-learning models that can adapt well. Moreover, our studies also show that the higher layers are more responsible for adapting to new tasks, while the lower layers are responsible for learning task-general features. Our findings prompt us to propose a new method for meta-learning. The new approach introduces a meta-optimizer which learns to guide the (parameter) optimization process of a small model. The small model is used for solving the tasks, while the optimizer bears the burden of extracting the knowledge of how to adapt. Empirical results show that, despite using smaller models, the proposed algorithm attains performance similar to that of larger models which use MAML to meta-learn and adapt. We note that a recent and concurrent work to ours addresses questions in this line of inquiry. They reach similar conclusions through different analyses and, likewise, propose a different approach for improving MAML. We believe our work is complementary to theirs.

Meta-learning, or learning-to-learn, is a vibrant research area with a long and rich history, lying at the intersection of psychology, neuroscience, and computer science. Of particular interest to this manuscript is the line of work concerned with optimization-based meta-learning (OBML) algorithms in the few-shot regime, of which MAML is a particular instance. Since its inception, MAML has been widely applied to tackle the few-shot learning challenge, in domains such as computer vision, natural language processing, and robotics. It is also the basis of extensions for continual learning, single- and multi-agent reinforcement learning, objective learning, transfer learning, and domain adaptation. Due to its generality, the adaptation procedure MAML introduces, which is the focus of our analysis, has recently been branded as Generalized Inner Loop Meta-Learning. While popular in practical applications, relatively few works have analysed the convergence and modelling properties of those algorithms. Prior work showed that, when combined with deep architectures, OBML is able to approximate arbitrary meta-learning schemes, and recent analyses provided convergence guarantees for MAML to approximate first-order stationary points of non-convex loss surfaces, under some assumptions on the availability and distribution of the data. Other analyses (empirical or theoretical) have attempted to explain the generalization ability of OBML, the bias induced by restricting the number of adaptation steps, or the effect of higher-order terms in the meta-gradient estimation.

Closely related to our proposed methods are works attempting to improve the adaptation mechanisms of OBML. Meta-SGD meta-learns per-parameter learning rates, while Alpha MAML adapts those learning rates during adaptation via gradient descent. Meta-Curvature proposes to learn a Kronecker-factored pre-conditioning matrix to compute fast-adaptation updates; the resulting algorithm is a special case of one of our methods, where the linear transformation is not updated during adaptation. Another way of constructing preconditioning matrices is to explicitly decompose all weight matrices of the model into two separate components, as done in T-Nets; the first component is only updated via the evaluation loss, while the second is also updated during fast-adaptation. Warped Gradient Descent further extends T-Nets by allowing both components to be non-linear functions. Instead of directly working with gradients, other work suggests directly learning a loss function which is differentiated during fast-adaptation and results in faster and better learning. Additionally, meta-optimizers have also been used for meta-descent; they can be learned during a pre-training phase or online. Our work differentiates itself from the above by diagnosing and attempting to address the entanglement between modelling and adaptation in the meta-learning regime. We uncover a failure mode of MAML with linear and smaller models, and propose an effective solution in the form of expressive meta-optimizers.

In MAML and its many variants, we have a model whose parameters are denoted by θ. We would like to optimize θ such that the resulting model can adapt to new and unseen tasks fast. To this end, we are given a set of (meta-)training tasks, indexed by τ. With each such task, we associate a loss L_τ(θ). Distinctively, MAML minimizes the expected task loss after an adaptation phase consisting of a few steps of gradient descent from the model's current parameters. Since we do not have access to the target tasks to which we wish to adapt, we use the expected loss over the training tasks:

L_META(θ) = E_τ [ L_τ(θ − α ∇_θ L_τ(θ)) ],

where the expectation is taken with respect to the distribution of the training tasks, and α is the learning rate for the adaptation phase. The right-hand side uses only one step of gradient descent, so that the aim is to adapt fast: in one step, we would like to reduce the loss as much as possible. In practice, a few more steps are often used.
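In code, this one-step meta-loss amounts to the following PyTorch sketch; loss_fn is assumed to be a functional loss taking a list of parameter tensors, and create_graph=True keeps the second-order terms needed for the meta-gradient.

```python
import torch

def maml_meta_loss(params, tasks, loss_fn, alpha=0.01):
    """params: list of tensors with requires_grad=True.
    tasks: list of (support, query) batches, one pair per task."""
    meta_loss = 0.0
    for support, query in tasks:
        # Inner (adaptation) step: one gradient-descent update on the task.
        grads = torch.autograd.grad(loss_fn(params, support), params,
                                    create_graph=True)
        adapted = [p - alpha * g for p, g in zip(params, grads)]
        # Outer objective: task loss at the adapted parameters.
        meta_loss = meta_loss + loss_fn(adapted, query)
    return meta_loss / len(tasks)
```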
Instead of directly working with gradients, others suggest directly learning a loss function which is differentiated during fast-adaptation and results in faster and better learning. Additionally, meta-optimizers have also been used for meta-descent. They can be learned during a pre-training phase or online. Our work differs from the above by diagnosing and attempting to address the entanglement between modelling and adaptation in the meta-learning regime. We uncover a failure mode of MAML with linear and smaller models, and propose an effective solution in the form of expressive meta-optimizers.

In MAML and its many variants, we have a model whose parameters are denoted by θ. We would like to optimize θ such that the resulting model can adapt to new and unseen tasks fast. To this end, we are given a set of (meta)training tasks, indexed by τ. With each such task, we associate a loss L_τ(θ). Distinctively, MAML minimizes the expected task loss after an adaptation phase, consisting of a few steps of gradient descent from the model's current parameters. Since we do not have access to the target tasks to which we wish to adapt, we use the expected loss over the training tasks: L_META(θ) = E_τ[L_τ(θ − α∇_θ L_τ(θ))], where the expectation is taken with respect to the distribution of the training tasks and α is the learning rate for the adaptation phase. The right-hand side uses only one step of gradient descent, such that the aim is to adapt fast: in one step, we would like to reduce the loss as much as possible. In practice, a few more steps are often used.

Shallow models can be hard to meta-learn. Many intuitive explanations for why MAML works exist. One appealing suggestion is that the minimizer of the meta-learning loss L_META is chosen in such a way that it provides a very good initialization for the adaptation phase; however, if the model is shallow such that L_τ is convex in its parameters, then any initialization that is good for adapting fast to one subset of tasks could be bad for another subset, since each task has precisely one global minimizer, and those minimizers can be arbitrarily far from each other. When the test tasks are distributed similarly to the training tasks, the ideal initialization point has to be the "mean" of the minimizers of the training tasks - the precise definition of the mean is not important, as we will see below. We illustrate a surprising challenge by studying MAML on a synthetic dataset and the Omniglot task. Specifically, for the former study, we construct a set of binary classification tasks by first randomly sampling datapoints w ∈ R^100 from a standard Gaussian and using each of them to define the linear decision boundary of a binary classification task. We assume the boundaries pass through the origin, and we sample training, validation, and testing samples by randomly sampling data points from both sides of the decision boundaries. By construction, a linear model such as logistic regression is sufficient to achieve very high accuracy on any task. But can MAML learn a logistic regression model from a subset of training tasks that is able to adapt quickly to the test tasks? Note that due to the random sampling of the training tasks, the average of the minimizers (i.e., the samples from the Gaussian distribution) is the origin. Likewise, for a set of test tasks randomly sampled the same way, the origin provides the best initialization by not favoring any particular task.
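As a concrete illustration, the one-step meta-loss above can be evaluated for logistic regression on these synthetic tasks in a few lines of Python. This is a minimal sketch of our own, not the authors' code: the helper names are invented, labels are assumed to be in {0, 1} (the {−1, +1} labels map via (y + 1)/2), and the outer gradient would here be treated first-order, ignoring second-order terms for brevity.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_and_grad(theta, X, y):
        # Binary cross-entropy loss and its gradient for logistic regression.
        p = sigmoid(X @ theta)
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        grad = X.T @ (p - y) / len(y)
        return loss, grad

    def one_step_maml_loss(theta, support, query, alpha=0.5):
        # L_tau(theta - alpha * grad L_tau(theta)): task loss after one adaptation step.
        _, g = loss_and_grad(theta, *support)
        theta_adapted = theta - alpha * g
        loss, _ = loss_and_grad(theta_adapted, *query)
        return loss

Meta-training would average this quantity over sampled tasks and descend on θ; the learning rate here is illustrative.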
Figure 1 reports the 1-step post-adaptation accuracy on the test tasks for the meta-learned logistic regression model. Surprisingly, the model fails to perform better than chance. Despite the simplicity of the tasks, logistic regression models are unable to find the origin as an initialization that adapts quickly to a set of test tasks that are similar to the training tasks. The models to be meta-learned are shallow ones (logistic regression models for binary classification and multinomial logistic regression) and deep ones - the shallow models with linear networks (LinNet) added to them. While the shallow models do not adapt and achieve chance-level accuracies on average, the overparameterized versions attain significantly better results after adaptation. Note that while the model has the same representational capacity as a linear logistic regression, it is overparameterized and has many local optima. As such, MAML can train this model such that its 1-step adaptation accuracy reaches 92% on average. We observe the same phenomenon when meta-learning with MAML on the Omniglot dataset (details of the dataset are given in Section 5). The shallow logistic regression model achieves 45% accuracy on average (for the 5-way classification tasks) after 2-step adaptation from the meta-learned initialization. However, with a linear network, the adapted model achieves significantly higher accuracy - 70% on average - while having the same modeling capacity as the logistic regression. In summary, these experiments suggest that even for tasks that are solvable with shallow models, a model needs to have enough depth in order to be meta-learnable and to adapt. We postpone the description of our experiments on nonlinear models to Section 5, where we also show that having sufficient depth is crucial for models to be meta-learnable, even when the tasks require fewer layers. A natural question arises: if being deep is so important for meta-learning on even very simple tasks, what different roles, if any, do different layers of a deep network play?

Depth enables task-general feature learning and fast adaptation. We hypothesize that for deep models meta-learned with MAML, lower layers learn task-invariant features while higher layers are responsible for fast-adaptation. To examine this claim, we meta-train a model consisting of four convolutional layers (C1-C4) and a final fully-connected layer (FC) on Omniglot and CIFAR-FS. (Experimental setups are detailed in Section 5.) Once the model has finished meta-training, we perform a layer-wise ablation to study each layer's effect on adaptation. In particular, we iterate over each layer and perform two sets of experiments. In the first, we freeze the weights of the layer such that it does not get updated during fast-adaptation - we call it freezing only this. In the second experiment, we freeze all layers but the selected one, such that only this layer is updated during fast-adaptation - we call it adapting only this. The left two plots in Figure 2 report the average accuracy over 100 testing tasks from both datasets. We observe that freezing only the first few lower layers (C1-C3) does not cause noticeable degradation to the post-adaptation accuracy. In fact, as long as the last convolutional layer (C4) is not frozen, post-adaptation accuracy remains unaffected. This indicates that C1-C3 provide information that is task-invariant, while C4 is crucial for adaptation.
This does not mean FC is not important - since adapting C4 requires gradients passing through the FC layer, FC cannot be arbitrary. In fact, in the rightmost plot of the figure, C1-C3 are held fixed during adaptation, while C4 and FC are allowed to adapt, and FC is perturbed with noise. When the noise is strong, the performance degrades significantly. Thus, we conclude that both C4 and FC play important roles in the mechanism for fast adaptation. We note that a recent concurrent work reached similar conclusions on the mini-ImageNet dataset, using feature-similarity-based analyses of the model's representations. These observations highlight a fundamental issue: the property of being meta-learnable entails more model capacity than being learnable for a specific task. Thus, MAML can fail on models that lack the capacity to encode both task-general features and adaptation information, even when the models themselves are powerful enough to perform well on each of the tasks used for the meta-learning procedure. For example, with linear models (e.g., logistic regression), the parameters are forced to overlap and serve both modelling and adaptation purposes. However, as soon as the models are overparameterized, the extra layers enable meta-learnability. In Section 5, we show that this observation also applies to nonlinear models, where MAML-trained models quickly lose their performance when the number of layers is reduced.

The previous section has shown that MAML, when applied to deep learning models, meta-learns both task-general features and task-specific adaptation parameters at the same time. Since both are represented in a single model where strong dependencies exist between the lower layers and the higher layers, the model needs to be large enough, and all its parameters have to be used to solve the test tasks. In some cases, the model has a modelling capacity that is bigger than what the test tasks require. For a model where linear network layers exist, the layers can be collapsed after adaptation so that the actual model used for inference on the test tasks is small. However, it is not clear how to do so for typical deep models, where nonlinearities prevent collapsing the model into smaller ones. Can we have a different adaptation mechanism such that a smaller model, whose modelling capacity is still adequate for the test tasks, can be adapted to find the minimizers of its loss on the aforementioned tasks? In this section, we describe a new approach of learning meta-optimizers for meta-learning. The meta-optimizer aims to separate modelling from adaptation. The meta-optimizer transforms the parameter update process for the model so the model can converge fast to its minimizer. How to transform is, however, learned from meta-training tasks. Because of this separation, the model for the task does not have to know how to adapt; it only needs to have enough capacity (roughly, big enough to represent the target task). We note that classical tools could be used to compress a large model for a given task. However, in the scenario where meta-learning is often used, such tools are unlikely to be effective. For example, distillation requires a lot of labeled data from the target task, which is not suitable for few-shot learning tasks. Pruning often degrades performance.

A meta-optimizer - or learnable optimizer - is a parameterized function U_ξ defining the model's parameter updates. For example, a linear meta-optimizer might be defined as U_ξ(g) = Ag + b, where g is the vectorized gradient of the task loss and ξ = (A, b) is the set of parameters of the linear transformation.
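A minimal sketch of this linear meta-optimizer follows (our own illustrative code, not the paper's implementation); initializing A to the identity and b to zero recovers plain gradient descent, which matches the initialization scheme mentioned later.

    import numpy as np

    class LinearMetaOptimizer:
        # U_xi(g) = A @ g + b, applied to the vectorized gradient g.
        def __init__(self, k):
            self.A = np.eye(k)       # identity init recovers vanilla gradient descent
            self.b = np.zeros(k)

        def update(self, theta, grad):
            # Model update: theta <- theta - U_xi(grad)
            return theta - (self.A @ grad + self.b)

Note that storing A costs O(k^2) memory, which is precisely what motivates the factorizations discussed next.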
The objective is to jointly learn the model's and the optimizer's parameters in the hope of accelerating the minimization of L_τ. Concretely, we alternate between model and optimizer updates, such that their values at adaptation step t + 1 are given by ξ_{t+1} = ξ_t − α∇_{ξ_t} L_τ(θ_t) and θ_{t+1} = θ_t − U_{ξ_{t+1}}(∇_{θ_t} L_τ(θ_t)). While this change to the classical optimization pipeline might seem innocuous at first, it bears its own set of computational challenges. First, it is common for modern machine learning models to have millions of parameters. In this case, simply storing the optimizer's parameters becomes infeasible even for the simplest of models, such as the linear one outlined above. Because of this dimensionality issue, most of the current literature considers a per-parameter independent update function, thus greatly limiting the expressivity of the meta-optimizer. We propose to address the issue of parameter dimensionality via factorizations of the optimizer's parameters, which we detail in the following subsection.

To tackle the issue of parameter dimensionality, we propose to factorize the optimizer's parameters. To demonstrate this, let us revisit the linear optimizer example above: we assume g ∈ R^k to be the vectorization of a matrix gradient G ∈ R^{n×m}, such that vec(G) = g and m · n = k. By expressing A = R ⊗ L as the Kronecker product of small matrices R ∈ R^{m×m} and L ∈ R^{n×n}, the linear update above can be efficiently computed as U_ξ(g) = vec(L G R^⊤) + b, where b ∈ R^k and ξ = (L, R, b). In the best-case scenario where m = n = √k, the above factorization requires O(k) memory as opposed to O(k^2) for the non-factored case. Similarly, the matrix product takes O(k√k) time, while the non-factored version takes O(k^2). Note that this linear transformation can be used as a building block to implement more expressive and non-linear meta-optimizers. For example, a fully-connected network meta-optimizer U_ξ is the composition of linear transformations U_{W_i} interleaved with an activation function σ. If the weight matrices (W_1, ..., W_h) of the network admit a Kronecker factorization W_i = R_i ⊗ L_i, the fully-connected meta-optimizer is implemented as U_ξ = U_{W_h} ∘ σ ∘ ··· ∘ σ ∘ U_{W_1}, where ξ = (W_1, ..., W_h). Such a scheme can be used to implement arbitrary meta-optimizers - such as convolutional or recurrent ones - so long as the architecture involves a composition of linear maps. We refer the reader to Appendix A.3 for pseudo-code and schematics of the model-optimizer loop. We emphasize that the choice of a Kronecker factorization is arbitrary; many matrix decompositions work equivalently well and result in different modeling and computational trade-offs. For example, using a low-rank Cholesky factorization A = LL^⊤ where L ∈ R^{k×r} allows one to interpolate between computational complexity and decomposition rank by tuning the additional hyper-parameter r. The Cholesky decomposition might be preferable to the Kronecker one in memory-constrained applications, since r can be used to control the memory requirements of the meta-optimizer. Moreover, such a decomposition imposes symmetry and positiveness on A, which might be desirable when approximating the Hessian or its inverse. In this work, we preferred the Kronecker decomposition over alternatives for three reasons: the computational and memory cost of the Kronecker product are acceptable, R ⊗ L is full-rank whenever L and R are full-rank, and the identity matrix lies in the span of Kronecker-factored matrices. In particular, this last property allows meta-optimizers to recover the gradient descent update by letting R and L be the identity.
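The memory saving hinges on the standard identity (R ⊗ L) vec(G) = vec(L G R^⊤) for column-major vectorization. The NumPy snippet below (our own illustration, with arbitrary sizes) verifies it and shows that the factored form never materializes the k × k matrix:

    import numpy as np

    n, m = 64, 32                  # gradient G is n x m, so k = n * m = 2048
    L = np.random.randn(n, n)
    R = np.random.randn(m, m)
    G = np.random.randn(n, m)
    g = G.flatten(order="F")       # column-major vectorization vec(G)

    dense = np.kron(R, L) @ g                        # O(k^2) memory
    factored = (L @ G @ R.T).flatten(order="F")      # O(k) memory, O(k * sqrt(k)) time
    assert np.allclose(dense, factored)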
In our experiments, we complement our empirical studies and analysis from Section 3 with further results and address the following question: can we meta-learn smaller (or shallower) models that perform as well as larger (or deeper) ones? To this end, we apply the proposed approach of learning meta-optimizers to the example synthetic dataset, as well as to popular benchmark datasets: Omniglot, mini-ImageNet, and CIFAR-FS. All hyper-parameters and additional experimental details are available in Appendix A.1. Unless otherwise noted, all models are trained using MAML. For models that use learnable deep optimizers, we meta-learn both model and optimizer parameters. Similarly, both sets of parameters are adapted during fast-adaptation. To differentiate, we add the name of the optimizer to the model. For example, LR W/ KLIN corresponds to a logistic regression model with the Kronecker-factorized linear transformation from the previous section. Similarly, W/ KFC indicates a model optimized by the Kronecker-factored fully-connected network from the previous section. We adopt the same setting as in Section 3, where we study the synthetic data and the Omniglot dataset. For the Omniglot dataset, we consider the 1-shot, 5-way setup and let the model adapt for 2 steps. Recall that the logistic regression model does not meta-learn, while overparameterized logistic regression models (i.e., models with linear networks) do. Table 1 reports the final post-adaptation testing accuracies and their standard deviations. As can be observed, while logistic regression (LR) cannot be meta-learned, overparameterized models (LINNET) as well as LR with our meta-optimizers (LR W/ KLIN and LR W/ KFC) can all meta-learn, and our approaches lead to similar or better results compared to LinNet. Note that since linear networks can always be collapsed into a single weight matrix and bias term, any increase in performance is the consequence of better adaptation, rather than better modelling. We examine whether our meta-optimizers can meta-learn successfully with smaller but deep nonlinear models to match bigger deep models. We focus on two datasets: Omniglot, where the setting is 10-way classification with 5 shots and 4 adaptation steps, using the original 4-layer convolutional network (CNN), and the CIFAR-FS dataset, doing 10-way classification with 3 shots and 2 adaptation steps. The model is a CNN similar to the one for Omniglot, but it only uses 32 hidden units as opposed to 64. We report numerical results in detail in Table A2 in the Appendix; these results are presented graphically in Figure 3. We vary the number of convolutional layers. As the number of layers decreases, the adaptation performance of both MAML and our approach (KFC) decreases. However, our approach has a much slower degradation rate. In fact, our approach is generally able to adapt a smaller model to the same level of performance as a MAML-trained model with an additional layer. We also compare our approach of learning meta-optimizers to other approaches, notably Meta-SGD and Meta-Curvature. Both of these methods attempt to approximate the expected task-loss landscape curvature and can be seen as ablations of our methods; Meta-Curvature corresponds to KLin without adapting the optimizer, while Meta-SGD approximates Meta-Curvature with a diagonal matrix. To ease the computational burden, we use a smaller CNN with 2 convolutional layers (SCNN) for adaptation.
As an upper bound on adaptation performance, we train the model on individual tasks to convergence and report the average final accuracy. Additionally, we include results on the mini-ImageNet dataset for all methods, in the 10-way, 1-shot setting with 5 adaptation steps. Table 2 reports post-adaptation testing accuracies. All methods of learning optimizers are able to adapt better than SCNN w/ MAML. In particular, our approaches perform the best and come closest to the upper bound of performance (SCNN).

We introduce our approach by analyzing the success and failure modes of optimization-based meta-learning methods. Namely, we find that, when successful, these methods tend to learn task-general features in early layers and adaptable parameters/update functions in the later layers. Moreover, we find that this learning fails when model size is reduced, indicating that optimization-based meta-learning methods rely on the ability to encode task-general features and/or adaptable parameters, even when the model itself is adequate for learning on the individual tasks. As such, we introduce our method for decomposing modelling from adaptation using factored meta-optimizers. These meta-optimizers enable the forward model to use more capacity on learning task-specific features, while the expressiveness of their updates allows the forward model to adapt quickly to different tasks. We find that our approach is able to enable successful meta-learning in models that do not work with traditional optimization-based meta-learning methods.

The following paragraphs describe the data-generation process, model architectures, and learning hyper-parameters for the experiments in Section 5. Meta-optimizers' parameter matrices are always initialized to the identity.

The task decision boundaries w ∼ N(0, I_100) are sampled from a standard Gaussian. Each one of the 1,000 data points x ∼ N(0, I_100) is also sampled from the standard Gaussian, and is assigned a label from {−1, 1} based on the sign of w^⊤x. All models are trained for 30,000 iterations of stochastic gradient descent (without momentum) so as to minimize the 1-step MAML loss L_MAML. The task loss L(θ) is the binary cross-entropy. Each stochastic gradient is estimated on a single task (i.e., the meta-batch size is 1). For the logistic regression (LR), we used meta- and adaptation learning rates of 0.01 and 0.5. The linear network (LR+LinNet) consists of 3 hidden layers of 64 units, and is trained with both meta- and fast-adaptation learning rates set to 1.9. Experiments using the Kronecker-factored linear meta-optimizer also use the same meta- and fast-adaptation learning rate, set to 3.33. The meta-optimizer itself consists of 10,001 parameters (m = 100, n = 1). The Kronecker-factored fully-connected meta-optimizer uses a single hidden layer with rectified linear unit (ReLU) activation functions, for a total of 20,002 parameters (m = 100, n = 1). It has both learning rates set to 3.9. None of the meta-optimizers use a bias term.

Omniglot. Our Omniglot experiments exactly replicate the standard setup. Of the 1623 classes, we designate 1,200 for meta-training, 100 for validation, and 423 for testing. We then generate 20,000 training tasks and 600 validation/testing tasks, each consisting of five classes containing a single character image, possibly rotated by 90, 180, or 270 degrees. All models are trained using Adam for 60,000 iterations, an iteration consisting of 32 tasks for which the model is allowed 2 steps of fast-adaptation.
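Returning to the synthetic setting for a moment, the task generation described above is simple enough to sketch directly; the Python snippet below is our own rendering, with the sizes taken from the text.

    import numpy as np

    def sample_task(dim=100, n_points=1000, rng=np.random):
        # Decision boundary through the origin, defined by w ~ N(0, I_dim).
        w = rng.randn(dim)
        X = rng.randn(n_points, dim)    # data points x ~ N(0, I_dim)
        y = np.sign(X @ w)              # labels in {-1, +1} from the sign of w^T x
        return w, X, y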
For Omniglot, the logistic regression model uses a meta-learning rate of 0.0005 and an adaptation learning rate set to 1.0. The linear network uses 4 hidden layers of 256, 128, 64, and 64 units, and flattens the character images into a 784-dimensional vector before processing them. The meta-learning rate was set to 0.0005 and the adaptation learning rate to 0.08. For the KLin experiment (LR w/ KLin), we use a meta-learning rate of 0.003 and set the adaptation learning rate to 0.9. The KLin optimizer consists of 614k parameters, with m = 784, n = 5. The KFC meta-optimizer (LR w/ KFC) consists of 4 hidden layers (2.5M parameters) with ReLU activations, whose meta and adaptation learning rates are set to 0.001 and 0.01, respectively. Again, none of the meta-optimizers use biases.

For the non-linear model experiments, we learn one KFC optimizer per layer of the model. Each KFC optimizer is based on the same architecture, but learns its own set of parameters. Moreover, we separate the processing of the magnitude and direction of the model's gradient as follows: the gradient is initially scaled so as to have unit norm before being fed to the KFC network. Once the normalized gradient has been processed, it is rescaled by the initial normalizing constant times a learnable factor. (This learnable factor is initialized to 1.) We found this architecture to help in situations where the model is allowed several adaptation steps. For convolutional models, we flatten the height and width weight dimensions such that the optimizer is shared across filters of the same layer (i.e., if the convolutional layer has shape NxCxHxW, we have m = C, n = HW). Note that, again, none of the meta-optimizers use a bias term.

Omniglot. The dataset replicates the processing of the linear model experiments, detailed above. Instead of 5 ways and 1 shot, we use 10 ways and 5 shots. All models are allowed 4 steps of fast adaptation. The 4-layer CNN network (CNN w/ MAML) uses a meta-learning rate of 0.003 and an adaptation learning rate of 0.5. Its architecture replicates the standard one. The 2-layer CNN (SCNN w/ MAML) uses the same first two layers and the same input dimensionality to the FC layer as the 4-layer CNN, with meta- and adaptation learning rates set to 0.003 and 0.8, respectively. The 4-layer CNN has a total of 112,586 parameters, while the 2-layer has only 38,474. We used the same learning rates for the Meta-SGD, Meta-Curvature, and KFC optimizers: a meta-learning rate of 0.003 and an adaptation learning rate of 0.5. The KFC architecture consists of 4 layers, such that the meta-optimizers contain a total of 134,171 parameters for the 2-layer CNN model and 267,451 parameters for the 4-layer CNN.

We obtained the published splits and exactly reproduced their preprocessing setting for our experiments on CIFAR-FS. We also split the 100 classes into 64 training, 16 validation, and 20 testing classes, and generate 20,000 training tasks and 600 validation and testing tasks. We consider the 10-way, 3-shot setup with 2 fast-adaptation steps. As opposed to prior work, our model closely resembles the one of our Omniglot experiments. Its main difference with the model described in prior work is the absence of max-pooling layers, and the averaging of the last two convolutional feature dimensions before the fully-connected layer. Doing so allows us to conduct our ablation studies on the effect of our meta-optimizers with different numbers of convolutional layers, while keeping the input dimensionality of the fully-connected layer constant.
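The magnitude/direction separation described above for KFC is easy to misread, so here is a sketch of one KFC forward pass in NumPy. This is our own illustrative rendering: the helper name, the square per-layer factors (L_i, R_i), and the placement of the ReLUs are assumptions consistent with the text.

    import numpy as np

    def kfc_update(G, layers, scale):
        # Normalize the gradient direction before feeding it to the KFC network,
        # then rescale the output by the original norm times a learnable factor.
        norm = np.linalg.norm(G) + 1e-12
        out = G / norm
        for i, (L, R) in enumerate(layers):     # one Kronecker-factored layer (L_i, R_i)
            out = L @ out @ R.T                 # equivalent to (R_i (x) L_i) vec(out)
            if i < len(layers) - 1:
                out = np.maximum(out, 0.0)      # ReLU between layers
        return norm * scale * out               # scale is learnable, initialized to 1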
Continuing the CIFAR-FS setup, the 4- and 2-layer CNNs use meta- and fast-adaptation learning rates of 0.003 and 0.7, respectively. The 4-layer CNN has a total of 29,226 parameters, while the 2-layer has only 10,602. Meta-SGD, Meta-Curvature, and KFC also use a 0.003 meta-learning rate, but the adaptation learning rate is decreased to 0.5. The KFC optimizer consists of 4 layers with ReLU activations, for a total of 69,325 parameters for the 4-layer CNN model and 35,117 parameters for the 2-layer CNN.

mini-ImageNet. As for the Omniglot experiment, we replicate the standard mini-ImageNet setting and model. We consider the 10-way, 1-shot setting with 5 fast-adaptation steps. The 4-layer CNN and 2-layer CNN both use a meta-learning rate of 0.0005 and a fast-adaptation learning rate of 0.07. The 4-layer CNN has a total of 36,906 parameters, while the 2-layer has only 18,282. The Meta-SGD, Meta-Curvature, and KFC meta-optimizers use the same meta-learning rate, but a smaller fast-adaptation learning rate, set to 0.1 for Meta-SGD, 0.09 for Meta-Curvature, and 0.005 for KFC. For the 2-layer CNN model, the 4-layer KFC optimizer consists of approximately 2.60M parameters, while for the 4-layer CNN model it consists of 2.63M parameters.

This section provides additional experimental evidence supporting the claims in the main text.

Synthetic. Figure A.1: Effect of width on adaptability in the synthetic binary classification setting. We vary the width and depth of linear networks and report their post-adaptation accuracy. As long as the model has one or more hidden layers, the model is able to adapt to the tasks regardless of the layers' width.

In Section 3, we argue that depth is a crucial ingredient to enable gradient-based meta-learning. In particular, we show that even on simple, linearly separable binary classification tasks, a logistic regression model (i.e., with no hidden layer) fails to solve the meta-learning problem. Using the same toy experiment, we now argue that width plays a relatively less important role with respect to adaptability. In Figure A.2, we plot the post-adaptation accuracy for different numbers of hidden layers, and vary their width w = 2, 4, ..., 256. As can be seen, all models are able to eventually solve the meta-learning problem regardless of width, so long as the model has one or more hidden layers.

A.2.2 DO LAYERS C1-C3 BEHAVE SIMILARLY TO FC? We observe that the modelling layers (C1-C3) are insensitive to parameter scaling regardless of whether it is applied pre- or post-adaptation, indicating they serve modelling purposes. On the other hand, parameter scaling on the adaptation layers (C4 & FC) quickly degrades post-adaptation accuracy if applied pre-adaptation, but has virtually no effect if applied post-adaptation. In turn, this underlines the importance of these layers for adaptation. In Section 3, we claim that early layers in deep meta-trained models are purposed for modelling, while later layers are responsible for fast-adaptation. This conclusion is reached through the arguments that (a) layers C1-C3 can be completely omitted from the adaptation dynamics, and (b) perturbing the last FC layer results in large post-adaptation accuracy degradation. We now present further evidence that both layer groups (i.e., C1-C3 and C4-FC) indeed serve different purposes. For each layer of a meta-trained model, we multiply the weights of the selected layer by a constant factor either pre- or post-adaptation.
Intuitively, layers that affect the fast-adaptation update will incur a large post-adaptation accuracy penalty as the scaling magnitude increases, due to the value of their weights playing an important part in the computation of the adaptation gradient. On the other hand, a constant scaling of the weights should have little effect on the modelling ability of the network, due to the relative magnitude of activations being maintained. (Note that this intuition is somewhat blurred for our models, as they make use of batch normalization.) Figure A.2 reports post-adaptation accuracy for those experiments. As predicted, scaling the weights of C1-C3 has little impact on accuracy, whether applied pre- or post-adaptation. Clearly, those layers do not affect the computation of the fast-adaptation update. Similarly, scaling the weights of C4-FC post-adaptation does not impact the modelling ability of the network, and post-adaptation accuracy does not suffer. However, the same rescalings prove to be disastrous when applied pre-adaptation; the model is unable to adapt, and accuracy drops by as much as 85% for Omniglot and 40% for CIFAR-FS. Evidently, those layers are central to successful fast-adaptation.

In this section, we detail additional methods and apply them to the non-linear models of Section 5.

Cholesky-factored Meta-Optimizers. As alluded to in Section 4, the choice of the Kronecker decomposition is arbitrary. We now describe an alternative implementation of meta-optimizers, based on a low-rank Cholesky decomposition. The rank-r Cholesky decomposition consists of a matrix L ∈ R^{n×r}, such that a rank-r symmetric positive semi-definite matrix A ∈ R^{n×n} is expressed as A = LL^⊤. Using this decomposition, we can write the linear transformation of Section 4 as U_ξ(g) = L(L^⊤g) + b, where b ∈ R^{n×1} is the bias term and g ∈ R^{n×1} is the vectorization of a weight's gradient. As for the Kronecker product, the latter expression of the linear transformation does not require constructing a large n × n matrix. Additionally, multiple linear transformations can be chained or interleaved with activation functions in order to obtain more expressive and adaptable meta-optimizers. A major attribute of the Cholesky decomposition is its flexibility in terms of memory requirements. For example, by letting r = 1 we force the meta-optimizer to have as many weights as the model. This attribute can in turn be a disadvantage: as we show in our experiments below, r plays an important role in the expressivity of the meta-optimizer, and should be treated as an additional hyper-parameter to tune. A second inconvenience stems from the choice of initialization scheme of the meta-optimizer; for the Kronecker product, we initialize the Kronecker factors to the identity, thus recovering hypergradient descent in the first meta-training batch. However, since the identity matrix I_n does not lie in the span of low-rank Cholesky-factorizable matrices, the initialization scheme becomes another decision left to the practitioner. For our subsequent experiments, we find that letting L follow a Glorot initialization (i.e., L_ij ∼ N(0, gain · 1/n)) works well. We compare the low-rank Cholesky-factored meta-optimizers to the Kronecker ones in Table A1. We denote by CFC1 the rank-1 Cholesky-factored fully-connected network, while CFC10 indicates the rank-10 version. CFC10 contains approximately as many parameters as KFC (c.f. Table ??). While models trained with CFC do not outperform their KFC counterparts, they offer a competitive alternative to existing methods.
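The Cholesky-factored update is equally compact in code. The sketch below is our own illustration: it applies U_ξ(g) = L(L^⊤g) + b without ever forming the n × n matrix, and uses a Glorot-style initialization with the gain assumed to be 1.

    import numpy as np

    def cholesky_update(g, L, b):
        # U_xi(g) = L @ (L^T @ g) + b: O(n * r) time and memory;
        # L @ L^T is symmetric positive semi-definite by construction.
        return L @ (L.T @ g) + b

    n, r = 1000, 10
    L = np.random.randn(n, r) * np.sqrt(1.0 / n)   # Glorot-style init (gain assumed 1)
    b = np.zeros(n)
    g = np.random.randn(n)
    u = cholesky_update(g, L, b)                   # candidate parameter update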
We study the effect of post-adaptation pruning of a meta-trained model. Ideally, pruning provides an attractive alternative to explicitly separating adaptation from modelling for learning lean models: practitioners could meta-train a larger model, adapt it to the desired target task, and finally prune it. We implement this pipeline by pruning weights smaller than a predefined threshold and report results in Figure A.3. As can be observed, pruning the 4-layer CNN models to obtain a number of parameters equivalent to a 2-layer SCNN can be catastrophic. On Omniglot, the pruned model reaches 70.06% post-adaptation accuracy, more than 20% lower than using meta-optimizers. On CIFAR-FS and mini-ImageNet, pruning can barely outperform chance prediction, reaching on average 10.00% and 12.20%, respectively. These results are also available in Table A1, in the W/ PRUNE column.

The model-optimizer training loop proceeds as follows:

    Sample task τ ∼ p(τ)
    for step t = 1, ..., T do
        Compute loss L_τ(θ_t)
        Compute gradients ∇_{θ_t} L_τ(θ_t) and ∇_{ξ_t} L_τ(θ_t)
        Update the meta-optimizer parameters: ξ_{t+1} = ξ_t − α∇_{ξ_t} L_τ(θ_t)
        Compute the model update U_{ξ_{t+1}}(∇_{θ_t} L_τ(θ_t))
        Update the model parameters: θ_{t+1} = θ_t − U_{ξ_{t+1}}(∇_{θ_t} L_τ(θ_t))
    end for
    Update model and meta-optimizer initializations

See Table A3. All metrics are measured using a single NVIDIA Titan XP.
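For readers who prefer code to pseudo-code, the loop above might be rendered in Python roughly as follows. This is a hedged sketch: task.loss_and_grads and the update function U are hypothetical placeholders, not the paper's actual interfaces.

    def meta_train_step(theta, xi, task, T, alpha, U):
        # One task episode: alternate meta-optimizer and model updates for T steps.
        for t in range(T):
            loss, g_theta, g_xi = task.loss_and_grads(theta, xi)  # hypothetical helper
            xi = xi - alpha * g_xi                 # update meta-optimizer parameters
            theta = theta - U(xi, g_theta)         # model update produced by U_xi
        return theta, xi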
We find that deep models are crucial for MAML to work and propose a method which enables effective meta-learning in smaller models.
1,469
scitldr
Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification. The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress. Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options. We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling. This method decomposes features into a second-level decomposition and discards the first-level subbands to reduce feature dimensions. This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions. Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparably with methods like max, mean, mixed, and stochastic pooling. Along with this, we also provide a manually annotated dataset for benchmarking these pooling methods.

Convolutional Neural Networks (CNNs) have become the standard-bearer in image and object classification BID18. Due to the layer structures conforming to the shape of the inputs, CNNs consistently classify images, objects, videos, etc. at a higher accuracy rate than vector-based deep learning techniques BID18. The strength of this algorithm motivates researchers to constantly evaluate and upgrade foundational concepts to continue growth and progress. The key components of a CNN, the convolutional layer and the pooling layer, consistently undergo modifications and innovations to elevate the accuracy and efficiency of CNNs beyond previous benchmarks. Pooling has roots in predecessors to the CNN such as the Neocognitron, in which manual subsampling by the user occurs BID5, and the Cresceptron, which introduces the first max pooling operation in deep learning BID28. Pooling subsamples the results of the convolutional layers, gradually reducing the spatial dimensions of the data throughout the network. The benefits of this operation are to reduce parameters, increase computational efficiency, and regulate overfitting BID1.

Methods of pooling vary, with the most popular form being max pooling and, secondarily, average pooling BID18 BID13. These forms of pooling are deterministic, efficient, and simple, but have weaknesses hindering the potential for optimal network learning BID13 BID30. Other pooling operations, notably mixed pooling and stochastic pooling, use probabilistic approaches to correct some of the issues of the prior methods BID30 BID31. However, one commonality is that all these pooling operations employ a neighborhood approach to subsampling, reminiscent of nearest neighbor interpolation in image processing. Neighborhood interpolation techniques are fast, simple, and efficient, but introduce artifacts such as edge halos, blurring, and aliasing BID20. Minimizing discontinuities in the data is critical to aiding network regularization and increasing classification accuracy. We propose a wavelet pooling algorithm that uses a second-level wavelet decomposition to subsample features. Our approach forgoes the nearest neighbor interpolation method in favor of an organic, subband method that more accurately represents the feature contents with fewer artifacts. We compare our proposed pooling method to max, mean, mixed, and stochastic pooling to verify its validity and its ability to produce results that are near equal or superior.
We test these methods on benchmark image classification datasets such as the Modified National Institute of Standards and Technology (MNIST) BID12, Canadian Institute for Advanced Research (CIFAR-10) BID11, Street View House Numbers (SVHN) BID17, and Karolinska Directed Emotional Faces (KDEF) datasets. We perform all simulations in MATLAB R2016b. The rest of this paper is organized as follows: Section 2 gives the background, Section 3 describes the proposed methods, Section 4 discusses the experimental results, and Section 5 gives the summary and conclusions.

Pooling is another term for subsampling. In this layer, the dimensions of the output of the convolutional layer are condensed. The dimensionality reduction happens by summarizing a region into one neuron value, and this occurs until all neurons have been affected. The two most popular forms of pooling are max pooling and average pooling BID18 BID13. Max pooling involves taking the maximum value of a region R_{ij} and selecting it for the condensed feature map. Average pooling involves calculating the average value of a region and selecting it for the condensed feature map. The max pooling function is expressed as a_{kij} = max_{(p,q)∈R_{ij}} a_{kpq}, while average pooling is given by a_{kij} = (1/|R_{ij}|) Σ_{(p,q)∈R_{ij}} a_{kpq}, where a_{kij} is the output activation of the k-th feature map at (i,j), a_{kpq} is the input activation at (p,q) within R_{ij}, and |R_{ij}| is the size of the pooling region. An illustration of both of these pooling methods is given in FIG0 BID29.

While max and average pooling are both effective, simple methods, they also have shortcomings. Max pooling, depending on the data, can erase details from an image BID30 BID31. This happens if the main details have less intensity than the insignificant details. In addition, max pooling commonly overfits training data BID30 BID31. Average pooling, depending on the data, can dilute pertinent details from an image. Averaging data with values much lower than the significant details causes this effect BID30 BID31. FIG1 illustrates these shortcomings using the toy image example.

To combat these issues, researchers have created probabilistic pooling methods. Mixed pooling combines max and average pooling by randomly selecting one method over the other during training BID30. There is no set way to perform mixed pooling. This method is applied arbitrarily in three different ways: for all features within a layer, mixed between features within a layer, or mixed between regions for different features within a layer BID13 BID30. Mixed pooling is given by a_{kij} = λ max_{(p,q)∈R_{ij}} a_{kpq} + (1 − λ)(1/|R_{ij}|) Σ_{(p,q)∈R_{ij}} a_{kpq}, where λ is a random value 0 or 1, indicating max or average pooling for a particular region/feature/layer. Another probabilistic pooling method, called stochastic pooling, improves upon max pooling by randomly sampling from neighborhood regions based on the probability values of each activation BID31. These probabilities p for each region are calculated by normalizing the activations within the region: p_i = a_i / Σ_{k∈R_j} a_k. The pooled activation is sampled from a multinomial distribution based on p to pick a location l within the region BID31, giving s_j = a_l, where l ∼ P(p_1, ..., p_{|R_j|}) BID31. In any given region, the activations with the highest probabilities have the higher chance of selection. However, any activation can be chosen. In this example, the stochastic pooling method selects the midrange activation with a probability of 13%.
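To make the neighborhood-based baselines concrete, here is a small NumPy sketch of 2x2 max, average, and stochastic pooling on a single feature map. This is our own illustration, not the paper's MATLAB code; stochastic pooling assumes non-negative activations (e.g., post-ReLU) so that the normalized values form a valid probability distribution.

    import numpy as np

    def pool2x2(a, mode="max"):
        # a: (H, W) feature map with even H and W; returns an (H/2, W/2) map.
        h, w = a.shape
        regions = a.reshape(h // 2, 2, w // 2, 2)
        if mode == "max":
            return regions.max(axis=(1, 3))
        return regions.mean(axis=(1, 3))        # average pooling

    def stochastic_pool_region(region, rng=np.random):
        # p_i = a_i / sum_k a_k, then sample one activation from the region.
        p = region.ravel() / region.sum()
        return rng.choice(region.ravel(), p=p)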
Because stochastic pooling is based on probability rather than being deterministic, it avoids the shortcomings of max and average pooling, while enjoying some of the advantages of max pooling BID31.

The previously highlighted pooling methods use neighborhoods to subsample, which is almost identical to nearest neighbor interpolation. Previous studies explore the possibilities of wavelets in image interpolation versus traditional methods BID4. Our proposed pooling method uses wavelets to reduce the dimensions of the feature maps. We propose using the wavelet transform to minimize artifacts resulting from neighborhood reduction BID20. We postulate that our approach, which discards the first-order subbands, more organically captures the data compression. This organic reduction therefore lessens the creation of jagged edges and other artifacts that may impede correct image classification.

The proposed wavelet pooling scheme pools features by performing a 2nd-order decomposition in the wavelet domain according to the fast wavelet transform (FWT) BID15 BID16 BID23 BID2, which is a more efficient implementation of the two-dimensional discrete wavelet transform (DWT) BID3 BID24 BID21: W_ϕ(j, k) = h_ϕ(−n) ⋆ W_ϕ(j + 1, n) |_{n=2k, k≥0} and W_ψ(j, k) = h_ψ(−n) ⋆ W_ϕ(j + 1, n) |_{n=2k, k≥0}, where ϕ is the approximation function, ψ is the detail function, and W_ϕ, W_ψ are called the approximation and detail coefficients. h_ϕ[−n] and h_ψ[−n] are the time-reversed scaling and wavelet vectors, (n) represents the sample in the vector, while (j) denotes the resolution level. When using the FWT on images, we apply it twice (once on the rows, then again on the columns). By doing this in combination, we obtain our detail subbands (LH, HL, HH) at each decomposition level, and our approximation subband (LL) for the highest decomposition level. After performing the 2nd-order decomposition, we reconstruct the image features, but only using the 2nd-order wavelet subbands. This method pools the image features by a factor of 2 using the inverse FWT (IFWT) BID15 BID16 BID23 BID2, which is based on the inverse DWT (IDWT) BID3 BID24 BID21: W_ϕ(j + 1, k) = h_ϕ(k) ⋆ W_ϕ^{2↑}(j, k) + h_ψ(k) ⋆ W_ψ^{2↑}(j, k), where 2↑ denotes upsampling by a factor of 2. FIG3 gives an illustration of the algorithm for the forward propagation of wavelet pooling.

The proposed wavelet pooling algorithm performs backpropagation by reversing the process of its forward propagation. First, the image feature being backpropagated undergoes a 1st-order wavelet decomposition. After decomposition, the detail coefficient subbands are upsampled by a factor of 2 to create a new 1st-level decomposition. The initial decomposition then becomes the 2nd-level decomposition. Finally, this new 2nd-order wavelet decomposition reconstructs the image feature for further backpropagation using the IDWT. FIG4 details the backpropagation algorithm of wavelet pooling.

All CNN experiments use MatConvNet BID26. All training uses stochastic gradient descent BID0. For our proposed method, the wavelet basis is the Haar wavelet, chosen mainly for its even, square subbands. All experiments are run on a 64-bit operating system, with an Intel Core i7-6800k CPU @ 3.40 GHz and 64.0 GB of RAM. We utilize two GeForce Titan X Pascal GPUs with 12 GB of video memory for all training. All CNN structures except for MNIST use a network loosely based on Zeiler's network BID31. We repeat the experiments with Dropout BID22 and replace Local Response Normalization BID11 with Batch Normalization BID7 for CIFAR-10 and SVHN (Dropout only) to examine how these regularization techniques change the pooling results.
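The forward pass described above - a two-level decomposition followed by reconstruction from only the second-level subbands - can be sketched compactly with the PyWavelets library. This is our own Python rendering (the paper's implementation is in MATLAB), assuming pywt is available:

    import numpy as np
    import pywt

    def wavelet_pool(feature, wavelet="haar"):
        # 2nd-order decomposition; discard the 1st-level detail subbands;
        # reconstruct from the 2nd-level subbands only -> output is half the size.
        cA2, details2, _details1 = pywt.wavedec2(feature, wavelet, level=2)
        return pywt.waverec2([cA2, details2], wavelet)

    x = np.random.rand(32, 32)
    y = wavelet_pool(x)
    assert y.shape == (16, 16)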
To test the effectiveness of each pooling method on each dataset, we pool solely with that method for all pooling layers in that network. All pooling methods use a 2x2 window for an even comparison to the proposed method. Figure 6 gives a selection of images from each of the datasets.

The MNIST network architecture is based on the example MNIST structure from MatConvNet, with batch normalization inserted. All other parameters are the same. The input training data and test data come from the MNIST database of handwritten digits. The full training set of 60,000 images is used, as well as the full testing set of 10,000 images. TAB0 shows our proposed method outperforms all other methods. Given the small number of epochs, max pooling is the only method to start to overfit the data during training. Mixed and stochastic pooling show a rocky trajectory, but do not overfit. Average and wavelet pooling show a smoother descent in learning and error reduction. FIG6 shows the energy of each method per epoch.

For CIFAR-10, we run two sets of experiments with the pooling methods. The first is a regular network structure with no dropout layers. We use this network to observe each pooling method without extra regularization. The second uses dropout and batch normalization, and runs for over 30 more epochs to observe the effects of these changes. FIG7 shows our network structure for the CIFAR-10 experiments. The input training and test data come from the CIFAR-10 dataset. The full training set of 50,000 images is used, as well as the full testing set of 10,000 images. For both cases, with no dropout and with dropout, TAB1 shows our proposed method has the second highest accuracy. Max pooling overfits fairly quickly, while wavelet pooling resists overfitting. The change in learning rate prevents our method from overfitting, and it continues to show a slower propensity for learning. Mixed and stochastic pooling maintain a consistent progression of learning, and their validation sets trend at near identical rates.

For SVHN, we run two sets of experiments with the pooling methods. The first is a regular network structure with no dropout layers. We use this network to observe each pooling method without extra regularization. The second uses dropout to observe the effects of this change. FIG0 shows our network structure for the SVHN experiments. The input training and test data come from the SVHN dataset. For the case with no dropout, we use 55,000 images from the training set. For the case with dropout, we use the full training set of 73,257 images, a validation set of 30,000 images we extract from the extra training set of 531,131 images, as well as the full testing set of 26,032 images.

FIG0: CNN SVHN Structure Block Diagram

For both cases, with no dropout and with dropout, TAB3 and TAB4 show our proposed method has the second lowest accuracy. Max and wavelet pooling both slightly overfit the data. Our method follows the path of max pooling, but performs slightly better in maintaining some stability. Mixed, stochastic, and average pooling maintain a slow progression of learning, and their validation sets trend at near identical rates. FIG0 shows the energy of each method per epoch.

For KDEF, we run one set of experiments with the pooling methods, which includes dropout. FIG0 shows our network structure for the KDEF experiments. The input training and test data come from the KDEF dataset. This dataset contains 4,900 images of 70 people displaying seven basic emotions (afraid, angry, disgusted, happy, neutral, sad, and surprised) using facial expressions.
They display emotions at five poses (full left and right profiles, half left and right profiles, and straight). This dataset contains a few errors that we fix (missing or corrupted images, uncropped images, etc.). All of the missing images are at angles of -90, -45, 45, or 90 degrees. We fix the missing and corrupt images by mirroring their counterparts in MATLAB and adding them back to the dataset. We manually crop the images that need it to match the dimensions set by the creators (762 x 562). KDEF does not designate a training or test data set. We shuffle the data and separate 3,900 images as training data and 1,000 images as test data. We resize the images to 128x128 because of memory and time constraints. The dropout layers regulate the network and maintain stability in spite of some pooling methods being known to overfit. TAB5 shows our proposed method has the second highest accuracy. Max pooling eventually overfits, while wavelet pooling resists overfitting. Average and mixed pooling resist overfitting, but are unstable for most of the learning. Stochastic pooling maintains a consistent progression of learning. Wavelet pooling also follows a smoother, consistent progression of learning. FIG0 shows the energy of each method per epoch.

TAB5: KDEF Performance of Pooling Methods + Dropout

4.5 COMPUTATIONAL COMPLEXITY

Our construction and implementation of wavelet pooling is not efficient. We present this proposed method as a proof-of-concept, to show its potential and validity, and also to be open to massive improvements. The main area of improvement is computational efficiency. As a proof-of-concept, the code written to implement this method is not at its peak form. Additionally, we did not have the time, space, or resources to optimize the code. We view the accuracy and novelty as a starting point to spawn improvements, both from our own research as well as from other researchers. We calculate efficiency in terms of the mathematical operations (multiplications, additions, logical comparisons, etc.) that each method utilizes to complete its algorithm. For max pooling, we calculate operations based on the worst-case scenarios for each neighborhood in finding the maximum value. For average pooling, we calculate the number of additions and the division for each neighborhood. Mixed pooling is the mean value of both average and max pooling. We calculate operations for stochastic pooling by counting the number of mathematical operations as well as the random selection of the values based on probability (roulette wheel selection). For wavelet pooling, we calculate the number of operations for each subband at each level, in both decomposition and reconstruction. TAB7 shows the number of mathematical operations for one image in forward propagation. This table shows that of all methods, average pooling has the least number of computations, followed by mixed pooling, with max pooling not far behind. Stochastic pooling is the least computationally efficient pooling method out of the neighborhood-based methods. It uses about 3x more mathematical operations than average pooling, the most computationally efficient. However, wavelet pooling is by far the least computationally efficient method, using 54 to 213x more mathematical operations than average pooling. This is partially due to the implementation of the subband coding, which did not implement multidimensional decomposition and reconstruction.
Nonetheless, by implementing our method through good coding practices (vectorization, architecture, etc.), GPUs, and an improved FWT algorithm, this method can prove to be a viable option. There exist a few improvements to the FWT algorithm that utilize multidimensional wavelets BID8 BID27, lifting BID25, and parallelization BID6, as well as other methods that boast of improving the efficiency in speed and memory BID19 BID9 BID10.

We prove wavelet pooling has the potential to equal or eclipse some of the traditional methods currently utilized in CNNs. Our proposed method outperforms all others on the MNIST dataset, outperforms all but one on the CIFAR-10 and KDEF datasets, and performs within respectable ranges of the pooling methods that outdo it on the SVHN dataset. The addition of dropout and batch normalization shows our proposed method's response to network regularization. Like the non-dropout cases, it outperforms all but one in both the CIFAR-10 and KDEF datasets, and performs within respectable ranges of the pooling methods that outdo it on the SVHN dataset. Our results confirm previous studies showing that no one pooling method is superior, but some perform better than others depending on the dataset and network structure BID1 BID13. Furthermore, many networks alternate between different pooling methods to maximize the effectiveness of each method. Future work and improvements in this area could be to vary the wavelet basis to explore which basis performs best for the pooling. Altering the upsampling and downsampling factors in the decomposition and reconstruction can lead to better image feature reductions outside of the 2x2 scale. Retention of the subbands we discard for the backpropagation could lead to higher accuracies and fewer errors. Improving the method of FWT we use could greatly increase computational efficiency. Finally, analyzing the structural similarity (SSIM) of wavelet pooling versus other methods could further prove the viability of using our approach.
Pooling is achieved using wavelets instead of traditional neighborhood approaches (max, average, etc.).
1,470
scitldr
Dynamic ridesharing services (DRS) play a major role in improving the efficiency of urban transportation. User satisfaction in dynamic ridesharing is determined by multiple factors such as travel time, cost, and social compatibility with co-passengers. Existing DRS optimize profit by maximizing the operational value for service providers or minimize travel time for users, but they neglect the social experience of riders, which significantly influences the total value of the service to users. We propose DROPS, a dynamic ridesharing framework that factors the riders' social preferences into the matching process so as to improve the quality of the trips formed. Scheduling trips for users is a multi-objective optimization that aims to maximize the operational value for the service provider, while simultaneously maximizing the value of the trip for the users. The user value is estimated based on compatibility between co-passengers and the ride time. We then present a real-time matching algorithm for trip formation. Finally, we evaluate our approach empirically using real-world taxi trips data and a population model including social preferences based on user surveys. The results demonstrate improvement in riders' social compatibility, without significantly affecting the vehicle miles for the service provider and travel time for users.

Dynamic ridesharing services, such as UberPool and LyftLine, are becoming an increasingly popular means of commute, especially in large cities BID6 BID2. Dynamic ridesharing is characterized by matching multiple requests that arrive in real-time, for a one-way and one-time trip. We consider a setting in which a service provider operates a vehicle fleet and schedules cars to pick up and drop off passengers in response to a stream of requests, which includes matching requests with each other. There are two important factors that explain the growing attractiveness of DRS for customers: (i) cost effectiveness and (ii) ease of finding a ride in large cities where it is comparatively hard to find a taxi otherwise. For a service provider, dynamic ridesharing helps serve customers with possibly fewer vehicles, thus reducing their operational cost. A common objective for optimizing riders' satisfaction in existing ridesharing systems is to minimize travel time BID14 BID0 BID2. In practice, however, there are many other factors that affect user satisfaction in dynamic ridesharing, apart from travel time. Since a user could be traveling with strangers in the ride, their compatibility plays a major role in the user's satisfaction. In fact, there is growing evidence that the desire for personal space and security when riding with strangers poses a major barrier to using ridesharing for many users (Tao and Wu 2008; BID0). For example, a female passenger may prefer to ride only with female co-passengers. The user may have a different set of preferences depending on the time of day and the location - preferences are trip-specific and not necessarily user-specific. Consider a scenario with three requests where r_1 and r_2 are male passengers and r_3 is a female passenger. Let these requests arrive at the same time (FIG0), such that optimizing the operational value for the service provider forms a trip with these requests (FIG0(a)).
However, this may violate the users' social preferences, and the trip may need to be altered to satisfy the preferences, such as the following:

• If the passengers prefer riding with co-passengers of the same gender but are indifferent to riding with co-passengers of a different gender, then it is desirable to minimize their ride time overlap in the vehicle by altering the pick up and drop off order (FIG0(b)).

If the service does not provide a mechanism to express such social preferences and forms trips that violate these preferences (as in FIG0(a)), the customers may not use the service. Current DRS, however, do not account for social preferences in their optimization, despite these being indicated as a major concern for users in several surveys BID0 BID15 BID10 BID21. We present DROPS (Dynamic Ridesharing Optimization using social PreferenceS), a dynamic ridesharing framework that facilitates incorporating the social preferences of the users in the trip formation process. A weight vector over preferences indicates the importance of each factor in determining the trip value to the user. The goal is to form trips that optimize both the operational value for the service provider and the value of the trip to the passengers, which incentivizes them to continue using the service and benefits the service provider. The value of a trip to a user is calculated based on their social compatibility with other co-passengers, the ride time, and the ride cost. We solve this bi-objective optimization problem using scalarization BID18, which solves a linear combination of the multiple objectives. The relative importance of each objective can be controlled using the weight vector for the objectives. Given a set of riders, we evaluate their potential shared trip using an optimal trajectory planning algorithm. Candidate trips are formed using our real-time greedy algorithm that adds customers to a trip only if the trip's value is above a certain threshold. We consider three basic social factors - age, gender, and user rating - along with a time preference indicating if the user is in a rush. The viability of factoring social preferences into the trip scheduling process is evaluated empirically. The experiments examine the impact of matching with social preferences (social matching) on users and the service provider. We test our approach on a real-world taxi trips dataset and compare the results with those of three baselines, each focusing on optimizing different components of the objective for trip formation. The population model and preferences used in our experiments were acquired using a web-based user survey, which was conducted in two phases and had 489 responses. The survey was conducted specifically to determine how different potential riders evaluate social ridesharing. Our results show that incorporating the social preferences of users in the trip formation improves the overall user satisfaction, without significantly affecting the operational cost for the service provider. Our primary contributions are: (i) presenting DROPS, a system for dynamic ridesharing with social preferences; (ii) proposing a real-time greedy algorithm for trip formation; and (iii) extensive empirical evaluation showing the benefits of social matching in dynamic ridesharing using real-world taxi data and a population model based on user surveys.

Dynamic ridesharing has gained popularity since the early 2000's due to the cost benefits it offers to the users and service providers, apart from its contributions to a sustainable environment resulting from efficient vehicle usage.
Dynamic ridesharing is characterized by user requests that arrive in real-time and are matched with vehicles BID13. Another popular ridesharing setting is car-pooling, where users travel together for a particular purpose and the trips are usually recurring BID6. Our work differs from car-pooling as we focus on a dynamic ridesharing setting with a service provider who operates the vehicle fleet instead of individual car owners, and trips that are typically non-recurring. Optimizing dynamic ridesharing services has been an active research area, attracting researchers from diverse fields such as operations research, transportation, and artificial intelligence BID0 BID6 BID8 BID1. Existing literature on dynamic ridesharing can be classified broadly based on the objective function and the solution method employed. Optimization-based approaches are the most commonly employed solution technique BID19 BID14 BID8 BID5 AlonsoMora et al. 2017; BID9 BID3. Other approaches include partition-based methods BID17, auction-based mechanisms BID7, and genetic algorithms BID11. Researchers have employed these techniques largely to optimize the routing or travel time BID10 BID0 BID11 BID17 BID19 BID5. Specifically, the commonly used objectives for determining ridesharing matches are: (i) minimizing system-wide vehicle-miles; (ii) minimizing system-wide travel time; and (iii) maximizing the number of participants. A critical missing component of these objectives is the in-ride user experience. Numerous studies have outlined the need for learning and understanding user preferences in the context of ridesharing, beyond simple factors like time windows BID6 BID0 BID22. Multiple surveys have acknowledged that it is essential to account for users' social preferences to improve dynamic ridesharing BID0 BID8 BID10 BID20 BID6 BID17 Tao and Wu 2008; BID16 BID21 BID4. To address this discrepancy, we present a dynamic ridesharing framework that allows for representing and satisfying the social preferences of the users in trip formation. The DROPS framework facilitates customizing rides to improve user compatibility by incorporating the social preferences of users. Let R_t denote the finite set of unserved (non-dispatched) requests at time t and V_t denote the finite set of available vehicles at time t. Each request r ∈ R_t is denoted by the tuple ⟨s, e, i, p, U⟩. Each vehicle v ∈ V_t is denoted by the tuple ⟨ID, ω⟩. Refer to Table 1 for the definitions of the variables and constants employed in the formulation. We consider social preferences in each request that correspond to three social factors: age, gender, and rating of users. Additionally, we consider a time preference to indicate if the user is in a rush. We identified these factors based on the results of our user surveys, conducted specifically to determine user expectations in ridesharing services. The preferences (p) are denoted as +1, −1, or 0, indicating the user's desirability, undesirability, or indifference about a certain value of a factor. For example, a preference of +1 for rating ≥ 4 denotes that the person prefers riding with co-passengers who have a minimum rating of 4, and a preference of −1 for rating ≤ 3 denotes that the person wishes to avoid riding with co-passengers who have a rating of 3 or below. That is, if rating on a scale of 1 to 5 is treated as a vector, then these preferences are denoted as ⟨−1, −1, −1, +1, +1⟩. The weights w = [w_t, w_a, w_g, w_s]^T correspond to the time, age, gender, and rating preferences, respectively.
A solution to an instance of this problem is a set of trips Λ, where each trip λ ∈ Λ is a matching of requests to a vehicle and is denoted by the tuple ⟨R, v, τ⟩. The value of a trip is denoted by V(λ). The objective is to maximize the cumulative value of all trips dispatched in a given horizon H: $\max_{\Lambda} \sum_{\lambda \in \Lambda} V(\lambda)$. Multi-objective formulation: Since the goal is to schedule trips that maximize the operational value for the service provider as well as the overall user value, this is naturally a bi-objective optimization. To solve it, we employ scalarization BID18, which projects a multi-objective value to a single scalar value by parameterizing the objectives using a weight vector. The weight value for each objective indicates its relative importance, thus resulting in a single objective function for optimization. Let β_o denote the weight corresponding to the operational value and β_u the weight corresponding to the user value. Then, for every trip λ, the trip value is: $V(\lambda) = \beta_o V_o(\lambda) + \beta_u V_u(\lambda)$ (Equation 1), where $V_o(\lambda)$ is the operational value of the trip and $V_u(\lambda)$ is its total user value. The operational value and the user value are measured in dollars ($) and normalized to the same scale before scalarization. The operational value to the service provider depends on the cost of operating the vehicle along the trip's route, $c_{\tau_\lambda}$, and the amount paid by the riders, which is the difference between the amount charged for the trip (x_r) and the discount offered for using ridesharing (d_r). The value of the trip to a user depends on the user utility (α_r) and the discount gained for using ridesharing (d_r). The user utility (α_r) is the difference between the user's social compatibility with all their co-passengers and the extra travel time incurred by using ridesharing. The social compatibility for a request is calculated as the cumulative weighted difference between the preferences p_r and the demographics of each co-passenger. Table 1 (notation, in part): α_r: user r's social utility for the trip; x_r: amount charged to r for the trip; d_r: discount for using ridesharing; i_r: request initiation time; p_r: social and time preferences of r; w_r: user r's weights for the preferences p_r; U_r: user demographics {age, gender, rating}; ID_v: vehicle ID. We now explain the social utility calculation using a simple example. Consider two requests r_1 (female) and r_2 (male) that arrive at the same time and have the same source and destination coordinates, the same age, and the same rating. Since the two riders differ only in gender, each rider's social utility reduces to the gender term: the weight w_g applied to the agreement between that rider's gender preference and the co-passenger's gender. Given a set of requests and vehicles, our solution approach consists of two components: (i) trip formation and (ii) trip dispatch. FIG2 is an illustration of our solution approach. In each decision cycle, the trip formation component matches requests with each other and to vehicles, and the dispatch component decides which trips are dispatched. We restrict the scope of matching in this paper to requests and vehicles that have not been dispatched. That is, we do not consider a vehicle en-route (already driving on the road) in the scheduling process and therefore do not match requests to such vehicles. The route planner calculates the optimal trajectory for picking up and dropping off a given set of requests. In the trip formation phase, requests are matched with other requests and assigned a vehicle to form a trip. The matching is performed using a greedy approach outlined in Algorithm 1 (a condensed sketch appears after the algorithm's description below). The input to the algorithm is the set of requests and a trip value threshold δ that indicates the required minimum improvement in trip value to form trips.
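The sketch below illustrates these value computations on the two-request example. The exact functional forms (how compatibility, extra ride time, and discounts combine into α_r, and the dollar normalization) are assumptions for illustration; only the structure is taken from the text.

```python
# Hedged sketch: user utility alpha_r = social compatibility - extra ride
# time, user value = alpha_r + discount, and V(trip) = beta_o*V_o + beta_u*V_u
# (Equation 1). Functional forms and weights are illustrative assumptions.

def social_compatibility(p_r, w_r, co_demographics):
    """Cumulative weighted score of r's +1/-1/0 preferences against one
    co-passenger, e.g. p_r = {'gender': {'female': +1, 'male': -1}}."""
    return sum(w_r[f] * p_r[f].get(co_demographics[f], 0) for f in p_r)

def user_value(compat, extra_ride_time, discount, time_weight):
    return (compat - time_weight * extra_ride_time) + discount

def trip_value(op_value, user_values, beta_o, beta_u):
    # Both terms are assumed already normalized to the same dollar scale.
    return beta_o * op_value + beta_u * sum(user_values)

# Two-request example from the text: r1 (female) prefers female co-passengers.
p_r1 = {'gender': {'female': +1, 'male': -1}}
w_r1 = {'gender': 0.5}
print(social_compatibility(p_r1, w_r1, {'gender': 'male'}))  # -0.5
```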
The algorithm adds a request to the best trip (the one with maximum improvement) whose value improves by at least δ, provided the trip size does not exceed the maximum capacity of the vehicle (Lines 7-16). Each request is assigned to the best trip that satisfies the threshold improvement. If no such trip is found, then a new trip is created with the request. This ensures that all requests are associated with a trip. The route planner computes trajectories that determine the pick up and drop off order for a given set of requests. All possible trajectories are generated, and the one that maximizes the trip value for the given set of requests is selected as the route τ for the trip. During trip formation, the best route is updated whenever a new request is added to a trip (Lines 8 and 21). The output of this algorithm is the set of all trips formed, Λ_t. Partitioning Requests for Scalability: The computational complexity of the matching algorithm discussed above increases rapidly with the number of requests. To counter this, we exploit the notion of independence among requests. Two requests q and r are said to be independent if serving them together in the same trip is not desirable in terms of trip value. Hence, requests over different days or requests with non-overlapping source-destination pairs are independent. The requests can be partitioned based on their dependence, and matches may be formed within each partition in parallel. Sometimes it is non-trivial to estimate an exact partitioning of requests with respect to routes without forming trips and calculating the best possible route. In such cases, the underlying map may be partitioned into geographic zones, forming trips in each zone independently by considering the requests originating in that zone, as in our experiments. The trips formed in the matching phase are dispatched in the dispatch phase if at least one of the following conditions is satisfied: (i) the trip value is above the predefined dispatch threshold; or (ii) a request in the trip has remained unserved for a certain period of time since its arrival (queue time). The dispatch threshold for trip value and the queue time for the requests are determined by the service provider. For example, all requests that are unserved for five minutes or more since their arrival time may be dispatched irrespective of the trip value, depending on vehicle availability. In our experiments, trips that satisfy the queue time threshold are given a higher priority over trips with lower queue time but higher trip value. This ensures that certain requests do not remain unserved forever due to low trip value. The trips are then dispatched based on the availability of vehicles, V_t. At the end of decision cycle t, all unserved requests (requests in trips that are not dispatched) are added to the request set for the next decision cycle, R_{t+1}. The experiments evaluate the impact of using social preferences in ridesharing, with respect to users and the service provider. We built a realistic simulator of ridesharing using the Chicago taxi trips dataset (https://data.cityofchicago.org/Transportation/TaxiTrips/wrvz-psew) and a population model based on extensive user surveys.
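Here is the condensed sketch of the greedy trip formation (Algorithm 1) referred to above. It assumes a `route_planner(requests)` helper that enumerates pickup/drop-off orders and returns the best (route, value) pair, and a vehicle seat `capacity`; the additive reading of the δ threshold follows the δ = 0 setting used in the experiments.

```python
# Condensed sketch of greedy matching (Algorithm 1), under assumptions:
# route_planner(requests) -> (best_route, trip_value); capacity = seats.
def greedy_matching(requests, delta, capacity, route_planner):
    trips = []  # each trip: dict with 'requests', 'route', 'value'
    for r in requests:
        best, best_gain, best_route, best_value = None, delta, None, None
        for trip in trips:
            if len(trip['requests']) >= capacity:
                continue
            route, value = route_planner(trip['requests'] + [r])
            gain = value - trip['value']
            if gain >= best_gain:  # require improvement of at least delta
                best, best_gain, best_route, best_value = trip, gain, route, value
        if best is not None:       # add r to the best improving trip
            best['requests'].append(r)
            best['route'], best['value'] = best_route, best_value
        else:                      # otherwise open a new trip: no request is dropped
            route, value = route_planner([r])
            trips.append({'requests': [r], 'route': route, 'value': value})
    return trips
```

With δ = 0, as in the experiments, a request joins any existing trip whose value it does not diminish.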
We compare the results obtained using social preferences in dynamic ridesharing matching (SM) with those of three baselines: (B_1) maximizing only the operational value, β_u = 0, β_o = 1; (B_2) maximizing only the user value, β_u = 1, β_o = 0; and (B_3) maximizing the comprehensive trip value in Equation 1 but without considering the users' social preferences corresponding to age, gender, and rating, w_a = 0, w_g = 0, w_s = 0, for the trip formation. Algorithm 1 is used to form trips under each baseline objective. The algorithms and the simulation system were implemented by us on an Intel Xeon 3.10 GHz computer with 16GB of RAM, using a homogeneous vehicle fleet with a seat capacity of 4 for the evaluation. Each decision cycle is 30 seconds in real-time and the horizon H is one day. We assume that the number of vehicles is not bounded, since the benefits of social matching are best illustrated in this case and all techniques are equally affected by vehicle restriction. We set the trip threshold δ to zero for the greedy algorithm; requests are added to the best trips possible as long as the current value of the trip is not diminished. This allows us to examine the benefit of social matching uniformly across zones by using a conservative value. However, in practice this hyperparameter may be tuned to further optimize performance subject to the service provider's objective. The request queue time threshold for dispatch is set to five minutes. The travel time and distances are calculated using straight-line distances between the coordinates and a vehicle speed of 30 miles per hour. While these experiments do not account for actual routes and traffic conditions, these factors are not likely to change the relative merits of each approach or the results of the study. The population model considered in our experiments is based on the results of online surveys conducted in North America. The survey had 489 responses, which indicated that users would like to be matched with people who are similar to them. The demographic information, such as age and gender, for our experiments is drawn from the actual Chicago demographic distributions. The preferences (p) and the weights (w) are based on the survey results. The survey also indicates that some users are unwilling to use ridesharing when social preferences are not taken into account. To reflect this, certain users were marked as reluctant for ridesharing in the absence of social matching, and these users were always dispatched as solo rides when forming trips with the baseline objectives. The Chicago taxi trips data consists of trip-specific information such as the start time and end time of the taxi ride, trip duration, trip fare, and the latitude and longitude coordinates of the pick up and drop off locations, along with the geographic zone corresponding to these locations. A map of Chicago divided into zones is shown in FIG4. We partition the data from each zone into training and testing sets. The weights for scalarization were estimated empirically using the training data (FIG3). In FIG3, the x-axis is the weight for the operational value (β_o) and the y-axis denotes the weight corresponding to the user value (β_u). The weights used for the test sets are β_o = 0.8 and β_u = 0.6 for zones 8 and 28, and β_o = 0.5 and β_u = 0.5 for experiments on zone 56. Our algorithm is evaluated along different metrics on the test set, which uses data from two consecutive weeks in April 2015.
We consider requests originating in zones 8, 28, and 56, whose request densities are high, medium, and low respectively. The average number of requests per day in each of these zones is 20000, 7000, and 1500 respectively. Since the user value and the operational value are often competing metrics, we analyze the quality of trips formed with respect to each of these. We measure the impact on users based on the total user value (Figure 5), the average social utility per minute (Figure 6), and the increase in ride time relative to a solo trip (FIG6). Trips formed by maximizing operational value (B_1) have the least user value across all zones, as expected. Our approach (SM) achieves user value close to that of optimizing for user value alone (B_2), and sometimes better than B_2. This is because, in some cases, the values of the trips formed by optimizing the B_2 objective may not meet the dispatch threshold, in which case the trips are dispatched after five minutes, which eventually reduces the user value. Our approach overcomes this drawback by optimizing for both objectives, providing greater cumulative value for a given trip and enabling it to be dispatched more quickly. The social utility (α_r) per minute measures the average social compatibility of users with their co-passengers. To account for the different ride times of the trips, we measure the average utility per minute, along with the standard error (Figure 6). We observe that SM consistently performs similar to or better than B_2, showing that the user value is improved through better matching, and not merely through the ride time or the discount offered. We also evaluated the increase in ride time of the different techniques, compared to a solo ride (FIG6). The average ride times are in the range of 10-20 minutes for requests originating in the zones of interest. Though the increase in ride time of our technique is around three minutes, note that ridesharing, in general, incurs additional ride time. The increase in ride time of our technique is well within the range that users consider acceptable (at most 5 minutes) according to the survey results. The social compatibility typically offsets the increase in ride time for the users, thus resulting in increased user utility when forming trips using our approach. Impact on the Service Provider: The impact on the service provider is determined based on the operational value and the total miles driven, to give a sense of the degree of variation induced by social matching on the trip routes and the quality of service. As expected, objective B_1 achieves the highest operational value and maximizing the B_2 objective yields the lowest operational value (Figure 8). The operational value achieved by our approach (SM) is closer to that of B_1, with slightly more miles driven (Figure 9) and higher user utility. The total number of trips formed by our approach is also comparable to that of B_1. This shows that our approach improves the quality of trips without significantly affecting the total miles driven or the cost of operating the service for the provider. Since matching is performed every 30 seconds, it is important to ensure that the matching algorithm is fast so that it may be effectively used in real-time. The run time (in seconds) of our matching algorithm is 0.5 on average in the zone with high request density (zone 8), 0.12 in zone 28, and 0.003 in zone 56, demonstrating the scalability of DROPS. We also compared our matching algorithm to a hindsight greedy matching with access to all the requests in a day, including future ones.
The purpose of this experiment is to evaluate the gain in operational value and user value that could be achieved when knowledge of future requests is available. We compare the total operational value obtained using our approach with that of optimizing only for operational value with all requests in a day. Similarly, the total user value obtained with our approach, with requests arriving in real-time, is compared with that of optimizing for user value only and with access to all requests in a day. Trips are formed using the best-fit greedy algorithm (Algorithm 1) for our approach and for the hindsight evaluation. The results, summarized in TAB4, show that our approach achieves at least ∼89% of the operational value and up to ∼84% of the user value compared to the hindsight matching in all three zones, indicating that any prediction method for future requests would yield very limited performance gains in the operational value. However, some improvements in user value could be achieved with knowledge of future requests, by forming trips where the users have a higher social compatibility with co-passengers. Dynamic ridesharing is an increasingly appealing commuter option. However, numerous surveys have indicated that users' concerns, primarily about the social characteristics of co-passengers, pose a major barrier to using ridesharing for a segment of the population. We present the DROPS system for optimizing dynamic ridesharing with social preferences, along with an efficient real-time matching algorithm that can effectively handle high-density zones. Our results demonstrate that factoring social preferences into the matching process helps improve the user value, without significantly affecting the operational value to the service provider. Furthermore, the survey results indicate that services that perform social matching are likely to incentivize more individuals to use the service. We conclude that while social matching is beneficial overall, it is not always guaranteed to result in improved performance. Factoring social preferences into the matching process is most beneficial in zones with a high request density per decision cycle and greater compatibility among ridesharing users. In the future, we aim to examine ways to extend the matching model to consider nearby trips that have already been dispatched and are currently en-route. We will also consider more complex ways to factor the competing objectives using more general multi-objective planning algorithms BID23. Additionally, based on the performance analysis of our approach against hindsight trip formation, we aim to employ a predictive model for future requests to improve the user value. While we anticipate some performance gains, we do not expect the relative benefits of social matching to diminish.
We propose a novel dynamic ridesharing framework to form trips that optimize both the operational value for the service provider and the user value to the passengers by factoring the users' social preferences into the decision-making process.
1,471
scitldr
Deep Neural Networks (DNNs) have recently been shown to be vulnerable to adversarial examples, which are carefully crafted instances that can mislead DNNs into making errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called 'adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for the five attack strategies considered in this paper across three benchmark datasets. Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs. Deep Neural Networks (DNNs) are highly expressive models that have achieved state-of-the-art performance on a wide range of complex problems, such as speech recognition and image classification BID18. However, recent studies have found that DNNs can be compromised by adversarial examples (BID8; BID27). These intentionally-perturbed inputs can induce the network to make incorrect predictions at test time with high confidence, even when the examples are generated using different networks BID24 BID3 BID29. The amount of perturbation required is often small, and (in the case of images) imperceptible to human observers. This undesirable property of deep networks has become a major security concern in real-world applications of DNNs, such as self-driving cars and identity recognition BID5 BID34. In this paper, we aim to further understand adversarial attacks by characterizing the regions within which adversarial examples reside. Each adversarial example can be regarded as being surrounded by a connected region of the domain (the 'adversarial region' or 'adversarial subspace') within which all points subvert the classifier in a similar way. Adversarial regions can be defined not only in the input space, but also with respect to the activation space of different DNN layers. Developing an understanding of the properties of adversarial regions is a key requirement for adversarial defense. Under the assumption that data can be modeled in terms of collections of manifolds, several works have attempted to characterize the properties of adversarial subspaces, but no definitive method yet exists which can reliably discriminate adversarial regions from those in which normal data can be found. Early work argued that adversarial subspaces are low-probability regions (not naturally occurring) that are densely scattered in the high-dimensional representation space of DNNs. However, a linear formulation argues that adversarial subspaces span a contiguous multidimensional space, rather than being scattered randomly in small pockets (BID8).
Later work further emphasizes that adversarial subspaces lie close to (but not on) the data submanifold. Similarly, it has also been found that the boundaries of adversarial subspaces are close to legitimate data points in adversarial directions, and that the higher the number of orthogonal adversarial directions of these subspaces, the more transferable they are to other models (Tramèr et al., 2017). To summarize, with respect to the manifold model of data, the known properties of adversarial subspaces are: they are of low probability, they span a contiguous multidimensional space, they lie off (but close to) the data submanifold, and they have class distributions that differ from that of their closest data submanifold. Among adversarial defense/detection techniques, Kernel Density (KD) estimation has been proposed as a measure to identify adversarial subspaces BID7. BID2 demonstrated the usefulness of KD-based detection, taking advantage of the low probability density generally associated with adversarial subspaces. However, in this paper we will show that kernel density is not effective for the detection of some forms of attack. In addition to kernel density, there are other density-based measures, such as the number of nearest neighbors within a fixed distance, and the mean distance to the k nearest neighbors (k-mean distance). Again, these measures have limitations for the characterization of local adversarial regions. For example, in FIG0 the three density measures fail to differentiate an adversarial example (red star) from a normal example (black cross), as the two examples are locally surrounded by the same number of neighbors, and have the same k-mean distance (KM=0.19) and kernel density (KD=0.92). As an alternative to density measures, FIG0 leads us to consider expansion-based measures of intrinsic dimensionality as a potentially effective method of characterizing adversarial examples. Expansion models of dimensionality assess the local dimensional structure of the data; such models have been successfully employed in a wide range of applications, such as manifold learning, dimension reduction, similarity search and anomaly detection BID0 BID13. Although earlier expansion models characterize intrinsic dimensionality as a property of data sets, the Local Intrinsic Dimensionality (LID) fully generalizes this concept to the local distance distribution from a reference point to its neighbors BID13 BID7: the dimensionality of the local data submanifold in the vicinity of the reference point is revealed by the growth characteristics of the cumulative distribution function. In this paper, we use LID to characterize the intrinsic dimensionality of adversarial regions, and attempt to test how well estimates of LID can be used to distinguish adversarial examples. Note that the main goal of LID is to characterize properties of adversarial examples, rather than to be applied as a pure defense method, which would require stronger assumptions on the current threat model. In FIG0, the estimated LID of the adversarial example (LID ≈ 4.36) is much higher than that of the referenced normal data sample (LID ≈ 1.53), illustrating that the estimated LID can efficiently capture the intrinsic dimensional properties of adversarial regions. In this paper, we aim to study the LID properties of adversarial examples generated using state-of-the-art attack methods. In particular, our contributions are: • We propose LID for the characterization of adversarial regions of deep networks.
We discuss how adversarial perturbation can affect the LID characteristics of an adversarial region, and empirically show that the characteristics of test examples can be estimated effectively using a minibatch of training data. • Our study reveals that the estimated LID of the adversarial examples considered in this paper is significantly higher than that of normal data examples, and that this difference becomes more pronounced in deeper layers of DNNs. • We empirically demonstrate that the LID characteristics of adversarial examples generated using five state-of-the-art attack methods can be easily discriminated from those of normal examples, and provide a baseline classifier with features based on LID estimates that generally outperforms several existing detection measures on five attacks across three benchmark datasets. Though the adversarial examples considered here are not guaranteed to be the strongest achievable with careful parameter tuning, these preliminary results firmly demonstrate the usefulness of LID measurement. • We show that the adversarial regions generated by different attacks share similar dimensional properties, in that the LID characteristics of a simple attack can potentially be used to detect other, more complex attacks. We also show that a naive LID-based detector is robust to the normal, low-confidence Optimization-based attack of BID2. In this section, we briefly review the state of the art in both adversarial attack and adversarial defense. Adversarial Attack: A wide range of approaches have been proposed for the crafting of adversarial examples to compromise the performance of DNNs; here, we mention a selection of such works. The Fast Gradient Method (FGM) BID8 directly perturbs a normal input by a small amount along the gradient direction. The Basic Iterative Method (BIM) is an iterative version of FGM BID19. One variant of BIM stops immediately once misclassification has been achieved with respect to the training set (BIM-a), and another iterates a fixed number of steps (BIM-b). For image sets, the Jacobian-based Saliency Map Attack (JSMA) iteratively selects the two most effective pixels to perturb based on the adversarial saliency map, repeating the process until misclassification is achieved BID30. The Optimization-based attack (Opt), arguably the most effective to date, addresses the problem via an optimization framework BID24 BID3. Adversarial Defense: A number of defense techniques have been introduced, including adversarial training BID8, distillation BID31, gradient masking BID10, and feature squeezing. However, these defenses can generally be evaded by Opt attacks, either wholly or partially BID2 BID11 BID21. Given the inherent challenges of adversarial defense, recent works have instead focused on detecting adversarial examples. These works attempt to discriminate adversarial examples (positive class) from both normal and noisy examples (negative class), based on features extracted from different layers of a DNN. Detection subnetworks based on activations BID25, a cascade detector based on the PCA projection of activations BID23, an augmented neural network detector based on statistical measures, a learning framework that covers unexplored space in vulnerable models BID32 BID33, a logistic regression detector based on KD, and Bayesian Uncertainty (BU) features BID9 are a few such works. However, a recent study by BID2 has shown that these detection methods can be vulnerable to attack as well.
In the theory of intrinsic dimensionality, classical expansion models (such as the expansion dimension and generalized expansion dimension BID16 BID15) measure the rate of growth in the number of data objects encountered as the distance from the reference sample increases. As an intuitive example, in Euclidean space, the volume of an m-dimensional ball grows proportionally to $r^m$ when its size is scaled by a factor of r. From this rate of volume growth with distance, the expansion dimension m can be deduced from the volumes $V_1, V_2$ of two balls of radii $r_1, r_2$ as: $m = \frac{\ln(V_2/V_1)}{\ln(r_2/r_1)}$. By treating probability mass as a proxy for volume, classical expansion models provide a local view of the dimensional structure of the data, as their estimation is restricted to a neighborhood around the sample of interest. Transferring the concept of expansion dimension to the statistical setting of continuous distance distributions leads to the formal definition of LID BID13. Definition 1 (Local Intrinsic Dimensionality). Given a data sample x ∈ X, let R > 0 be a random variable denoting the distance from x to other data samples. If the cumulative distribution function F(r) of R is positive and continuously differentiable at distance r > 0, the LID of x at distance r is given by: $\mathrm{LID}_F(r) \triangleq \lim_{\epsilon \to 0^+} \frac{\ln\left(F((1+\epsilon)\,r)/F(r)\right)}{\ln(1+\epsilon)} = \frac{r \cdot F'(r)}{F(r)},$ whenever the limit exists. F(r) is analogous to the volume V in Equation FORMULA0; however, we note that the underlying distance measure need not be Euclidean. The last equality of Equation FORMULA1 follows by applying L'Hôpital's rule to the limits BID13. The local intrinsic dimension at x is in turn defined as the limit as the radius r tends to zero: $\mathrm{LID}_F = \lim_{r \to 0^+} \mathrm{LID}_F(r)$. $\mathrm{LID}_F$ describes the relative rate at which its cumulative distance function F(r) increases as the distance r increases from 0, and can be estimated using the distances of x to its k nearest neighbors within the sample BID0. In the ideal case where the data in the vicinity of x is distributed uniformly within a submanifold, $\mathrm{LID}_F$ equals the dimension of the submanifold; however, in general these distributions are not ideal, the manifold model of data does not perfectly apply, and $\mathrm{LID}_F$ is not an integer. Nevertheless, the local intrinsic dimensionality does give a rough indication of the dimension of the submanifold containing x that would best fit the data distribution in the vicinity of x. We refer readers to BID13 BID7 for more details concerning the LID model. According to the branch of statistics known as extreme value theory, the smallest k nearest neighbor distances can be regarded as extreme events associated with the lower tail of the underlying distance distribution. Under very reasonable assumptions, the tails of continuous probability distributions converge to the Generalized Pareto Distribution (GPD), a form of power-law distribution BID4. From this, BID0 developed several estimators of LID to heuristically approximate the true underlying distance distribution by a transformed GPD; among these, the Maximum Likelihood Estimator (MLE) exhibited a useful trade-off between statistical efficiency and complexity. Given a reference sample x ∼ P, where P represents the data distribution, the MLE estimator of the LID at x is defined as follows: $\widehat{\mathrm{LID}}(x) = -\left(\frac{1}{k} \sum_{i=1}^{k} \ln \frac{r_i(x)}{r_k(x)}\right)^{-1}.$ Here, $r_i(x)$ denotes the distance between x and its i-th nearest neighbor within a sample of points drawn from P, where $r_k(x)$ is the maximum of the neighbor distances. In practice, the sample set is drawn uniformly from the available training data (omitting x itself), which itself is presumed to have been randomly drawn from P.
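As a concrete reference, the MLE estimator above can be implemented in a few lines of NumPy; this sketch assumes Euclidean distances and a query point not necessarily contained in the batch.

```python
# Minimal NumPy sketch of the MLE estimator of LID:
# LID_hat(x) = -( (1/k) * sum_i log(r_i(x) / r_k(x)) )^(-1)
import numpy as np

def lid_mle(x, batch, k=20):
    dists = np.linalg.norm(batch - x, axis=1)  # distances from x to the batch
    dists = np.sort(dists)
    dists = dists[dists > 0][:k]               # drop x itself if present
    return -1.0 / np.mean(np.log(dists / dists[-1]))
```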
We emphasize that the LID defined in Equation FORMULA2 is a theoretical quantity, and that the LID defined in Equation FORMULA3 is its estimate. In the remainder of this paper, we will use the MLE estimator of Equation FORMULA3 to calculate LID estimates. Our aim is to gain a better understanding of adversarial regions, and thereby derive potential defenses and provide new directions for more efficient attacks. We begin by providing some motivation, with respect to the manifold model of data, as to how adversarial perturbation might affect the LID characteristic of adversarial regions. We then show how a detector can potentially be designed using LID estimates to discriminate between adversarial and normal examples. LID of Adversarial Subspaces: Consider a sample x ∈ X lying within a data submanifold S, where X is a dataset randomly sampled from P consisting only of normal (unperturbed) examples. Adversarial perturbation of x typically results in a new sample x′ whose coordinates differ from those of x by very small amounts. Assuming that x′ is indeed a successful adversarial perturbation of x, the theoretical LID value associated with x is simply the dimension of S, whereas the theoretical LID value associated with x′ is the dimension of the adversarial subspace within which it resides. Recent work in BID1 shows that the magnitude of the perturbation required to make changes in the expected nearest neighbor ranking tends to zero as the LID and the data sample size tend to infinity. Since perturbation schemes generally allow the modification of all data coordinates, they exploit the full degrees of freedom afforded by the representational dimension of the data domain. As pointed out by BID8, x′ is very likely to lie outside S (but very close to S, in a high-dimensional contiguous space). In applications involving high-dimensional data, the representational dimension is typically far larger than the intrinsic dimension of any given data submanifold, which implies that the theoretical LID of x′ is far greater than that of x. In practice, however, the values of LID must be estimated from local data samples. This is typically done by applying an appropriate estimator (such as the MLE estimator shown in Equation FORMULA3) to a k-nearest neighborhood of the test samples, for some appropriate fixed choice of k. Typically, k is chosen large enough for the estimation to stabilize, but not so large that the sample is no longer local to the test sample. If the dimension of S is reasonably low, one can expect the estimation of the LID of x to be reasonably accurate. For the adversarial subspace, the samples appearing in the neighborhood of x′ can be expected to be drawn from more than one manifold. The proximity of x′ to S means that the neighborhood is likely to contain neighbors lying in S; however, if the neighborhood were composed mostly of samples drawn from S, x′ would not likely be an adversarial example. Thus, the neighbors of x′ taken together are likely to span a subspace of intrinsic dimensionality much higher than that of any of these submanifolds considered individually, and the LID estimate computed for x′ can be expected to reveal this. Computing neighborhoods with respect to the entirety of the dataset X can be prohibitively expensive, particularly when the (global) intrinsic dimensionality of X is too high to support efficient indexing.
For this reason, when X is large, the computational cost can be reduced by estimating the LID of an adversarial example x′ from its k-nearest neighbor set within a randomly-selected sample (minibatch) of the dataset X. Since the LID estimation model regards the distances from x′ to the members of X as determined by independently-drawn samples from a distribution P, the estimator can also be applied to the distances induced by any random minibatch, as it too would be drawn independently from the same distribution P. Provided that the minibatch is chosen sufficiently large so as to ensure that the k-nearest neighbor sets remain in the vicinity of x′, estimates of LID computed for x′ within the minibatch would resemble those computed within the full dataset X. Conversely, as the size of the minibatch is reduced, the variance of the estimates would increase. However, if the gap between the true LID values of x and x′ is sufficiently large, even an extremely small minibatch size and/or small neighborhood size could conceivably produce estimates whose difference is sufficient to reveal the adversarial nature of x′. As we shall show in Section 5.2, discrimination between adversarial and non-adversarial examples turns out to be possible even for minibatch sizes as small as 100, and for neighborhood sizes as small as 20. Using LID to Characterize Adversarial Examples: We next describe how LID estimates can serve as features to train a detector to distinguish adversarial examples. Note that here we only aim to train a baseline classifier to demonstrate how well LID can characterize adversarial examples. Robust detection taking different attack variations into account, such as attack confidence, is left as future work. Our methodology requires that training sets be comprised of three types of examples: adversarial, normal and noisy. This replicates the methodology used in BID7 BID2, where the rationale for including noisy examples is that DNNs are required to be robust to random input noise BID6 and noisy inputs should not be identified as adversarial attacks. A classifier can be trained by using the training data to construct features for each sample, based on its LID within a minibatch of samples across different layers, where the class label is assigned positive for adversarial examples and negative for normal and noisy examples. Algorithm 1 describes how the LID features can be extracted for training an LID-based classifier. Given an initial training dataset and a DNN pre-trained on the initial training dataset, the algorithm outputs a classifier trained using LID features. As in previous studies BID2 BID7, we assume that the initial training dataset is free of adversarial examples; that is, all examples in the dataset are considered 'normal' to begin with. The extraction of LID features first begins with the generation of adversarial and noisy counterparts to normal examples (steps 3 and 4) in each minibatch. One minibatch of normal examples (B_norm) is used for generating two counterpart minibatches of examples: one adversarial (B_adv) and one noisy (B_noisy). The adversarial examples are generated using an adversarial attack on normal examples (step 3), while noisy examples are generated by adding random noise to normal examples, subject to the constraint that the magnitude of perturbation undergone by a noisy example is the same as the magnitude of perturbation undergone by its counterpart adversarial example (step 4).
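A schematic sketch of this per-layer feature extraction is shown below. `layer_activations` is an assumed helper (not from the paper's code) returning the flattened activations of a batch at a given layer; `lid_mle` is the estimator sketched earlier.

```python
# Schematic sketch of per-layer LID feature extraction (Algorithm 1).
# Assumes layer_activations(model, X, layer) -> (n, d_layer) array of
# flattened activations; lid_mle(x, batch, k) as defined above.
import numpy as np

def lid_features(model, examples, normal_batch, layers, k=20):
    """Return an (n_examples, n_layers) matrix of LID estimates; rows from
    B_adv get label 1, rows from B_norm / B_noisy get label 0."""
    feats = np.zeros((len(examples), len(layers)))
    for j, layer in enumerate(layers):
        ref = layer_activations(model, normal_batch, layer)  # reference minibatch
        acts = layer_activations(model, examples, layer)
        for i, a in enumerate(acts):
            feats[i, j] = lid_mle(a, ref, k=k)
    return feats
```

A logistic regression classifier can then be fit on these per-layer features, matching the baseline detector described in the text.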
One minibatch of normal examples is converted to an equal number of adversarial examples after step 3, and an equal number of noisy examples after step 4. The LID associated with each example (whether normal, adversarial or noisy) is estimated from its k nearest neighbors in the normal minibatch (steps 12-14), using the MLE estimator of Equation FORMULA3. For any new unknown test example, a minibatch consisting only of normal training examples is used to estimate LID. For each example and each transformation layer in the DNN, an LID estimate is calculated. The distance function needed for this estimate uses the activation values of the neurons in the given layer as inputs (step 7). As will be discussed in Section 5.2, we use all transformation layers, including conv2d, max-pooling, dropout, ReLU and softmax, since we expect adversarial regions to exist in each layer of the DNN representation space. The LID estimates associated with the example are then used as feature values (one feature for each transformation layer). Finally, a classifier (such as logistic regression) is trained using the LID features. Test examples can then be classified by the LID-based classifier into either the positive (adversarial) or negative (non-adversarial) class by means of their LID-based feature values. In this section, we evaluate the discrimination power of LID-based characterization against five adversarial attack strategies: FGM, BIM-a, BIM-b, JSMA, and Opt, as introduced in Section 2. These attack strategies were selected for our experiments due to their reported effectiveness and their diversity. For each of the 5 forms of attack, the LID detector is compared with the state-of-the-art detection measures KD and BU discussed in Section 2, with respect to three benchmark image datasets: MNIST, CIFAR-10 (BID17) and SVHN BID26. Each of these three datasets is associated with a designated training set and test set. Before reporting and discussing the results, we first describe the experimental setup. Training and Testing: For each of the three image datasets, a DNN classifier was independently pretrained on its designated training set (the pre-train set), and its designated test set was used for testing (the pre-test set). Any pre-test images not correctly classified were discarded, and the remaining images were subdivided into train (80%) and test (20%) sets for subsequent processing. Both of these sets were randomly partitioned into minibatches of size 100, for later use in the computation of LID characteristics. The LID-, KD- and BU-based detectors were trained separately on the train set using the scheme in Algorithm 1, with the calculation of LID estimates replaced by KD and BU calculations for their respective detectors. All three detectors were then evaluated against equal numbers of normal, noisy and adversarial images crafted from members of the test set, as described in Steps 2-4 of Algorithm 1. The LID, KD and BU characteristics of those test images were then generated as shown in Steps 1-19 of Algorithm 1. It should be noted that no images of the test set were examined during any of the training processes, so as to avoid cross contamination. The adversarial examples for both training and testing were generated by applying one of the five selected attacks.
Following the procedure outlined in BID7, the noisy examples for the JSMA attack were crafted by changing the values of a randomly-selected set of pixels to either their minimum or maximum (determined randomly), where the number of pixels to be adjusted was chosen to be equal to the number of pixels perturbed in the generation of adversarial examples. For the other attack strategies, Gaussian noise of the same L_2 magnitude as the adversarial perturbation was added to the pixel values instead of setting them to their minimum or maximum. As suggested by BID7; BID2, we used a logistic regression classifier as the detector, and report its AUC score as the metric for performance. The pretrained DNN used for MNIST was a 5-layer ConvNet with max-pooling and dropout. It achieved 99.29% classification accuracy on (normal) pre-test images. For CIFAR-10, a 12-layer ConvNet with max-pooling and dropout was used. This model reported an accuracy of 84.56% on (normal) pre-test images. For SVHN, we trained a 6-layer ConvNet with max-pooling and dropout. It achieved 92.18% accuracy on (normal) pre-test images. We deliberately did not tune the DNNs, as their performance was close to the state-of-the-art and could thus be considered sufficient for use in an adversarial study BID7. Parameter Tuning: We tuned the bandwidth (σ) parameter for KD, and the number of nearest neighbors (k) for LID, using nested cross validation within the training set (train). Using the AUC values of detection performance, the bandwidth was tuned using a grid search in log-space, and the neighborhood size was tuned using a grid search with respect to a minibatch of size 100. For a given dataset, the parameter setting selected was the one with the highest AUC averaged across all attacks. The optimal bandwidths chosen for MNIST, CIFAR-10 and SVHN were 3.79, 0.26, and 1.0, respectively, while the value of k for LID estimation was set to 20 for MNIST and CIFAR-10, and 30 for SVHN. For BU, we chose the number of prediction runs to be T = 50 in all experiments. We did not tune this parameter, as it is not considered to be sensitive for choices of T greater than 20 (BID2). Our implementation is based on the detection framework of BID7. For the FGM, JSMA, BIM-a, and BIM-b attack strategies, we used the cleverhans library BID28, and for the Opt attack strategy, we used the authors' implementation BID3. We scaled all image feature values to a fixed interval. Our code is available for download at https://github.com/xingjunm/lid_adversarial_subspace_detection. We provide empirical results showing the LID characteristics of adversarial examples generated by Opt, the most effective of the known attack strategies. The left subfigure in FIG1 shows the LID scores (at the softmax layer) of 100 randomly selected normal, noisy and adversarial (Opt) examples from the CIFAR-10 dataset. We observe that at this layer, the LID scores of adversarial examples are significantly higher than those of normal or noisy examples. This supports our expectation that adversarial regions have higher intrinsic dimensionality than normal data regions (as discussed in Section 4). It also suggests that the transition from normal example to adversarial example may follow directions in which the complexity of the local data submanifold significantly increases, leading to an increase in estimated LID values. In the right subfigure of FIG1, we further show that the LID scores of adversarial examples are more easily discriminated from those of other examples at deeper layers of the network.
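The noisy-counterpart construction just described can be sketched as follows; pixel values are assumed scaled to [0, 1], and the function names are illustrative.

```python
# Sketch of crafting noisy counterparts whose perturbation magnitude matches
# the adversarial example's (assumed pixel range [0, 1]; names illustrative).
import numpy as np

def gaussian_noisy(x, x_adv):
    """L2-matched Gaussian noise, for FGM/BIM/Opt counterparts."""
    eps = np.linalg.norm(x_adv - x)
    n = np.random.randn(*x.shape)
    return np.clip(x + eps * n / np.linalg.norm(n), 0.0, 1.0)

def jsma_noisy(x, x_adv):
    """Set as many randomly chosen pixels to min/max as JSMA perturbed."""
    n_pixels = int(np.sum(x_adv != x))
    flat = x.copy().ravel()
    idx = np.random.choice(flat.size, n_pixels, replace=False)
    flat[idx] = np.random.randint(0, 2, n_pixels).astype(flat.dtype)
    return flat.reshape(x.shape)
```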
The 12-layer ConvNet used for CIFAR-10 consists of 26 transformation layers: the input layer (L_0), conv2d/max-pooling (L_1 to L_17), dense/dropout (L_18 to L_24) and the final softmax layer (L_25). The estimated LID characteristics of adversarial examples become distinguishable (detection AUC > 0.5) at the dense layers (L_18 to L_24), and significantly different at the softmax layer (L_25). This suggests that the fully-connected and softmax transformations may be more sensitive to adversarial perturbations than convolutional transformations. Plots of LID scores for the MNIST and SVHN datasets can be found in Appendix A.2. With regard to the stability of performance under parameter variation (k for LID, or bandwidth for KD), we can see from FIG2 that LID is more stable than KD, exhibiting less variation in AUC as the parameter varies. From this figure, we also see that KD requires significantly different optimal settings for different types of data. For simpler datasets such as MNIST and SVHN, KD requires quite high bandwidth choices for best performance. LID Outperforms KD and BU: We compare the performance of LID-based detection with that of detectors trained with KD and BU features individually, as well as a detector trained with a combination of KD and BU features (denoted 'KD+BU'). As shown in TAB2, LID outperforms the KD and BU measures (both individually and combined) by large margins on all attack strategies tested, across all datasets tested. For the most effective attack strategy known to date, the Opt attack, the LID-based detector achieved AUC scores of 99.24%, 98.94% and 97.60% on MNIST, CIFAR-10 and SVHN respectively, compared to AUC scores of 95.35%, 93.77% and 90.66% for the detector based on KD and BU. This strong performance suggests that LID is a highly promising characteristic for the discrimination of adversarial examples and regions. We also note that KD was not effective for the FGM, JSMA and BIM-a attack strategies, whereas the BU measure failed to detect most FGM and BIM-b attacks on the MNIST dataset. Generalizability Analysis: It is natural to ask whether samples of one attack strategy may be detected by a model that has been trained on samples of a different attack strategy. We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset. The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then tested on samples of the more complex attacks BIM-a, BIM-b, JSMA and Opt. The training and test datasets are generated in the same way as in our previous experiments, with only the FGM attack applied to the train set, while the other attacks are applied separately to the test set. The test attack data is standardized by scaling so as to fit the training data. The results are shown in TAB3, from which we see that the LID detector trained on FGM can accurately detect the much more complex attacks of the other strategies. The KD and BU characteristics can also achieve good performance on this transfer learning task, but are less consistent than our proposed LID characteristic. The results appear to indicate that the adversarial regions generated by different attack strategies possess similar dimensional properties. It is worth mentioning that the BU detector trained on the FGM attack generalizes poorly to BIM-b adversarial examples (AUC=2.65%).
This may be due to the fact that BIM-b performs a fixed number of perturbations (50 in our setting) that likely extend well beyond the classification boundary. Such perturbed adversarial examples tend to possess Bayesian model uncertainties even lower than those of normal examples under dropout randomization, as dropping out a certain proportion of their representations (50% in our setting) would not lead to high prediction variance. This is consistent with the results reported in BID7: only 4% of BIM-b adversarial examples, in contrast to at least 74.7% of adversarial examples of the other attack strategies, exhibit higher Bayesian uncertainties than normal examples. It is particularly interesting to see that detectors trained on the FGM attack strategy can sometimes achieve better performance when used to identify the other attacks. An extensive study of detection generalizability across all attack strategies is an interesting topic for future work. Effect of Larger Minibatch Sizes in LID Estimation: In the estimation of LID values, a default minibatch size of 100 was used, with a view to ensuring efficiency. Even though experimental analysis has shown that the MLE estimator of LID is not stable on such small samples BID0, this is more than adequately compensated for by the learning process in LID-based detection, as evidenced by the superior performance shown in TAB2. However, it is an interesting question whether the use of larger minibatch sizes could further improve the performance (as measured by AUC) without incurring unreasonably high computational cost. Figure 5 in Appendix A.3 illustrates the effect of using a minibatch size of 1000 for different choices of k. It does indicate that increasing the batch size can improve the detection performance even further. A comprehensive investigation of the tradeoffs among minibatch size, LID estimation accuracy, and detection performance is an interesting direction for future work. Adaptive Attack Against LID Measurement: To further evaluate the robustness of our LID-based detector, we applied an adaptive Opt attack in a white-box setting. Similar to the strategy used in BID2 to attack the KD-based detector, we used an Opt L_2 attack with a modified adversarial objective: $\min_{x'} \; \|x' - x\|_2^2 + \alpha \cdot \big(\ell(x') + \widehat{\mathrm{LID}}(x')\big),$ where $\ell$ is the original Opt attack loss, α is a constant balancing between the amount of perturbation and the adversarial strength, and the LID scores are computed at the pre-softmax layer. We test two different scenarios for detection. In the first scenario, we use LID features as described in Algorithm 1. In the second scenario, we use LID scores only at the pre-softmax layer. Since the Opt attack uses only the pre-softmax activation output to guide the perturbation, the latter scenario allows a fair comparison to be made (BID3). The optimal constant α is determined via an internal binary search for α ∈ [10^{-3}, 10^{6}]. The rationale for the minimization of the LID characteristic in Equation FORMULA8 is that adversarial examples have higher LID characteristics than normal examples, as we have demonstrated in Section 5.2. We applied the adaptive attack to 1000 normal images randomly chosen from the detection test set (test). The deep networks used were the same ConvNet configurations as in our previous experiments. To evaluate attack performance, instead of AUC as measured in the previous sections, we report accuracy, as suggested by BID2.
We see from TAB4 that the adaptive attack in Scenario 2 fails to find any valid adversarial example 100%, 95.7% and 97.2% of the time on MNIST, CIFAR-10 and SVHN respectively. In addition, when trained on all transformation layers (Scenario 1), the LID-based detector still correctly detected the attacks 100% of the time. Based on these results, we can conclude that integrating LID into the adversarial objective (increasing the complexity of the attack) does not make detection more difficult for our method. This is in contrast to the work of BID2, who showed that incorporating kernel density into the objective function makes detection substantially more difficult for the KD method. In this paper, we have addressed the challenge of understanding the properties of adversarial regions, particularly with a view to detecting adversarial examples. We characterized the dimensional properties of adversarial regions via the use of Local Intrinsic Dimensionality (LID), and showed how these could be used as features in an adversarial example detection process. Our empirical results suggest that LID is a highly promising measure for the characterization of adversarial examples, one that can be used to deliver state-of-the-art discrimination performance. From a theoretical perspective, we have provided an initial intuition as to how LID is an effective method for characterizing adversarial attacks, one which complements the recent theoretical analysis showing how increases in LID effectively diminish the amount of perturbation required to move a normal example into an adversarial region (with respect to 1-NN classification) BID1. Further investigation in this direction may lead to new techniques for both adversarial attack and defense. In the learning process, the activation values at each layer of the LID-based detector can be regarded as a transformation of the input to a space in which the LID values have themselves been transformed. A full understanding of LID characteristics should take into account the effect of DNN transformations on these characteristics. This is a challenging question, since it requires a better understanding of the DNN learning processes themselves. One possible avenue for future research may be to model the dimensional characteristics of the DNN itself, and to empirically verify how they influence the robustness of DNNs to adversarial attacks. Another open issue for future research is the empirical investigation of the effect of LID estimation quality on the performance of adversarial detection. As evidenced by the improvement in performance observed when increasing the minibatch size from 100 to 1000 (Figure 5 in Appendix A.3), it stands to reason that improvements in estimator quality or sampling strategies could both be beneficial in practice. FIG3 illustrates the LID characteristics of the most effective attack strategy known to date, Opt, on the MNIST and SVHN datasets. On both datasets, the LID scores of adversarial examples are significantly higher than those of normal or noisy examples. In the right-hand plot, the LID scores of normal examples and their noisy counterparts appear superimposed due to their similarity. Figure 5 shows the discrimination power (detection AUC) of LID characteristics estimated using two different minibatch sizes: the default setting of 100, and a larger size of 1000. The horizontal axis represents different choices of the neighborhood size k, from 10% to 90% of the batch size. We note that the peak AUC is higher for the larger minibatch size.
Figure 5: The detection AUC score of LID estimated using different neighborhood sizes k with a larger minibatch size of 1000. The results are shown for the detection of Opt attacks on the MNIST, CIFAR-10 and SVHN datasets.
We characterize the dimensional properties of adversarial subspaces in the neighborhood of adversarial examples via the use of Local Intrinsic Dimensionality (LID).
1,472
scitldr
In this paper, we design and analyze a new zeroth-order (ZO) stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD. The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms. Our study shows that ZO-signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations. In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that at least achieve $O(\sqrt{d}/\sqrt{T})$ convergence rate. On the application side, we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. Our empirical evaluations on the image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks. Zeroth-order (gradient-free) optimization has attracted an increasing amount of attention for solving machine learning (ML) problems in scenarios where explicit expressions for the gradients are difficult or infeasible to obtain. One recent application of great interest is to generate prediction-evasive adversarial examples, e.g., crafted images with imperceptible perturbations to deceive a well-trained image classifier into misclassification. However, the black-box optimization nature limits the practical design of adversarial examples, where internal configurations and operating mechanisms of public ML systems (e.g., Google Cloud Vision API) are not revealed to practitioners and the only mode of interaction with the system is via submitting inputs and receiving the corresponding predicted outputs BID31 BID27 BID36 BID17 BID3. It was observed in both white-box and black-box settings 1 that simply leveraging the sign information of gradient estimates of an attacking loss can achieve superior empirical performance in generating adversarial examples BID13 BID28 BID16. Spurred by that, this paper proposes a zeroth-order (ZO) sign-based descent algorithm (we call it 'ZO-signSGD') for solving black-box optimization problems, e.g., the design of black-box adversarial examples. The convergence behavior and algorithmic stability of the proposed ZO-signSGD algorithm are carefully studied in both theory and practice. In the first-order setting, a sign-based stochastic gradient descent method, known as signSGD, was analyzed by BID2 BID1. It was shown in BID2 that signSGD not only reduces the per-iteration cost of communicating gradients, but also could yield a faster empirical convergence speed than SGD BID19. That is because although the sign operation compresses the gradient using a single bit, it could mitigate the negative effect of extremely noisy components of gradient noise. Theoretically, signSGD achieves an $O(1/\sqrt{T})$ convergence rate under the condition of a sufficiently large mini-batch size, where $T$ denotes the total number of iterations. The work in BID1 established a connection between signSGD and Adam with a restrictive convex analysis. Prior to BID2 BID1, although signSGD was not formally defined, the fast gradient sign method BID13 to generate white-box adversarial examples actually obeys the algorithmic protocol of signSGD.
The effectiveness of signSGD has been witnessed by robust adversarial training of deep neural networks (DNNs) BID28. Given the advantages of signSGD, one may wonder if it can be generalized for ZO optimization and what the corresponding convergence rate is. In this paper, we answer these questions affirmatively. Contributions: We summarize our key contributions as follows. • We propose a new ZO algorithm, 'ZO-signSGD', and rigorously prove its convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions. • Our established convergence analysis applies to both mini-batch sampling schemes with and without replacement. In particular, the ZO sign-based gradient descent algorithm can be treated as a special case in our proposed ZO-signSGD algorithm. • We carefully study the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose three variants of ZO-signSGD for both centralized and distributed ZO optimization. • We conduct extensive synthetic experiments to thoroughly benchmark the performance of ZO-signSGD and to investigate its parameter sensitivity. We also demonstrate the superior performance of ZO-signSGD for generating adversarial examples from black-box DNNs. Related work: Other types of ZO algorithms have been developed for convex and nonconvex optimization, where the full gradient is approximated via a random or deterministic gradient estimate BID18 BID29 BID11 BID9 BID10 BID34 BID15 BID12 BID22 BID25. Examples include ZO-SGD BID11, ZO stochastic coordinate descent (ZO-SCD) BID22, and ZO stochastic variance reduced gradient descent (ZO-SVRG) BID26 a; BID14. Both ZO-SGD and ZO-SCD can achieve an $O(\sqrt{d}/\sqrt{T})$ convergence rate, and ZO-SVRG can further improve the iteration complexity to $O(d/T)$ but suffers from an increase of function query complexity due to the additional variance reduced step, known as 'gradient blending' BID26, compared to ZO-SGD. The existing work showed that ZO algorithms align with the iteration complexity of their first-order counterparts up to a slowdown effect in terms of a small-degree polynomial of the problem size $d$. In this section, we provide background on signSGD, together with the problem setup of our interest. In particular, we show that the commonly-used methods for generating adversarial attacks fall into the framework of signSGD. Preliminaries on signSGD: Consider a nonconvex finite-sum problem of the form $\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x)$, where $x \in \mathbb{R}^d$ are optimization variables, and $\{f_i\}$ are $n$ individual nonconvex cost functions. The finite-sum form encompasses many ML problems, ranging from generalized linear models to neural networks. If the gradients of $\{f_i\}$ are available, then the problem can be solved by many first-order methods such as SGD, SCD, and signSGD. The method of our interest is signSGD, which, unlike SGD and SCD, takes the sign of the gradient (or its estimate) as the descent direction. It was recently shown in BID2 that signSGD is quite robust to gradient noise and yields fast empirical convergence. Algorithm 1 provides a generic sign-based gradient descent framework that encapsulates different variants of signSGD. In Algorithm 1, GradEstimate(·) signifies a general gradient estimation procedure, which adopts either a stochastic gradient estimate in the first-order setting BID2 or a function-difference-based random gradient estimate in the ZO setting BID29 BID9. We call the ZO variant of signSGD 'ZO-signSGD', which will be elaborated on in Sec. 3.
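Before connecting signSGD to adversarial attacks, here is a concrete NumPy reference for the generic sign-based descent loop of Algorithm 1; grad_estimate is a placeholder that may return either a first-order stochastic gradient (signSGD) or a ZO estimate (ZO-signSGD), and the toy quadratic at the end is our own illustration.

import numpy as np

def sign_sgd(x0, grad_estimate, lr_schedule, T):
    # Generic sign-based gradient descent (sketch of Algorithm 1).
    x = x0.copy()
    for k in range(T):
        g_hat = grad_estimate(x, k)              # first- or zeroth-order
        x = x - lr_schedule(k) * np.sign(g_hat)  # element-wise sign update
    return x

# Toy usage: minimize f(x) = ||x||^2 with exact gradients.
x_T = sign_sgd(np.ones(10), lambda x, k: 2 * x,
               lambda k: 0.1 / np.sqrt(k + 1), T=500)
print(np.linalg.norm(x_T))   # small: the iterates hover near the minimizer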
Adversarial attacks meet signSGD: It is now widely known that ML models (e.g., deep neural networks) are vulnerable to adversarial attacks, which craft inputs (e.g., images) with imperceptible perturbations to cause incorrect classification BID35 BID13 BID20 BID23. The resulting inputs crafted by adversaries are known as adversarial examples. Investigating adversarial examples not only helps to understand the limitation of learning models, but also provides opportunities to improve the models' robustness BID30 BID0 BID28. Algorithm 1: Generic sign-based gradient descent. 1: Input: learning rate {δ_k}, initial value x_0, and number of iterations T. 2: for k = 0, 1, ..., T − 1 do. 3: ĝ_k ← GradEstimate(x_k) # applies to both first- and zeroth-order gradient estimates. 4: sign-gradient update $x_{k+1} = x_k - \delta_k \,\mathrm{sign}(\hat{g}_k)$, where sign(x) takes element-wise signs of x. 5: end for. In what follows, we show that the generation of adversarial examples in BID13 BID20 can be interpreted through signSGD. Let x_0 denote the natural (legitimate) input of an ML model associated with the true label t_0, and x = x_0 + δ be the adversarial example to be designed, where δ are adversarial perturbations. If f(x, t_0) is the training loss of a learning model, then the goal of a (white-box) adversarial attack is to find the minimal perturbation δ that is sufficient to mislead the learning model, namely, to maximize the loss f(x_0 + δ, t_0). Taking the first-order approximation of f(x, t_0) around x_0, we obtain $f(x_0 + \delta, t_0) \approx f(x_0, t_0) + \nabla_x f(x_0, t_0)^T \delta$. By constraining the strength of the perturbation in the ℓ∞ ball of small radius ε (i.e., ‖δ‖∞ ≤ ε), the linear approximation of f(x, t_0) is then maximized at δ = ε · sign(∇_x f(x_0, t_0)) BID33. Therefore, the generation of adversarial examples proposed in BID13 obeys the sign-gradient update rule, $x = x_0 + \epsilon \cdot \mathrm{sign}(\nabla_x f(x_0, t_0))$. Such a connection between adversarial example generation and signSGD also holds in other attacks, e.g., the iterative target attack method BID20. Similarly, a so-called black-box attack BID16 BID3 is associated with our proposed ZO-signSGD algorithm. One limitation of signSGD BID2 is the need for first-order information, i.e., stochastic gradients. However, there exists a large practical demand for solving ML problems where explicit expressions of the gradients are difficult or infeasible to obtain, e.g., the generation of adversarial examples from black-box neural networks as discussed in Sec. 1 and 2. Gradient estimation via ZO oracle: In the ZO setting where the first-order information is unavailable, the gradient estimator at Step 3 of Algorithm 1 has access only to function values of {f_i(x)} given a query point x. Based on that, we construct a ZO gradient estimate through a forward difference of two function values BID29 BID10 BID9. In Algorithm 1, GradEstimate(x) is then specified as $\mathrm{GradEstimate}(x) = \frac{1}{bq}\sum_{i \in I_k}\sum_{j=1}^{q} \hat{\nabla} f_i(x; u_{i,j})$ with $\hat{\nabla} f_i(x; u_{i,j}) = \frac{d}{\mu}\,[f_i(x + \mu u_{i,j}) - f_i(x)]\, u_{i,j}$, where I_k is a size-b mini-batch of indices, {u_{i,j}} are i.i.d. random directions drawn from a uniform distribution over a unit sphere, and ∇̂f_i(x; u_{i,j}) gives a two-point based random gradient estimate with direction u_{i,j} and smoothing parameter μ > 0. We remark that the random direction vectors can also be drawn from the standard Gaussian distribution BID29. However, the uniform distribution could be more useful in practice since it is defined in a bounded space rather than the whole real space required for the Gaussian. We highlight that unlike the first-order stochastic gradient estimate, the ZO gradient estimate is a biased approximation to the true gradient of f.
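The forward-difference estimator above can be sketched as follows, assuming NumPy; f_batch and the other names are our own illustrative interface for querying the individual cost functions f_i, not the authors' code.

import numpy as np

def zo_grad_estimate(f_batch, x, batch_indices, q, mu, rng):
    # Forward-difference ZO gradient estimate averaged over a mini-batch
    # I_k (batch_indices) and q random unit-sphere directions per sample.
    d = x.size
    g = np.zeros(d)
    for i in batch_indices:
        f_x = f_batch(i, x)                      # f_i(x), reused for all q
        for _ in range(q):
            u = rng.normal(size=d)
            u /= np.linalg.norm(u)               # uniform on the unit sphere
            g += (d / mu) * (f_batch(i, x + mu * u) - f_x) * u
    return g / (len(batch_indices) * q)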
Instead, it becomes unbiased with respect to the gradient of the randomized smoothing function f_μ BID8 BID10, $f_\mu(x) = \frac{1}{n}\sum_{i=1}^{n} f_{i,\mu}(x)$ with $f_{i,\mu}(x) = \mathbb{E}_v[f_i(x + \mu v)]$, where f_{i,μ} gives the randomized smoothing version of f_i, and the random variable v follows a uniform distribution over the unit Euclidean ball. Clearly, there exists a gap between a ZO gradient estimate and the true gradient of f, but as will be evident later, such a gap can be measured through the smoothing function f_μ. Motivations of ZO-signSGD: Compared to SGD-type methods, the fast empirical convergence of signSGD and ZO-signSGD has been shown in the application of generating white-box and black-box adversarial examples BID13 BID28 BID16. As mentioned in BID2, the sign operation could mitigate the negative effect of (coordinate-wise) gradient noise of large variance. Recall that the ZO gradient estimate is a biased approximation to the true gradient, and thus could suffer from having larger noise variance than (first-order) stochastic gradients. In this context, one could benefit from ZO-signSGD due to its robustness to gradient noise. In Appendix 1, we provide two concrete examples (FIG1 and Fig. A2) to confirm the aforementioned analysis. In FIG1, we show the robustness of ZO-signSGD against sparse noise perturbation through a toy quadratic optimization problem, originally introduced in BID2 to motivate the fast convergence of signSGD against SGD. In Fig. A2, we show that gradient estimation via a ZO oracle indeed encounters gradient noise of large variance. Thus, taking the sign of a gradient estimate might scale down the extremely noisy components. ZO-signSGD and technical challenges beyond signSGD: Algorithm 1 becomes ZO-signSGD as the ZO gradient estimate is applied. We note that the extension from first order to ZO is nontrivial, as the proposed ZO-signSGD algorithm yields three key differences from signSGD. First, ZO-signSGD has a milder assumption on the choice of mini-batch sampling. Recall that signSGD in BID2 achieves an $O(1/\sqrt{T})$ convergence rate given the condition that the mini-batch size is sufficiently large, b = O(T). However, this condition only becomes true when the mini-batch sample is randomly selected from [n] with replacement, which is unusual when n ≤ T. Here [n] represents the integer set {1, 2, ..., n}. And signSGD fails to cover signGD when b = n, since sampling with replacement does not guarantee I_k = [n] even if b = n. In the proposed ZO-signSGD algorithm, we will relax the assumption on mini-batch sampling. Second, in ZO-signSGD both the ZO gradient estimator and the sign operator give rise to approximation errors to the true gradient. Although the statistical properties of ZO gradient estimates can be acquired with the aid of the randomized smoothing function, the use of mini-batch sampling without replacement introduces extra difficulty in bounding the variance of ZO gradient estimates, since mini-batch samples are no longer independent. Moreover, the sign-based descent algorithm evaluates the convergence error in the ℓ1-norm geometry, leading to a mismatch with the ℓ2-norm based gradient variance. Besides translating the gradient norm from ℓ1 to ℓ2, the probabilistic convergence method BID11 is used to bound the eventual convergence error of ZO-signSGD. Finally, beyond the standard ZO gradient estimator, we will cover multiple variants of ZO-signSGD for centralized or distributed optimization. In this section, we begin by stating assumptions used in our analysis. We then derive the convergence rate of ZO-signSGD for nonconvex optimization.
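Before stating the assumptions, here is a quick Monte Carlo illustration of the smoothing gap between f and f_μ that the analysis relies on; the quadratic f and all parameter values are our own toy choices.

import numpy as np

rng = np.random.default_rng(1)
d, mu = 5, 0.1
f = lambda x: 0.5 * np.sum(x ** 2)          # toy smooth function

def unit_ball_sample():
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    return v * rng.uniform() ** (1.0 / d)   # uniform inside the unit ball

x = np.ones(d)
# Monte Carlo estimate of the smoothed function f_mu(x) = E_v[f(x + mu v)].
f_mu = np.mean([f(x + mu * unit_ball_sample()) for _ in range(20000)])
print(f_mu - f(x))   # a small gap of order mu^2, as exploited in the proofs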
Assumptions on the problem are listed as follows. A1: each f_i has an L-Lipschitz continuous gradient, i.e., $\|\nabla f_i(x) - \nabla f_i(y)\|_2 \le L\|x - y\|_2$ for any x and y. A2: the gradients are bounded, $\|\nabla f_i(x)\|_2 \le \sigma$ for all i. Both A1 and A2 are standard assumptions used in the nonconvex optimization literature BID2 BID32. A1 implies the L-smoothness of f_i, namely, for any x and y we obtain $f_i(y) \le f_i(x) + \nabla f_i(x)^T (y - x) + \frac{L}{2}\|y - x\|_2^2$. A2 implies the bounded variance of ∇f_i in (BID2, Assumption 3), namely, $\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_i(x) - \nabla f(x)\|_2^2 \le 4\sigma^2$, where we have used the fact that ‖∇f(x)‖₂ ≤ σ under A2. Throughout the paper, we assume that the problem is solvable, namely, f(x*) > −∞ where x* is an optimal solution. We recall that Algorithm 1 becomes ZO-signSGD when the ZO gradient estimation step is applied. For nonconvex problems, the convergence of an algorithm is typically measured by stationarity, e.g., using $\|\nabla f(x)\|_2^2$ in SGD BID11 and $\|\nabla f(x)\|_1$ in signSGD BID2. For the latter, the ℓ1 geometry is met when quantifying the stochasticity through the (non-linear) sign operation. Different from signSGD, ZO-signSGD only obtains a biased estimate of the true gradient. In Proposition 1, we bypass such a bias by leveraging the randomized smoothing technique used for ZO optimization BID10 BID29 BID9. Proposition 1: Under A1, the outputs $\{x_k\}_{k=0}^{T-1}$ of ZO-signSGD, i.e., Algorithm 1 with the ZO gradient estimate, satisfy DISPLAYFORM3, where the expectation is taken with respect to all the randomness of ZO-signSGD, f_μ is the randomized smoothing function of f, and ĝ_k = GradEstimate(x_k). Proof: See Appendix 2. In Proposition 1, the rationale behind introducing the smoothing function f_μ is that ∇f_μ(x_k) is the mean of the ZO gradient estimate ĝ_k. And thus, the convergence of ZO-signSGD is now linked with the variance of ĝ_k, $\mathbb{E}[\|\hat{g}_k - \nabla f_\mu(x_k)\|_2^2]$. This crucial relationship presented in Proposition 1 holds for a general class of signSGD-type algorithms that use different ZO gradient estimators. Spurred by this, we next investigate the second-order moment of ĝ_k in Proposition 2. Proposition 2: Under A1 and A2, the variance of the ZO gradient estimate ĝ_k is upper bounded by DISPLAYFORM5, where DISPLAYFORM6. Here α_b and β_b are Boolean variables depending on the choice of mini-batch sampling: DISPLAYFORM7 for mini-batch sampling with replacement and DISPLAYFORM8 otherwise, where I(x > a) is the indicator function of x with respect to the constraint x > a, and I(x > a) = 1 if x > a and 0 otherwise. Proof: See Appendix 3. Compared to the variance bound (σ²/b) of the stochastic gradient estimate of f in signSGD BID2, Proposition 2 provides a general result for the ZO gradient estimate ĝ_k. It is clear that the bound contains two parts, h_1 + h_2, where the former h_1 = O(σ²/b) characterizes the reduced variance (using b mini-batch samples) of the stochastic gradient estimate of the smoothing function f_μ, and the latter h_2 = O(C(d, μ)/(bq)) reveals the dimension-dependent variance induced by the ZO gradient estimate using b mini-batch samples and q random directions. If a stochastic gradient estimate of f is used in signSGD, then h_2 is eliminated and the variance bound reduces to σ²/b. Furthermore, Proposition 2 covers mini-batch sampling with and without replacement, while signSGD only considers the former case. For the latter case, Proposition 2 implies that if b = n (i.e., the full batch is used), the bound simplifies to DISPLAYFORM10, corresponding to α_b = 0 and β_b = 1. In the other extreme case of b = 1, both of the studied mini-batch schemes become identical, corresponding to α_b = 1 and β_b = 0. Proposition 2 also implies that the use of large b and q reduces the variance of the gradient estimate, and will further improve the convergence rate.
With the aid of Propositions 1 and 2, we can then show the convergence rate of ZO-signSGD in terms of stationarity of the original function f. The remaining difficulty is how to bound the gap between f and its smoothed version f_μ. It has been shown in BID10 BID29 that there exists a tight relationship between f_μ and f, given the fact that the former is a convolution of the latter with the density function of a random perturbation v. We demonstrate the convergence rate of ZO-signSGD in Theorem 1. Theorem 1: Under A1 and A2, if we randomly pick x_R from $\{x_k\}_{k=0}^{T-1}$ with probability $P(R = k) = \delta_k / \sum_{k=0}^{T-1}\delta_k$, then the convergence rate of ZO-signSGD is given by DISPLAYFORM12, where f* denotes the minimum value. Proof: See Appendix 4. In Theorem 1, we translate the gradient norm from ℓ1 to ℓ2, and adopt a probabilistic output x_R BID11 BID21 to avoid an exhaustive search over {x_k} for $\min_k \|\nabla f(x_k)\|_2$. Note that the convergence rate of ZO-signSGD relies on the learning rate δ_k, the problem size d, the smoothing parameter μ, the mini-batch size b, and the number of random perturbations q for ZO gradient estimation. We next obtain explicit dependence on these parameters by specifying Theorem 1. If δ_k and μ are chosen as in DISPLAYFORM13, then the convergence in Theorem 1 simplifies to DISPLAYFORM14, where α_b and β_b were defined in FORMULA17, and 1 ≤ (α_b + β_b) ≤ 2. We provide several key insights on the convergence rate of ZO-signSGD through this simplified bound. First, the convergence rate of ZO-signSGD is measured through $\|\nabla f(x_R)\|_2$ rather than its squared counterpart $\|\nabla f(x_R)\|_2^2$, where the latter was used in measuring the convergence of ZO-SGD. We recall from (, Theorem 3.2 & Corollary 3.3) that ZO-SGD yields the convergence rate DISPLAYFORM15 in terms of $\mathbb{E}[\|\nabla f(x_R)\|_2^2]$; since $(\mathbb{E}[\|\nabla f(x_R)\|_2])^2 \le \mathbb{E}[\|\nabla f(x_R)\|_2^2]$, the convergence of ZO-signSGD meets a stricter criterion than that of ZO-SGD. The possible downside of ZO-signSGD is that it suffers an additional error of order O(DISPLAYFORM16) in the worst case. The aforementioned results imply that ZO-signSGD could only converge to a neighborhood of a stationary point, but with a fast convergence speed. Here the size of the neighborhood is controlled by the mini-batch size b and the number of random direction vectors q. Also, our convergence analysis applies to mini-batch sampling both with and without replacement. When b < n, ZO-signSGD achieves an O(DISPLAYFORM17) convergence rate regardless of the choice of mini-batch sampling. When b = n, the use of mini-batch sampling without replacement recovers ZO-signGD, yielding the convergence rate O(DISPLAYFORM18). By contrast, the use of mini-batch sampling with replacement leads to the worse convergence rate O(DISPLAYFORM19). Clearly, when b = n and n < T, ZO-signSGD using mini-batch sampling with replacement fails to achieve the rate O(DISPLAYFORM20) regardless of the choice of q. By contrast, ZO-signSGD using mini-batch sampling without replacement recovers O(DISPLAYFORM21). When b > n, ZO-signSGD is restricted to using mini-batch sampling with replacement. Similar to signSGD BID2, we can obtain a rate of O(DISPLAYFORM22), where the dependence on q is induced by the use of ZO gradient estimation. Here we study three variants of ZO-signSGD, where the gradient will be estimated using a) the central difference of function values, b) the sign of ZO gradient estimates with majority vote, or c) the sign of ZO gradient estimates with majority vote for distributed optimization. That is, DISPLAYFORM0 and DISPLAYFORM1, where {u_{i,j}} and ∇̂f_i(x; u_{i,j}) have been defined in the ZO gradient estimation step.
The gradient estimator in a) uses the central difference of function values; this type of ZO gradient estimator was used in BID34 for bandit convex optimization and in BID16 for designing black-box adversarial attacks. Compared to the forward-difference form, the central difference requires b(q − 1) additional function queries in gradient estimation. At the cost of more function queries, one may wonder if the convergence rate of ZO-signSGD can be further improved. Corollary 1: Suppose that the conditions in Theorem 1 hold; ZO-signSGD with the central-difference gradient estimator yields the same convergence rate as ZO-signSGD with the forward-difference estimator. Proof: Recall that Proposition 1 is independent of the specific form of the gradient estimator, and thus holds for the central-difference estimator. Although Proposition 2 relies on the second-order moments of each gradient estimator, we prove that under A1 and A2, both estimators maintain the same statistical properties. As a result, Proposition 2 and Theorem 1 also hold; see more details in Appendix 5. We next study the majority-vote gradient estimator, whose sign is equivalent to the majority vote (i.e., the element-wise median) of signs of the individual gradient estimates {∇̂f_i(x; u_{i,j})}. It was shown in BID2 that signSGD with majority vote has a better convergence rate under the additional assumption of a unimodal symmetric noise distribution of coordinate-wise gradient estimates. In Corollary 2, we show that such a speed-up in convergence can also be achieved by ZO-signSGD with majority vote, which we refer to as 'ZO-M-signSGD'. Corollary 2: Suppose that the conditions in Theorem 1 hold, and the distribution of gradient noise is unimodal and symmetric. Then, ZO-M-signSGD with δ_k = O(DISPLAYFORM2) achieves an improved convergence rate. Proof: See Appendix 6. We recall from Theorem 1 that under the same parameter setting as Corollary 2, ZO-signSGD yields an $O(\sqrt{d}/\sqrt{T})$ convergence rate in the worst case. It is clear from the corollary that the error correction term of order DISPLAYFORM4 is eliminated in ZO-M-signSGD. Such an improvement in convergence is achieved under the condition of unimodal symmetric gradient noise. We remark that, different from the stochastic gradient noise studied in BID2, the ZO gradient estimation noise could violate this assumption. For example, in a scalar case, if the gradient estimate g follows the distribution where g = 1 with probability 0.9 and g = −10 with probability 0.1, then E[g] < 0 and sign(E[g]) < 0. However, E[sign(g)] > 0. This implies that without the assumption of symmetry, the sign of gradient estimates with majority vote (E[sign(g)]) can be in the opposite direction of the sign of averaged gradients (sign(E[g])). Our results in the next section show that ZO-M-signSGD may not outperform ZO-signSGD. Lastly, we focus on the distributed gradient estimator, whose sign can be interpreted as the majority vote of M distributed agents about the sign of the true gradient BID2. The resulting variant of ZO-signSGD is called 'ZO-D-signSGD', and its convergence rate is illustrated in Corollary 3. Compared to ZO-M-signSGD for centralized optimization, ZO-D-signSGD suffers an extra error correction term O(DISPLAYFORM5) in the distributed setting. It is also worth mentioning that if M = n and q = 1, then the distributed gradient estimator reduces to the standard one with I_k = [n]. In this case, Corollaries 2 and 3 reach a consensus on the O(DISPLAYFORM6) convergence error. Proof: See Appendix 7. In this section, we empirically show the effectiveness of ZO-signSGD, and validate its convergence behavior on both synthetic and real-world datasets such as MNIST and CIFAR-10.
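Before moving to the experiments, the two centralized variants can be sketched in NumPy as follows (illustrative names; in the distributed variant, majority_vote_sign would be applied across the M agents' local estimates):

import numpy as np

def zo_central_diff(f_batch, x, batch_indices, q, mu, rng):
    # Central-difference variant: 2q function queries per mini-batch sample.
    d = x.size
    g = np.zeros(d)
    for i in batch_indices:
        for _ in range(q):
            u = rng.normal(size=d)
            u /= np.linalg.norm(u)
            g += (d / (2 * mu)) * (f_batch(i, x + mu * u)
                                   - f_batch(i, x - mu * u)) * u
    return g / (len(batch_indices) * q)

def majority_vote_sign(estimates):
    # Element-wise majority vote over the signs of individual gradient
    # estimates (rows of `estimates`), as used by ZO-M-signSGD.
    return np.sign(np.sum(np.sign(estimates), axis=0))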
For the synthetic experiment, we study the problem of binary classification in the least-squares formulation. For the real-world application, we design adversarial examples from black-box neural networks as mentioned in Sec. 2. Throughout this section, we compare ZO-signSGD and its variants with SGD, signSGD BID2, ZO-SGD BID11, and ZO-SCD BID22. We consider the least-squares problem with a nonconvex loss function BID25, $f_i(x) = \big(y_i - 1/(1 + e^{-a_i^T x})\big)^2$, which satisfies Assumption A2 by letting $\sigma = \max_i\{2\|a_i\|_2\}$. Here, instead of using the conventional cost function of logistic regression (a convex function), the considered least-squares formulation is introduced to align with our nonconvex theoretical analysis. For generating the synthetic dataset, we randomly draw samples {a_i} from N(0, I), and obtain the label y_i = 1 if $1/(1 + e^{-a_i^T x}) > 0.5$ and 0 otherwise. The number of training samples {a_i, y_i} is set to n = 2000 against 200 testing samples. We find the best constant learning rate for the algorithms via a greedy search over η ∈ [0.001, 0.1] (see Appendix 8.1 for more details), and we choose the smoothing parameter $\mu = 10/\sqrt{Td}$. Unless specified otherwise, let b = q = 10, T = 5000 and d = 100. In FIG1, we report the training loss, the test accuracy, as well as the effects of algorithmic parameters on the convergence of the studied algorithms. We observe from FIG1 (a) and (b) that ZO-signSGD outperforms other ZO algorithms, and signSGD yields the best convergence performance once the first-order information is available. In FIG1 (c) and (d), we observe that the convergence performance of ZO algorithms is improved as b and q increase. In particular, ZO-signSGD and ZO-M-signSGD at b = q = 30 approach the best results provided by signSGD. In FIG1 (e) and (f), the convergence of all algorithms degrades as the problem size d increases. However, ZO-signSGD and ZO-M-signSGD converge faster than ZO-SGD and ZO-SCD. In Fig. 2, we demonstrate the convergence trajectory of different variants of ZO-signSGD for b ∈ {40, 400}. To make a fair comparison between ZO-signSGD and ZO-D-signSGD, we let each of M = 40 agents use a mini-batch of size b/M. As we can see, ZO-signSGD outperforms ZO-M-signSGD and ZO-D-signSGD, and the convergence is improved as the mini-batch size increases. However, we observe that in all examples, ZO-signSGD and its variants converge to moderate accuracy much faster than ZO-SGD, within only a few tens of iterations. Generating black-box adversarial examples: Here we study adversarial robustness by generating adversarial examples from a black-box image classifier trained by a deep neural network (DNN) model; see details on the problem formulation in Appendix 8.2. We recall from Sec. 2 that the task of black-box adversarial attack falls within the category of ZO optimization, as one can only access the input-output relation of the DNN while crafting adversarial examples. The DNN models trained on MNIST and CIFAR-10 BID4 serve as the zeroth-order oracle 2. We select one image from each class of MNIST and CIFAR-10 and separately implement black-box attacks using the same attacking loss function (see Appendix 8.2) but with different ZO optimization algorithms (ZO-SGD, ZO-signSGD and ZO-M-signSGD). We also set the same parameters for each method, i.e., μ = 0.01, q = 9, and δ = 0.05 for MNIST and δ = 0.0005 for CIFAR-10, to accommodate the dimension factor d.
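The synthetic setup can be reproduced along the following lines (a sketch assuming NumPy; the ground-truth vector x_true used to generate the labels is our own assumption about how the labels were drawn):

import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 100
A = rng.normal(size=(n, d))                    # samples a_i ~ N(0, I)
x_true = rng.normal(size=d)
y = (1.0 / (1.0 + np.exp(-A @ x_true)) > 0.5).astype(float)

def loss_i(i, x):
    # Nonconvex least-squares loss f_i(x) = (y_i - 1/(1 + e^{-a_i^T x}))^2,
    # usable as `f_batch` by the ZO estimators sketched earlier.
    p = 1.0 / (1.0 + np.exp(-A[i] @ x))
    return (y[i] - p) ** 2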
Moreover, we benchmark their performance against the natural evolution strategy (NES) based two-point gradient estimator in BID16 for solving the same attacking loss function, where the sign of the gradient estimate is also used in the update; we refer to this method as ZO-NES. Similar to the estimator in FORMULA1, NES computes the ZO gradient estimate using the central difference of two function values. Thus, one iteration of ZO-NES requires 2q function queries, and we set q = 5 to align with the number of function queries used in the other ZO methods. All methods use the same natural image as the initial point for finding adversarial examples. Fig. 3 shows the plots of black-box attacking loss versus iterations (more results are shown in Appendix 8.3). We find that ZO-signSGD usually takes significantly fewer iterations than the other methods to find the first successful adversarial example with a similar attacking loss. For MNIST, the average iteration count over all attacked images in TAB0 to find the first successful adversarial example is 184 for ZO-SGD, 103 for ZO-signSGD, 151 for ZO-M-signSGD, and 227 for ZO-NES. Their corresponding average ℓ2 distortion is 2.345 for ZO-SGD, 2.381 for ZO-signSGD, 2.418 for ZO-M-signSGD, and 2.488 for ZO-NES. For CIFAR-10, the average iteration count over all attacked images in TAB3 to find the first successful adversarial example is 302 for ZO-SGD, 250 for ZO-signSGD, 389 for ZO-M-signSGD, and 363 for ZO-NES. Their corresponding average ℓ2 distortion is 0.177 for ZO-SGD, 0.208 for ZO-signSGD, 0.219 for ZO-M-signSGD, and 0.235 for ZO-NES. As a visual illustration, we compare the adversarial examples of a hand-written digit "1" produced by each attacking method at different iterations in TAB0, corresponding to Fig. 3-(a). As we can see, ZO-signSGD and ZO-M-signSGD can reduce roughly 54% of the iterations (around 600 fewer model queries) compared to ZO-SGD to find the first successful adversarial example. Given the first successful adversarial example, we observe that ZO-signSGD yields slightly higher ℓ2 distortion than ZO-SGD. This is not surprising, since Theorem 1 suggests that ZO-signSGD might not converge to a solution of very high accuracy, but it can converge to moderate accuracy sufficient for black-box attacks at a very fast speed. Note that the first successful adversarial examples generated by the different ZO methods are all visually similar to the original ones but lead to different top-1 predictions; see more in Appendix 8.3. In addition, we observe that ZO-NES is not as effective as ZO-signSGD in either query efficiency (given by the number of iterations to achieve the first successful attack) or attack distortion. Thus, compared to ZO-NES, ZO-signSGD offers a provable and efficient black-box adversarial attacking method. Motivated by the impressive convergence behavior of (first-order) signSGD and the empirical success in crafting adversarial examples from black-box ML models, in this paper we rigorously prove the $O(\sqrt{d}/\sqrt{T})$ convergence rate of ZO-signSGD and its variants under mild conditions. Compared to signSGD, ZO-signSGD suffers a slowdown (proportional to the problem size d) in convergence rate; however, it enjoys gradient-free advantages. Compared to other ZO algorithms, we corroborate the superior performance of ZO-signSGD on both synthetic and real-world datasets, particularly for its application to black-box adversarial attacks. In the future, we would like to generalize our analysis to nonsmooth and nonconvex constrained optimization problems.
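As a companion to the experimental description above, here is a high-level sketch of the black-box attack loop; attack_loss, success, and zo_sign_step are hypothetical placeholders for the attacking loss queried through the model's outputs, the misclassification check, and one ZO gradient estimation call, respectively.

import numpy as np

def black_box_attack(x0, attack_loss, zo_sign_step, delta, T, success):
    # Start from the natural image and apply ZO-signSGD steps, recording
    # the first successful adversarial example and its L2 distortion.
    x = x0.copy()
    for k in range(T):
        g_hat = zo_sign_step(attack_loss, x)    # queries the model only
        x = x - delta * np.sign(g_hat)
        if success(x):
            return x, k + 1, np.linalg.norm(x - x0)
    return x, None, None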
Following the setting of BID2, in FIG1 we assume that the ZO gradient estimate of f(x) and its first-order gradient ∇f(x) = x suffer from a sparse noise vector v, where v_1 is drawn from a Gaussian distribution and v_i = 0 for i ≥ 2. As a result, the descent direction used at iteration t is given by ∇̂f(x_t) + v or ∇f(x_t) + v. FIG1 presents the convergence performance of 5 algorithms: SGD, signSGD, ZO-SGD, ZO-signSGD and its variant using the central-difference-based gradient estimator. Here we tune a constant learning rate, finding 0.001 best for SGD and ZO-SGD and 0.01 best for signSGD and its ZO variants. As we can see, sign-based first-order and ZO algorithms converge much faster than the stochastic-gradient-based descent algorithms. This is not surprising, since the presence of the extremely noisy component v_1 leads to an inaccurate gradient value, and thus degrades the convergence of SGD and ZO-SGD. By contrast, the sign information is more robust to outliers and thus leads to better convergence performance of signSGD and its variants. We also note that the convergence trajectory of ZO-signSGD using the central-difference gradient estimator coincides with that using the gradient estimator given by the forward difference of two function values. FIG1: Comparison of different gradient-based and gradient-sign-based first-order and ZO algorithms in the example of sparse noise perturbation. The solid line represents the loss averaged over 10 independent trials with random initialization, and the shaded region indicates the standard deviation over random trials. Left: Loss value against iterations for SGD, signSGD, ZO-SGD, ZO-signSGD and ZO-signSGD using the central-difference-based gradient estimator. Right: Local regions to highlight the effect of the two gradient estimators on the convergence of ZO-signSGD. The intuition behind why ZO-signSGD could outperform ZO-SGD is that the sign operation can mitigate the negative effect of (coordinate-wise) gradient noise of large variance. To confirm this point, we examine the coordinate-wise variance of gradient noises during an entire training run of the binary classifier provided in the first experiment of Sec. 6. At each iteration, we perform an additional 100 random trials to obtain the statistics of gradient estimates. In Fig. A2-(a), we present the ℓ1 norm of the mean of gradient estimates (over 100 trials) versus the number of iterations. As we can see, both signSGD and ZO-signSGD outperform SGD and ZO-SGD, evidenced by a fast decrease of the ℓ1 norm of the gradient estimate. In Fig. A2-(b), we present the coordinate-wise gradient noise variance (over 100 trials at each coordinate) against the number of iterations. It is not surprising that, compared to first-order methods, ZO methods suffer gradient noise of larger variance. In this scenario, we could benefit from ZO-signSGD, since taking the sign of gradient estimates might scale down extremely noisy components. Indeed, we observe a significant decrease of the noise variance while performing ZO-signSGD compared to ZO-SGD. Figure A2: Statistics of gradient estimates during an entire training run of the binary classifier provided in the first experiment of Sec. 6. a) The ℓ1 norm of the mean of gradient estimates versus iteration. b) Coordinate-wise gradient noise variance versus iteration. The solid line represents the variance averaged over all coordinates, and the shaded region indicates the corresponding standard deviation with respect to all coordinates at each iteration.
Based on the definition of the smoothing function fµ, for any x and y we have DISPLAYFORM0 where the first inequality holds due to Jensen's inequality, and the second inequality holds due to A1. It is known from that fµ has L-Lipschitz continuous gradient. By the L-smoothness of fµ, we obtain that DISPLAYFORM1 where (∇fµ(x))i denotes the ith element of ∇fµ(x).Taking expectation for both sides of FORMULA1, we obtain that DISPLAYFORM2 Similar to BID2, Theorem 1), we relax Prob [sign(ĝ k,i) = sign((∇fµ(x k))i)] by Markov's inequality, DISPLAYFORM3 Substituting FORMULA1 into FORMULA1, we obtain DISPLAYFORM4 where the second inequality holds due to x 1 ≤ √ d x 2, and the last inequality holds by applying Jensen's inequality to the concave function DISPLAYFORM5 ]. Taking sum of both sides of, we then obtain. We recall from thatĝ DISPLAYFORM0 Let zi:=∇f i(xk) − ∇fµ(x k) and zi,j =∇fi(x k ; ui,j) − ∇fµ(x k). Thus, DISPLAYFORM1 where there are two sources of randomness: a) minibatch sampling i ∈ I k, and b) the random direction sampling u = ui,j. Note that these two sources of randomness are independent, and the random direction samples {ui,j} are i.i.d.. Next, we discuss two types of mini-batch sampling: a) mini-batch samples without replacement, and b) minibatch samples with replacement. Suppose that I k is a uniform random subset of [n] (no replacement), motivated by BID21, Lemma A.1) we introduce a new variable Wi = I(i ∈ I k), where I is an indicator function, and I(i ∈ I k) = 1 if i ∈ I k, and 0 otherwise. As a , we have DISPLAYFORM2 From, the variance ofĝ k is given by DISPLAYFORM3 In FORMULA3, the equality (a) holds since FORMULA8, where we have used the fact that Eu[∇fi(x k)] =∇fi,µ(x k) (c, Lemma. 1), and recall that fi,µ denotes the smoothing function of fi. The above implies that DISPLAYFORM4 DISPLAYFORM5 And the equality (b) holds due to Eu[zi DISPLAYFORM6 On the other hand, suppose that the mini-batch I k contains i.i.d. samples (namely, with replacement), the vectors {zi} are then i.i.d. under both mini-batch sampling and random direction sampling. Therefore, we obtain that DISPLAYFORM7 where the second equality holds since FORMULA3 and FORMULA3, we obtain that DISPLAYFORM8 DISPLAYFORM9 In (In FORMULA3, we next bound DISPLAYFORM10 DISPLAYFORM11 where for ease of notation, let∇fi :=∇f i(xk ; ui,1),∇fi,µ:=∇f i,µ(xk) and ∇fµ:= ∇fµ(x k). According to BID26, Lemma 1), the first term at RHS of yields DISPLAYFORM12 where the last inequality holds due to A2. Based on the definition of fµ, the second term at RHS of yields DISPLAYFORM13 where we have used the Jensen's inequality and DISPLAYFORM14 Substituting FORMULA3 and FORMULA3 into, we have DISPLAYFORM15 where C(d, µ) was defined in.We are now ready to bound. Based on In (DISPLAYFORM16 where the equality (a) holds since Eu[∇fi(x; ui,j)] = ∇fi,µ(x) for any j, given by BID26, Lemma 1).Substituting FORMULA1 and FORMULA3 into DISPLAYFORM17 4 PROOF OF THEOREM 1Substituting FORMULA14 into FORMULA12, we obtain It is known from BID26, Lemma 1) that DISPLAYFORM18 From FORMULA6, where f * µ = minx fµ(x) and f * = minx f (x).This yields fµ(x0) − f (x0) + f * − f * µ ≤ µ 2 L, and thus DISPLAYFORM19 Substituting FORMULA6 into FORMULA6, we obtain DISPLAYFORM20 Due to ∇fµ(x k) 2 ≤ ∇fµ(x k) 1 and dividing T −1 k=0 δ k for both sides of, we obtain that DISPLAYFORM21 where ξ l is finite since E ĝ i,j k − ∇fµ(x k) 2 2 is upper bounded. 
Substituting into, we have |(∇fµ(x k)) l | Prob sign(ĝ i,j k,l) = sign((∇fµ(x k)) l ) ≤ξ l.With the new gradient estimateḡ k = i∈I k q j=1 sign(ĝ i,j k) in, we require to bound Prob [sign(ḡ k,l) = sign ((∇fµ(x k DISPLAYFORM22 whereḡ k,l is the lth coordinate ofḡ k .We recall thatĝ i,j k,l is an unbiased stochastic approximation to gradient component (∇fµ(x k)) l with variance ξ 7 PROOF OF COROLLARY 3
We design and analyze a new zeroth-order stochastic optimization algorithm, ZO-signSGD, and demonstrate its connection and application to black-box adversarial attacks in robust deep learning
1,473
scitldr
The non-stationarity characteristic of solar power renders traditional point forecasting methods less useful due to large prediction errors. This results in increased uncertainties in grid operation, thereby negatively affecting reliability and resulting in increased cost of operation. This research paper proposes a unified architecture for multi-time-horizon solar forecasting for short- and long-term predictions using Recurrent Neural Networks (RNN). The paper describes an end-to-end pipeline to implement the architecture, along with methods to test and validate the performance of the prediction model. The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error compared to the previous best-performing methods, which use one model for each time-horizon. The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid. Today's power grid has become dynamic in nature mainly because of three changes in the modern grid: 1. higher penetration levels of renewables; 2. the introduction (and rapidly increasing deployment) of storage devices; and 3. loads becoming active (by participating in demand response). This dynamic modern grid faces the challenge of strong fluctuations due to uncertainty. There is a critical need to gain real-time observability and control, and to improve renewable generation forecast accuracy, in order to enhance resiliency and keep operational costs sustainable. Independent system operators (ISOs) with higher renewable penetration on the grid have already been facing challenges with the uncertainties associated with short-term forecasting errors. In 2016, the California ISO doubled its frequency regulation service requirements (causing a sharp rise in the cost of requirements) to manage the recurring short-term forecasting errors in renewable generation BID0. The Western Electricity Coordinating Council (WECC) could achieve $5 billion in savings per year by integrating wind and solar forecasts into unit commitment, according to the study conducted by Lew et al. BID1. Thus, it is clear that the increased grid penetration levels of solar, with its inherent variability (a combination of intermittence, high frequency and non-stationarity), pose problems with grid reliability and the cost of operating the grid on various time-scales. For example, day-ahead solar forecast accuracy plays a significant role in the effectiveness of Unit Commitment (UC); very-short-term solar forecast errors due to fluctuations caused by passing clouds lead to sudden changes in PV plant outputs that can strain the grid by inducing voltage flickers and real-time balancing issues. Thus, solar power generation forecasting becomes an area of paramount research, as the need for robust forecasts for all timescales (weekly, day-ahead, hourly and intra-hour) is critical for effectively incorporating increasing amounts of solar energy resources at a global level and contributing to the evolution of the smart grid. Moreover, improving the accuracy of solar forecasts is one of the lowest-cost methods of efficiently integrating solar energy into the grid. The rest of the paper is organized as follows. The literature is reviewed and the significant shortcomings of the current forecasting approaches are identified in Section II.
Section II further introduces the capabilities of the proposed unified architecture and the novel algorithm to fill the gap between the need to improve forecasting techniques and the existing approaches. Section III introduces the proposed unified architecture based on RNN and the training algorithms utilized for implementing the neural network. Exploratory data analysis, the evaluation metric and structure of the input data, and the proposed algorithm are presented in Section IV. Section V discusses the results and their interpretation. The paper is concluded with Section VI, which also identifies future avenues of research in this method of solar forecasting. Forecasting methods which have been used for renewable generation and electric load forecasting can mainly be classified into five categories: 1) regressive methods, such as Autoregressive (AR), AR integrated moving average (ARIMA), and exponential smoothing (ES) models BID2, BID3, BID4, and nonlinear stationary models; 2) Artificial Intelligence (AI) techniques, such as Artificial Neural Networks (ANN) BID5 - BID9, k-nearest neighbors BID10 - BID13, and fuzzy logic systems (FLSs) BID14 - BID16; 3) Numerical Weather Prediction (NWP) BID17; 4) sensing (remote and local) BID18; and 5) hybrid models, such as neuro-fuzzy systems BID19 - BID20 and ANN with satellite-derived cloud indices BID21, to name a few. Numerical Weather Prediction (NWP) models are based on the physical laws of motion and thermodynamics that govern the weather. For places where ground data is not available, NWP models are powerful tools to forecast solar radiation. However, they pose significant limitations in predicting the precise position and extent of cloud fields due to their relatively coarse spatial resolution. Their inability to resolve the micro-scale physics associated with cloud formation renders them with relatively large errors in terms of cloud prediction accuracy. In order to mitigate this limitation, NWPs are simulated at the regional level (called Regional NWP models) and downscaled to derive improved site-specific forecasts. NWP has another limitation in temporal resolution. The timescale of the output variables of NWP models ranges from 3-6 hours for the Global Forecast System (GFS) to 1 hour (for mesoscale models), which is not useful for predicting ramp rates and very-short-term output fluctuations. For areas where previous ground-based measurements are not available, satellite-based irradiance measurement proves to be a useful tool BID21. The images from satellites are used to analyze the time evolution of air mass by the superimposition of images of the same area. A radiometer installed in the satellite records the radiance; the state of the atmosphere (clear sky to overcast) impacts the radiance. Satellite sensing has the main limitation of determining an accurate set point for the radiance value, under clear sky conditions and under dense cloudiness conditions, from every pixel and every image. Another limitation of solar irradiance forecasting using remote sensing with satellites lies in the algorithms, which are classified as empirical or statistical BID22 - BID23. These algorithms are based on simple statistical regression between surface measurements and satellite information and do not need accurate information on the parameters that model solar radiation attenuation through the atmosphere. So ground-based solar data is required for these satellite statistical algorithms anyway.
The aforementioned limitations of NWP and sensing models have steered short-term solar forecasting research towards time-series analysis using statistical models and, more recently, AI techniques. Statistical techniques can mainly be classified as BID24: 1) linear stationary models (autoregressive models, moving average models, mixed autoregressive moving average models, and mixed autoregressive moving average models with exogenous variables); 2) nonlinear stationary models; 3) linear nonstationary models (autoregressive integrated moving average models and autoregressive integrated moving average models with exogenous variables). Though these conventional statistical techniques provide a number of advantages over NWP and sensing methods, they are often limited by strict assumptions of normality, linearity, and variable independence. Artificial Neural Networks (ANN) are able to represent complex non-linear behaviors in higher-dimensional settings. When exogenous variables like humidity, temperature and pressure are considered in the process of solar forecasting, ANNs act as universal function approximators to model the complex non-linear relationships between these variables and their relationship with the Global Horizontal Irradiance (GHI). An ANN with multiple hidden layers can be called a Deep Neural Network (DNN). With the advancements in computational capabilities, DNNs have proven to be effective and efficient in solving complex problems in many fields, including image recognition, automatic speech recognition and natural language processing BID25. Although feed-forward neural network models have been used for the solar forecasting problem, the use of Recurrent Neural Network (RNN) models has not been explored yet, to the best of the authors' knowledge. RNN is a class of ANN that captures the dynamics of sequences using directed cyclic feedback connections BID26. Feedforward neural networks rely on the assumption of independence among the data points or samples. The entire state of the network is lost after processing each data point (sample). Unlike vanilla feedforward neural networks, recurrent neural networks (RNNs) exhibit dynamic temporal behavior by using their internal memory to process arbitrary sequences of inputs, which can be harnessed in predicting the irradiance for the next time step by considering the input from many previous time steps. Recent advances in parallelism, network architectures, optimization techniques, and graphics processing units (GPUs) have enabled successful large-scale learning with RNNs, overcoming their traditional limitation of being difficult to train due to having millions of parameters. Several methods have been proposed for solar forecasting in the past, but most of them were modeled for a particular time-horizon, and no single model performed well compared to others for multi-time-horizon forecasting/prediction. In addition, the state-of-the-art methods used for solar forecasting primarily focus on averaged rather than instantaneous forecasts. This paper proposes two approaches using RNNs: i) a single algorithm that is capable of being trained to output a solar forecast for a 1-hour, 2-hour, 3-hour, or 4-hour time horizon; and ii) a unified architecture that can predict/forecast the solar irradiance for multiple time-horizons; for example, the trained model can predict/forecast the solar irradiance values for the 1-hour, 2-hour, 3-hour and 4-hour horizons simultaneously.
Our proposed method is capable of taking time-series data as input and provides predictions with a forward inference time on the order of milliseconds, enabling real-time forecasts based on live measured data. This offers great value for industrial applications that require real-time multi-time-horizon forecasting to overcome the current operational challenges with high penetration of renewable sources of energy. The RNN resembles a feedforward neural network except for additional directed edges. These edges span adjacent time steps, introducing the notion of a temporal component to the model. Theoretically, the RNN architecture enables the network to make use of past information in sequential data. The input to an RNN is a sequence, and its target can be a sequence or a single value. An input sequence is denoted by (x^(1), x^(2), ..., x^(T)), where each sample/data-point x^(t) is a real-valued vector. The target sequence is denoted by (y^(1), y^(2), ..., y^(T)) and the predicted target sequence is denoted by (ŷ^(1), ŷ^(2), ..., ŷ^(T)). There are three dimensions to the input of the RNN (shown in FIG0): 1) mini-batch size; 2) number of columns in the vector per time-step; and 3) number of time-steps. The mini-batch size is the sample length (number of data-points in the time-series). The number of columns is the number of input features in the input vector. The number of time-steps is the differentiating factor of the RNN, which unfolds the input vector over time. In a typical multilayer feedforward neural network, the input vector is fed to the neurons at the input layer and passed through the activation function to produce the intermediate output of the neuron; this output then becomes the input to the neurons in the next layer. The net input (denoted by input_sum_i) to such a neuron in the next layer is the weight on the connections (W) multiplied by the previous neurons' outputs, plus the bias term, as shown in Equation (1). An activation function (denoted by g) is then applied to input_sum_i to produce the output from the neuron, as in Equations (2) and (3): $\text{input\_sum}_i = \sum_j W_{ij} x_j + b_i$ (1); $h^{(t)} = g(W_{hx} x^{(t)} + W_{hh} h^{(t-1)} + b_h)$ (2); $\hat{y}^{(t)} = g(W_{yh} h^{(t)} + b_y)$ (3); where W_hx is the conventional weight matrix between the input and the hidden layer, and W_hh is the recurrent weight matrix between the hidden layer and itself at adjacent time steps. b_h and b_y are bias parameters. The proposed architecture uses Rectified Linear Units (ReLU) as the activation function. The network unfolds the given input at time t as shown in FIG0. The unfolded network is trained across the time steps using an algorithm called backpropagation through time (BPTT) BID27. The loss function used for this regression problem is the Mean Squared Error (MSE). The loss function finds the error between the target output and the predicted output from the network. Gradients are computed using backpropagation through time BID27, and the stochastic gradient descent optimizer is used to update the weights so as to minimize the loss. The RMSE is calculated for benchmarking purposes. The motivation to use an RNN is to identify and learn the complex relationship between sequences of various exogenous variables and their combined impact on the solar irradiance. This, in the authors' view, enables the algorithm to recognize non-linear contributing factors, for example atmospheric conditions which may lead to cloud formation in a nearby time horizon. This is one of the reasons the prediction RMSE is lower in the proposed approach compared to other reported approaches.
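A minimal PyTorch sketch of the described architecture (ReLU recurrent cells, linear readout, MSE loss) follows; the layer sizes and variable names are illustrative, not the exact configuration used in the paper.

import torch
import torch.nn as nn

class SolarRNN(nn.Module):
    def __init__(self, input_dim=20, hidden_dim=64, layer_dim=1, output_dim=1):
        super().__init__()
        # h_t = ReLU(W_hx x_t + W_hh h_{t-1} + b_h), unfolded over time
        self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim,
                          nonlinearity='relu', batch_first=True)
        self.readout = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):                # x: (batch, time_steps, features)
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])    # predict from the last time step

model = SolarRNN()
x = torch.randn(32, 24, 20)             # mini-batch of 24-step sequences
loss = nn.MSELoss()(model(x), torch.randn(32, 1))
loss.backward()                          # BPTT through the unrolled network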
The algorithm and the unified architecture developed in this paper were trained and tested on data from NOAA's SURFRAD BID30 sites, similar to previous works in the literature BID28 BID31. The input features are: downwelling global solar (Watts/m^2), upwelling global solar (Watts/m^2), direct-normal solar (Watts/m^2), downwelling diffuse solar (Watts/m^2), downwelling thermal infrared (Watts/m^2), downwelling IR case temp. (K), downwelling IR dome temp. (K), upwelling thermal infrared (Watts/m^2), upwelling IR case temp. (K), upwelling IR dome temp. (K), global UVB (milliWatts/m^2), photosynthetically active radiation (Watts/m^2), net solar (dw_solar - uw_solar) (Watts/m^2), net infrared (dw_ir - uw_ir) (Watts/m^2), net radiation (netsolar + netir) (Watts/m^2), 10-meter air temperature (C), relative humidity (%), wind speed (m/s), wind direction (degrees, clockwise from north), and station pressure (mb). According to Dobbs BID28, global downwelling solar measurements best represent the Global Horizontal Irradiance (GHI) at the SURFRAD sites, which was validated in this paper through exploratory data analysis. [Figure 3] shows the daily averages of clear sky GHI and global downwelling solar at a SURFRAD site for a year; both variables follow the same trend. [Figure 4] shows that both these variables are positively correlated. The Bird model is used to calculate the clear sky GHI BID29. At time t, the clear sky GHI represents the theoretical GHI assuming zero cloud coverage. At time t, the ratio between the instantaneously observed GHI and this theoretical maximum is called the clear sky index, denoted by Kt; this parameter was introduced in BID28. The algorithm uses the Mean Squared Error (MSE) as a measure of the difference between the target and the output the neural network produces during the training process, as shown in Equation (7): $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$ (7), where Y is a vector of n target values and Ŷ is a vector of n predicted values. Later in the process, for the purpose of benchmarking, the Root Mean Squared Error (RMSE) is calculated by taking the square root of the MSE values. The flowchart of the proposed Unified Recurrent Neural Network Architecture algorithm is shown in Figure 4. The overall algorithm can be divided into three main blocks. The site-specific data is imported, and clear sky global horizontal irradiance values for that site are obtained from the Bird model. The two are merged, and the dataset is split into training and testing sets. The clear sky index parameter Kt is created as the ratio of the observed global downwelling solar (Watts/m^2) and the clear sky GHI (Watts/m^2); Kt is a dimensionless parameter. Missing values in the data are replaced by the mean and/or neighborhood values. Exploratory data analysis is conducted to identify and eliminate extreme outliers in order to normalize the data. The Recurrent Neural Network model is then defined. The model is instantiated by specifying the architectural parameters: input dimension (number of nodes at the input layer), hidden dimension (number of nodes in the hidden layer), layer dimension (number of hidden layers) and output dimension. The sequence length, which unfolds as time-steps, is also defined here along with the batch size. The model is trained and tested by iterating through the whole dataset for a pre-set number of epochs. Once the training and testing are over, the stored MSE is first de-normalized and then used to calculate the RMSE.
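The two quantities at the heart of this pipeline can be sketched as follows (assuming NumPy; the night-time guard eps is our own illustrative choice, not from the paper):

import numpy as np

def clear_sky_index(dw_solar, ghi_clear, eps=1.0):
    # Kt = observed global downwelling solar / Bird-model clear-sky GHI;
    # eps guards against division by ~0 around sunrise/sunset and at night.
    return dw_solar / np.maximum(ghi_clear, eps)

def rmse(y_true, y_pred):
    # Square root of the MSE in Equation (7), used for benchmarking.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))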
If the RMSE is not satisfactory, the hyperparameters (learning rate and number of epochs) are tuned and the model is trained again. When a satisfactory (or expected) RMSE is achieved, the algorithm terminates. The algorithm is trained using the data for the years 2010 and 2011 from the SURFRAD observation sites in Boulder, CO; Desert Rock, NV; Fort Peck, MT; Sioux Falls, SD; Bondville, IL; Goodwin Creek, MS; and Penn State, PA. The test year for each respective site was chosen to be 2009 for the purpose of benchmarking against BID28 and other results previously reported in the literature. Results from the two methods proposed in this paper are presented below. The first method uses the proposed RNN architecture and algorithm to predict for the 1-hour, 2-hour, 3-hour and 4-hour time horizons independently. In other words, four independent models are developed for 1-hour, 2-hour, 3-hour and 4-hour predictions for each of the seven SURFRAD sites. The RMSE values in TAB1 show that the proposed architecture and algorithm have lower RMSE values for all four forecasting horizons and all seven sites, compared to the best RMSE values reported in BID28 from a suite of other machine learning algorithms (Random Forests, Support Vector Machines, Gradient Boosting and a vanilla Feed-Forward network). In the second method, the architecture predicts for all four forecasting time horizons (1-hour, 2-hour, 3-hour and 4-hour) in parallel; i.e., one model per SURFRAD site is developed which makes predictions for all four time horizons. This method is the multi-time-horizon implementation of the proposed architecture. None of the methods discussed in the literature have been shown to be capable of producing multi-time-horizon predictions. The corresponding RMSE values are reported in TAB1. To quantify the overall performance of the predictive model in terms of its combined forecasting accuracy for all four forecasting horizons, the mean of the RMSE values is calculated. Although even the best RMSE values reported in the literature (for example, in BID28) were for a single time-horizon forecast at a time, the proposed method achieves a significantly lower RMSE in predictions across all the short-term (1 to 4 hours) forecasting time-horizons, as seen in Table 2. The capability to predict for multiple time-horizons makes the proposed method very relevant for industry applications. Real-time data can be fed to the RNN, and due to its low forward inference time, predictions can be made for multiple time horizons.
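One plausible way to set up the multi-time-horizon targets described above is sketched below; the exact target layout used in the paper may differ, and the random series stands in for a real hourly Kt series.

import numpy as np

def multi_horizon_targets(kt, horizons=(1, 2, 3, 4)):
    # For each time t, the targets are the clear-sky index h hours ahead,
    # one column per forecasting horizon h.
    max_h = max(horizons)
    X = kt[:-max_h]
    Y = np.stack([kt[h:len(kt) - max_h + h] for h in horizons], axis=1)
    return X, Y

kt = np.random.rand(1000)        # stand-in hourly clear-sky index series
X, Y = multi_horizon_targets(kt)
print(X.shape, Y.shape)          # (996,) (996, 4)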
This architecture has the potential to be implemented as a complete forecasting system, which spans the entire spectrum of spatial and temporal horizons with a capability to take real-time data as input to produce multi-time-scale (intra-hour, hourly and day-ahead scales) predictions. In addition, the proposed algorithm outperforms traditional machine learning methods in terms of the quality of the forecast, and its low forward inference time makes it a robust real-time solar forecasting engine. Although a deeper neural network has more capacity, we experimentally observed that it leads to high variance in the model and therefore reduced generalization power for the particular problem dealt with in this paper. The performance of the proposed method can be further improved in several ways, including hyper-parameter tuning and architectural changes such as the activation functions used or the type of layers. Extensions of the proposed architecture with LSTM cells and intra-hour forecasting horizons are potential future research avenues in this domain.
This paper proposes a Unified Recurrent Neural Network Architecture for short-term multi-time-horizon solar forecasting and validates the forecast performance gains over the previously reported methods.
1,474
scitldr
The ResNet and batch-normalization (BN) achieve high performance even when only a few labeled data are available. However, the reasons for this high performance are unclear. To clarify the reasons, we analyzed the effect of the skip-connection in ResNet and of BN on the data separation ability, which is an important ability for the classification problem. Our results show that, in the multilayer perceptron with randomly initialized weights, the angle between two input vectors converges to zero in an exponential order of its depth, that the skip-connection turns this exponential decrease into a sub-exponential decrease, and that BN relaxes this sub-exponential decrease into a reciprocal decrease. Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes. These results imply that the skip-connection and BN improve the data separation ability and achieve high performance even when only a few labeled data are available. The architecture of a neural network heavily affects its performance, especially when only a few labeled data are available. The most famous example of one such architecture is the convolutional neural network (CNN) BID6. Even when the convolutional layers of a CNN were randomly initialized and kept fixed and only the last fully-connected layer was trained, it achieved a competitive performance compared with the traditional CNN BID5 BID14. More recent examples are the ResNet BID3 and batch-normalization (BN) BID4. The ResNet and BN are widely used in few-shot learning problems and achieve high performance BID8 BID9. One reason for the success of neural networks is that their architectures enable the feature vector to capture prior knowledge about the problem. The convolutional layers of a CNN enable its feature vector to capture statistical properties of data, such as shift invariance and compositionality through local features, which are present in images BID13. However, the effects of the skip-connection in ResNet and of BN on the feature vector are still unclear. To clarify these effects, we analyzed the transformations of input vectors by the multilayer perceptron (MLP), the ResNet, and the ResNet with BN. Our results show that the skip-connection and BN preserve the angle between input vectors. This preservation of the angle is a desirable ability for the classification problem because the last output layer should separate points from different classes, and input vectors in different classes have a large angle BID11 BID10. Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes. These results imply that the skip-connection and BN improve the data separation ability and achieve high performance even when only a few labeled data are available. We consider the following L-layer neural networks, which transform an input vector x ∈ R^D into a new feature vector h_L ∈ R^D through layers. Let h_0 = x and ϕ(·) = max(0, ·) be the ReLU activation function. MLP: h_l = ϕ(W_l h_{l−1}). ResNet BID12 BID1: h_l = h_{l−1} + W_l^{(2)} ϕ(W_l^{(1)} h_{l−1}). ResNet with batch-normalization (BN): h_l = h_{l−1} + W_l^{(2)} ϕ(BN(W_l^{(1)} h_{l−1})), where BN normalizes each pre-activation to zero mean and unit variance and the expectation is taken under the distribution of input vectors in the mini-batch of the stochastic gradient descent (SGD).
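A minimal sketch of the three layer updates just defined; the BN statistics are taken over a mini-batch axis, and all shapes are illustrative:

```python
import numpy as np

def relu(a):
    return np.maximum(0.0, a)

def mlp_layer(h, W):
    """MLP: h_l = relu(W_l h_{l-1}); h has shape (batch, D)."""
    return relu(h @ W.T)

def resnet_layer(h, W1, W2):
    """ResNet: the skip-connection bypasses the ReLU branch."""
    return h + relu(h @ W1.T) @ W2.T

def resnet_bn_layer(h, W1, W2, eps=1e-5):
    """ResNet with BN: pre-activations normalized with mini-batch statistics."""
    a = h @ W1.T
    a = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)
    return h + relu(a) @ W2.T
```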
Without loss of generality, we assume that the variance of the input vectors in the mini-batch is one, Var(x) = 1. We analyzed the average behavior of these neural networks when the weights were randomly initialized as follows. In the MLP, the weights were initialized by the He initialization BID2, W_ij ∼ N(0, 2/D), because the activation function is the ReLU function. In the ResNet and the ResNet with BN, the first internal weights W^{(1)} were initialized by the He initialization, but the second internal weights W^{(2)} were initialized by the Xavier initialization BID0, W_ij ∼ N(0, 1/D), because the second internal activation function is the identity. We analyzed the transformation of input vectors through the hidden layers of the neural networks. Now we define the quantity studied in this paper. For a pair of input vectors x_n and x_m, we define the angle and the cosine similarity at layer l as θ_l(n, m) = arccos(c_l(n, m)) and c_l(n, m) = E[⟨h_l(x_n), h_l(x_m)⟩] / (E[‖h_l(x_n)‖] · E[‖h_l(x_m)‖]), where ‖h_l(x)‖ is the length of the feature vector and ⟨·, ·⟩ is the inner product between the pair of feature vectors. Note that the expectation is taken under the probability distribution of the initial weights. We derived the recurrence relation of the angle (TAB0). Its plot (FIG0) shows that the MLP contracts the angle between input vectors, which is an undesirable property for the classification problem, and that the skip-connection in ResNet and BN relax this contraction. (Figure 2: Transformation of the angle between input vectors; mean and standard deviation of the angle over 10 randomly initialized parameters.) Numerical simulations (Fig. 2) on the MNIST dataset validated our analysis. TAB0 gives us a clear interpretation of how the skip-connection in ResNet and BN preserve the angle between input vectors. The ReLU activation function contracts the angle because it truncates negative values of its input. The skip-connection bypasses the ReLU activation function and thus reduces its effect by half. Moreover, BN reduces the effect of the ReLU activation function to the reciprocal of the depth. We derived the dynamics of the angle through layers (Table 2) by applying the recurrence relation of the angle (TAB0) iteratively and using the fact that, if θ is small, arccos(ψ(θ)) can be well approximated by the linear function a · θ, where a < 1 is a constant. Table 2 shows that, in the MLP with randomly initialized weights, the angle between input vectors converges to zero in an exponential order of its depth, that the skip-connection in ResNet turns this exponential decrease into a sub-exponential decrease, and that BN relaxes this sub-exponential decrease into a reciprocal decrease. In other words, the skip-connection in ResNet and BN preserve the angle between input vectors. Numerical simulation (Fig. 3) on the MNIST dataset validated our analysis. (Table 2: Dynamics of the angle through layers. Figure 3: Dynamics of the angle; mean and standard deviation of the angle over 10 randomly initialized parameters.) A desirable ability of a neural network for the classification problem is to separate points from different classes. However, our results show that randomly initialized neural networks contract the angle between input vectors from different classes. Our analysis provides us with an insight into how training tackles this problem. We can show that the cosine similarity c_{l+1}(n, m) is proportional to a quantity parameterized by θ, where θ is a parameter we can control by training.
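The MLP branch of this analysis is easy to reproduce numerically; a sketch that tracks the angle between a fixed pair of inputs across depth, averaged over random He-initialized weights (the depth, width and trial counts are arbitrary choices):

```python
import numpy as np

D, L, trials = 100, 50, 10
x = np.random.randn(2, D)  # a fixed pair of input vectors

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

angles = np.zeros((trials, L))
for t in range(trials):
    h = x.copy()
    for l in range(L):
        W = np.random.randn(D, D) * np.sqrt(2.0 / D)  # He initialization
        h = np.maximum(0.0, h @ W.T)                  # MLP layer update
        angles[t, l] = angle(h[0], h[1])

print(angles.mean(axis=0))  # the angle shrinks roughly exponentially with depth
```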
Its plot (FIG1) implies that training makes small angles smaller and large angles larger by taking extreme values of θ such as 0 or π. In order to validate this insight, we stacked a softmax layer on top of a 1-layer MLP and trained this model by SGD with 100 labeled examples from the MNIST dataset. Fig. 5 shows the change of the angles of the feature vectors through training, which validated our insight. The above discussion also shows the relationship between training and the preservation of the angle. The angle between feature vectors at a high layer of the initialized MLP is small, which implies that training does not take extreme values of θ and does not separate points from different classes. On the other hand, the skip-connection and BN preserve the angle between input vectors even at high layers. Thus, training takes extreme values of θ and separates points from different classes. Numerical simulations (Fig. 6), with the same setup as the previous one, validated our insight. The ResNet and BN achieve high performance even when only a few labeled data are available. To clarify the reasons for this high performance, we analyzed the effects of the skip-connection in ResNet and of BN on the transformation of input vectors through layers. Our results show that the skip-connection and BN preserve the angle between input vectors, which is a desirable ability for the classification problem. Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes. These results imply that the skip-connection and BN improve the data separation ability and achieve high performance even when only a few labeled data are available.
The skip-connection in ResNet and batch-normalization improve the data separation ability and help to train a deep neural network.
1,475
scitldr
Learning representations of data is an important issue in machine learning. Though GANs have led to significant improvements in data representations, they still have several problems, such as unstable training, a hidden manifold of the data, and huge computational overhead. A GAN tends to produce data simply, without any information about the manifold of the data, which hinders controlling the desired features of the generated data. Moreover, most GANs have a large latent manifold, resulting in poor scalability. In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce the desired data and learns a representation of the data efficiently. Unlike conventional GAN models with a hidden distribution of the latent space, we define the distributions explicitly in advance; the model is trained to generate data with the corresponding features by inputting latent variables that follow those distributions. As the larger scale of the latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of the latent space, we need to separate the process of explicitly defining the distributions from the operation of generation. We prove that a VAE is proper for the former and modify the loss function of the VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close as possible to the input data according to its characteristics. Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process. The decoder of the VAE, which generates data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN. Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method to generate desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of the latent space. Besides, our model learns the reverse of features, such as not laughing (rather, frowning), only from data of ordinary and smiling facial expressions. Developing generative models is a crucial issue in artificial intelligence. Creativity was once a human proprietary, but many recent studies have attempted to make machines mimic it. There has been extensive research on generating data, and one of its threads, the generative adversarial network (GAN), has led to significant achievements, which might be helpful to deep learning models because, in general, lots of data results in good performance BID12. Many approaches to creating data of as high quality as possible have been studied: for example, the variational auto-encoder (VAE) BID9 and the GAN BID4. The former constructs an explicit density, resulting in an explicit likelihood which can be maximized, while the latter constructs an implicit density BID3. Both can generate data from a manifold which is hidden to us, so we cannot control the kind of data that we generate. Because it is costly to structure data manually, we need not only data generation but also automatic structuring of data. Generative models produce data only from a latent variable, without any other information, so we cannot control what we want to generate. To cope with this problem, previous research generated data first and found the distributions of features in the latent space by investigating the model with data, since the manifold of data is hidden in generative models.
This latent space is deceptive for finding an area which represents a specific feature of our interest; it would take a long time even if we can find that area. (Figure 1: Examples of the manifold. Left: a complex manifold which can be seen in general models; right: a relatively simple manifold in the proposed model. The midpoint M of A and B can be easily calculated in the right manifold, but not in the left one, where the midpoint of A and B is computed as N, which is incorrect.) Besides, in most research, generative models had a large latent space, resulting in a low compression rate which leads to poor scalability. To work out these problems, we propose a model which can generate data of the type we want and also learn a representation of the data with a higher compression rate. Our model is based on the VAE and the GAN. We pre-define distributions corresponding to each feature and modify the loss function of the VAE so as to generate data from a latent variable which follows the specific distribution according to its features. However, this method makes the latent space become a more complex multimodal distribution which contains many distributions, resulting in instability when training the LSC-GAN. We prove that this problem can be solved, and even made more efficient, by using an auto-encoder model, with the theorem in Section 3. Although the proposed model compresses the data into a small manifold, it is well-defined with Euclidean distance, as shown in Fig. 1, which compares the manifolds in general models and in our model. In the left manifold of Fig. 1, the distance can be calculated with the Euclidean metric between adjacent points but not between far points. In the right manifold, however, we can calculate the distance between points regardless of how far apart they are, so we can recognize the manifold more easily. Thanks to a relatively simple manifold, the model can produce neutral features regardless of their location in the latent space, so all features can be said to be independent of each other. Our main contribution is summarized as follows.• We propose a method to improve the stability of the LSC-GAN with the LSC-VAE by performing weight initialization, and prove it theoretically.• We achieve conditional generation without additional parameters by controlling the latent space itself, rather than adding additional inputs as in existing models for conditional generation.• We propose a novel model that automatically learns the ability to process data continuously through latent space control.• Finally, we achieve an efficient compression rate with the LSC-GAN based on the weight initialization of the LSC-VAE. The rest of the paper is organized as follows. Section 2 reviews the related works, and the proposed LSC-GAN model is illustrated in Section 3. In Section 4, we evaluate the performance of the proposed method with some generated data. The results and discussion are presented in Section 5. Many research works have been conducted to generate data such as text, grammar, and images BID24 BID10 BID2. We divide the approaches for data generation into three categories: only generation, conditioned generation, and transforming data to have different features. Several researchers proposed the generative models VAE and GAN BID9 BID4. These are the basis of generative models. Both use a maximum likelihood approach, but they have different policies to construct the density: explicitly and implicitly. There are lots of variations of these models.
BID17 constructed deep convolutional GAN (DCGAN) with convolutional neural networks (CNN) for improving the performance with the fact that CNN had been huge adoption in computer vision applications. BID26 introduced energy-based GAN (EBGAN) using autoencoder in discriminator. BID5 BID6 b) proposed transferred encoder-decoder GAN (TED-GAN) for stabilizing process of training GAN and used it to classify the data. These studies focused on high productivity in generation so that they could not control the type of generated data. Recently, some researchers began to set conditions on the data they generate. BID20 and BID23 inputted data and conditions together into VAE and generated data whose type is what they want, called conditional VAE (CVAE). van den Oord et al. FORMULA0 set discrete embedding space for generating a specific data using vector quantized variational auto-encoder (VQ-VAE), but because of discrete space, they could not control latent space continuously. Larsen et al. FORMULA0 used both VAE and GAN in one generative model. As they just mixed two models and did not analyzed a latent space, so that the manifold of data was hidden to us. To generate image with a specific feature, they extracted a visual attribute vector which is a mean of vector in latent space. BID15 inputted not only data but also conditions into GAN to create data that we want, called conditional GAN (CGAN). BID0 used mutual information for inducing latent codes (InfoGAN) and BID16 added a condition network that tells the generator what to generate (PPGN). These two models needed an additional input to generate the type of data we want. These studies make us to generate data with condition, but we still do not know about latent space and it is hard to find the location of a specific feature in the latent space. Therefore, we propose a model that learns to generate concrete features that we want from the latent space determined when LSC-VAE is trained. Some studies attempted to transfer the given data to others which have different features or even in different domain. BID21 proposed disentangled representation learning GAN (DRGAN) for pose-invariant face recognition. BID18 b) tried matching latent space of text and images and finally they translated text to image. BID25 also translated text to image and generated photo-realistic images conditioned on text by stacking models (StackGAN). BID27 and BID8 discovered cross-domain relations with CycleGAN and DiscoGAN. They can translate art style, face features, and bags to shoes. While other models could only do one conversion task, BID1 proposed StarGAN that could do multiple translation tasks with one model. These studies have been conducted to transform the data into those in other domains. However, they could not generate new data without input data. In addition, the size of latent space of most of them was too large. We aim to generate conditioned data even with a small size of latent space. In this section, we present a method to generate the data with the corresponding characteristics by inputting the latent variable which follows the specific distribution in latent space. As the instability caused by the larger scale of latent space in this process, we use the modified VAE, called LSC-VAE 1. As shown in Fig. 2(a), we train the LSC-VAE with L prior for the data to be projected by the encoder into the desired position in the latent space according to the characteristics of the data. 
The trained decoder of the LSC-VAE is used as a generator of LSC-GAN so that the LSC-GAN generates the data with the corresponding features by using latent variables sampled from a specific distribution. The proposed model is divided into two phases: initializing latent space (Fig. 2(a) ) and generating data (Fig. 2(b) ). In the first phase, latent semantic controlling VAE (LSC-VAE) is trained to project data into a specific location of latent space according to its features, and it learns to reconstruct data which is compressed. The decoder of LSC-VAE is used in the generator (G) of LSC-GAN in the second phase. G and discriminator (D) are trained simultaneously so that G can produce data similar to real data as much as possible and that D can distinguish the real from the fake. The architecture of the generation process is shown in Fig. 2(b). Auto-encoder has been traditionally used to represent manifold without supervision. In particular, VAE, one type of auto-encoders, is one of the most popular approaches to unsupervised learning of complicated distributions. Since any supervision is not in training process, the manifold constructed is hidden to us. As we mentioned in Section 1, this is usually too complex to generate the conditioned Figure 2: (a) The process of pre-defining a latent space. The LSC-VAE is trained to project the data into the appropriate position on latent space. (b) Generating process of the proposed method. The latent space is pre-defined in the process of (a).data. Therefore, we allow LSC-VAE to learn a representation of data with supervision. It compresses data into a particular place on latent space according to its features. The proposed model consists of two modules that encode a data x i to a latent representation z i and decode it back to the data space, respectively. DISPLAYFORM0 Index i means a feature which is included in data x and latent space z. The encoder is regularized by imposing a prior over the latent distribution P (z). In general, z ∼ N (0, I) is chosen, but we choose z i ∼ N (µ i, I)for controlling latent space. In addition, if we want to produce data which has multiple features i, j, we generate data from z ij ∼ N (µ i + µ j, I) 2. The loss function of LSC-VAE is as follows. DISPLAYFORM1 where D KL is the Kullback-Leibler divergence. The first term of equation 3 is related to reconstruction error and the second term is related to appropriate projection of data to the latent space. For example, when LSC-VAE projects the data with i− and j−features into the latent space, it is trained to map the data into the pre-defined latent space (N (µ i + µ j, I)) with L prior in equation 3 so as to locate the reconstructed data as similar to the input data according to its characteristics using L V AE. Therefore, LSC-VAE can be used in initializing GAN and it is demonstrated that LSC-VAE is valid and efficient for LSC-GAN in the next section. GAN has led to significant improvements in data generation BID4. The basic training process of GAN is to adversely interact and simultaneously train G and D. However, because the original GAN has a critical problem, unstable process of training BID17, the least squares GAN (LS-GAN) is proposed to reduce the gap between the distributions of real data and fake data by BID14. FIG1 shows the objective function of the LS-GAN. p data is the probability distribution of the real data. G(z) is generated data from a probability distribution p z, and it is distinguished from the real by D. 
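A minimal sketch of the modified VAE objective described here — reconstruction plus a KL term that pulls q(z|x) toward the feature-specific prior N(μ_i, I) rather than N(0, I); the closed-form KL below is the standard one for diagonal Gaussians, and the sum-reduction is an assumption:

```python
import torch

def lsc_vae_loss(x, x_recon, z_mu, z_logvar, prior_mu):
    """L = L_VAE (reconstruction) + L_prior = KL(q(z|x) || N(prior_mu, I))."""
    recon = torch.sum((x_recon - x) ** 2)
    # closed-form KL between N(z_mu, diag(exp(z_logvar))) and N(prior_mu, I)
    kl = 0.5 * torch.sum(z_logvar.exp() + (z_mu - prior_mu) ** 2 - 1.0 - z_logvar)
    return recon + kl

# for data carrying both features i and j, prior_mu would be mu_i + mu_j
```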
DISPLAYFORM0 The main differences of the proposed model with VAE-GAN and LS-GAN is that LSC-GAN is based on LSC-VAE for initializing a latent space to control it. To produce the type of data we want, we just input latent variable z i ∼ N (µ i, I) to G, if the data has i−feature. Besides, we add the encoder of LSC-VAE into LSC-GAN to make sure that the generated data actually have the desired features. The encoder projects back to latent space so as to be trained to minimize the difference between latent space where data is generated and the space where the compressed data is projected. Equation 5 is about loss of D and loss of encoder and G. DISPLAYFORM1 Since the original GAN has disadvantage that the generated data are insensible because of the unstable learning process of the G, we pre-train G with decoder of LSC-VAE. The goal of the learning process of generating data of G is the same as equation 6 from equation 5, and it is equivalent to that of equation 7. However, it is not efficient to pre-train the G, because it depends on the parameters of the D. Therefore, we change this equation to equation 8 again, and it is represented only by the parameters of G. In this paper, to train the G with equation 8, we use the decoder of LSC-VAE, which is trained by using Dec(Enc(x)) ≈ x. The of LSC-VAE is that DISPLAYFORM0 can reach a goal of GAN (p data ≈ p G) stably, which is proved by Theorem 1 and 2. DISPLAYFORM1 where X i is real dataset with i−feature. From the game theory point of view, the GAN converges to the optimal point when G and D reach the Nash equilibrium. In this section, let p G be the probability distribution of data created from G. We show that if G(z) ≈ x, i.e., p data ≈ p G, the GAN reaches the Nash equilibrium. We define DISPLAYFORM0 We train G and D to minimize J(D, G) and K(D, G) for each. Then, we can define the Nash equilibrium of the LSC-GAN as a state that satisfies equation equation 9 and equation equation 10. Fully trained G and D are denoted by G * and D *, respectively. DISPLAYFORM1 Theorem 1. If p data ≈ p G almost everywhere, then the Nash equilibrium of the LSC-GAN is reached. Before proving this theorem, we need to prove the following two lemmas. DISPLAYFORM2 The proof of Lemma 1 and 2 were discussed by Kim et al. BID7. We assume that p data ≈ p G. From Lemma 1 and Lemma 2, if p data ≈ p G, then J(D, G) and K(D, G) both reach minima. Therefore, the proposed GAN reaches the Nash equilibrium and converges to optimal points. By theorem 1, GAN converges when p d ≈ p g, and it is done to some extent by the modified VAE, i.e. | p DISPLAYFORM3 Therefore, the proposed method is useful to initialize the weight of the generative model. However, it shows only validity of using VAE when learning GAN. We prove that it is also efficient by proving theorem 2. Assume that a model f is well-trained if and only if ∇L f ≈ 0, where L f is the loss function of f.Theorem 2. Let en k, de k be k epoch-trained encoder and decoder whose goal is DISPLAYFORM4 Proof. Notice that the derivative is unique, and a derivative of linear function is itself. Since en and de are trained with L V AE and L prior, the following statement is satisfied. DISPLAYFORM5 By differentiating the formula, DISPLAYFORM6 Since the derivative of linear function is itself, it derives to DISPLAYFORM7 With the fact that D(x) = 1, ∀x ∈ X and equation 11, it finally derives to DISPLAYFORM8 By theorem 1 and 2, the proposed learning process is valid and efficient. 
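For concreteness, here is a sketch of the least-squares adversarial losses this section builds on, with the extra encoder term the LSC-GAN adds so that Enc(G(z)) stays close to the latent variable the data was generated from; the weighting lam and the 0/1 target coding are assumptions:

```python
import torch

def d_loss_ls(d_real, d_fake):
    """Least-squares discriminator loss: push D(real) toward 1 and D(fake) toward 0."""
    return 0.5 * torch.mean((d_real - 1.0) ** 2) + 0.5 * torch.mean(d_fake ** 2)

def g_loss_lsc(d_fake, z, z_back, lam=1.0):
    """Generator loss plus latent consistency: the encoder projects G(z) back to z."""
    adv = 0.5 * torch.mean((d_fake - 1.0) ** 2)
    latent = torch.mean((z_back - z) ** 2)
    return adv + lam * latent
```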
To verify the performance of the proposed model, we use the celebA dataset BID13. It is a large-scale face attributes dataset. We crop the initial 178×218 size to 138×138, and resize them as 64×64. We use 162,769 images in celebA and 14 attributes: black hair, blond hair, gray hair, male, female, smile, mouth slightly open, young, narrow eyes, bags under eyes, mustache, eyeglasses, pale skin, and chubby. We assign 20 dimensions to each feature and set mean of the i th -20 dimensions as 1. For example, if an image has i-feature, the elements of i * 20 th to (i + 1) * 20 th of the image's latent variable are 1 in average and 0 in the remainder and we denote that latent variable as n i. As shown in FIG0, we generate images from a specific latent space by using LSC-GAN. The images in the first column are generated to have'female' and'blond hair' features. We confirm that the condition works well. The images in the remaining columns are transformed using equation 15 for the features listed below. For example, if we generate an image x i which has i−feature from the latent variable z i, we add n j to add j−feature into the image. DISPLAYFORM0 where x ij is an image which has i− and j−features, and z i is the latent variable for i−feature. To show that the proposed model does not simply memorize data but understand features of data and generate them, we generate images from a series between two random images as in DCGAN. As shown in FIG1, the change between images is natural so that we can say that the latent space of LSC-GAN is a manifold. Besides, the images in the middle column have both features of images in leftmost and rightmost, ing in more simple manifold as shown in Fig. 1.Unlike other GAN models, the LSC-GAN fully understands features of data so as to generate data including inverse-feature. We only train the model about the presence of the'pale skin' and'smile' features, but the model also learned about the reverse of'pale skin' and'smile' automatically as shown in the fourth and the ninth column of FIG2. Besides, if we assign a value of 2 rather than 1 to the average of latent variable which is related to'mustache', we can see that more mustaches are created in the last column in FIG2. Therefore, our model can automatically infer and generate the data with inverse-feature that do not exist in the dataset. This shows that the proposed model has the ability to deduce a negative feature by itself although only positive features are used in trainingTo verify the proposed model, we conduct subjective test about the quality of the generated data. We generate data by using DCGAN, EBGAN, and the proposed GAN. We randomly choose 25 generated data for each model. We perform the subjective test on 30 subjects and ask them to evaluate the quality of the generated data in 5 ways: very low, low, medium, high, and very high. We collect the of 750 questionnaires, which are the evaluated of 25 generated images by 30 subjects, and summarize them in TAB0. We score 1,2,3,4, and 5 points for each evaluation which is shown in the last column in TAB0. Our model not only generates images according to input conditions, but also compress efficiently. We calculate the compression rate with rate= size inputdata /size bottleneck /#classes. As shown in TAB1, our proposed model has the best compression rate compared to others. 
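The latent layout described above is easy to state in code; a sketch of the block-structured feature vectors n_i and the feature-addition rule of equation 15 (the concrete feature indices are only illustrative, and decoding z_ij through the generator G is omitted):

```python
import numpy as np

n_features, dims_per_feature = 14, 20
latent_dim = n_features * dims_per_feature  # 280

def feature_vector(i):
    """n_i: mean-1 entries on the i-th block of 20 dimensions, 0 elsewhere."""
    n = np.zeros(latent_dim)
    n[i * dims_per_feature:(i + 1) * dims_per_feature] = 1.0
    return n

# sample z_i for an image with feature i, then add feature j: x_ij = G(z_i + n_j)
z_i = np.random.randn(latent_dim) + feature_vector(3)  # e.g. feature 3
z_ij = z_i + feature_vector(5)                          # add feature 5
```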
This proves experimentally that the LSC-VAE, whose role is theoretically established by Theorems 1 and 2, is helpful in initializing the weights of the LSC-GAN, and that good performance can be achieved even with small latent spaces. In this paper, we address some significant issues in generative models: unstable training, the hidden manifold of data, and extensive hardware resources. To generate data whose type is what we want, we propose a novel model, LSC-GAN, which can control the latent space to generate the data that we want. To deal with the larger scale of the latent space caused by deploying various distributions in one latent space, we use the LSC-VAE and theoretically prove that it is a proper method. Also, we confirm that the proposed model can generate the data we want by controlling the latent space. Unlike existing generative models, the proposed model deals with features continuously, not discretely, and compresses the data efficiently. Based on the present findings, we hope to extend the LSC-GAN to more varied datasets such as ImageNet or voice datasets. In future work, we plan to conduct more experiments with various parameters to confirm the stability of the model. We will also experiment by reducing the dimension of the latent space to verify that the proposed model is efficient. Besides, since the encoder can project the data into the latent space according to the features inherent in the data, it could be used as a classifier.
We propose a generative model that not only produces data with desired features from the pre-defined latent space but also fully understands the features of the data to create characteristics that are not in the dataset.
1,476
scitldr
With the ever-increasing demand and the resultant reduced quality of services, the focus has shifted towards easing network congestion to enable more efficient flow in systems like traffic, supply chains and electrical grids. A step in this direction is to re-imagine the traditional heuristics-based training of such systems, as this approach is incapable of modelling the involved dynamics. While one can apply Multi-Agent Reinforcement Learning (MARL) to such problems by considering each vertex in the network as an agent, most MARL-based models assume the agents to be independent. In many real-world tasks, agents need to behave as a group, rather than as a collection of individuals. In this paper, we propose a framework that induces cooperation and coordination amongst agents, connected via an underlying network, using emergent communication in a MARL-based setup. We formulate the problem in a general network setting and demonstrate the utility of communication in networks with the help of a case study on traffic systems. Furthermore, we study the emergent communication protocol and show the formation of distinct communities with grounded vocabulary. To the best of our knowledge, this is the only work that studies emergent language in a networked MARL setting. Co-existing intelligent agents affect each other in non-trivial ways. Consider, for example, two agents playing a modified variant of archery in two dimensions. Agent A controls the speed at which the arrow is released, but it can only shoot along the y-axis. Agent B controls the wind speed along the x-axis. The arrow drifts along the direction of the wind with a magnitude proportional to the wind speed. A target is specified by (x, y) coordinates, and the agents must act cooperatively to shoot the target. In this setup, the optimal action for agent A depends on the current policy of agent B and vice versa. Any change in one agent's policy modifies the other agent's perception of the environment dynamics. Formally, this issue is referred to as the non-stationarity of the environment in a multi-agent setup. This non-stationarity makes the learning problem hard, and approaches that try to independently learn optimal behavior for agents do not perform well in practice. Thus, it is important to develop models that have been tailored towards training multiple agents simultaneously. In this paper, we focus on a specific multi-agent setup where the agents are connected to each other via an underlying fixed network topology. We cast the problem in the multi-agent reinforcement learning (MARL) framework and assume that agents are rewarded by the environment based on their actions, and that their goal is to cooperatively maximize these rewards. We further assume that the agents have been endowed with the ability to communicate with one another along the network edges to achieve cooperation. However, the communication protocol is not fixed, and the agents must learn a protocol to communicate with each other in order to maximize their rewards. Communication is essential in a multi-agent setup. In many practical scenarios, agents may only observe a small portion of the global environment state, and they must take actions based on these local observations. As discussed above, agents affect each other in non-trivial ways through their actions. Thus, for achieving long-term cooperation, it is essential for agents to be able to share their intents to complement the information provided by the local observation of each agent. Communication provides the agents the ability to do so.
Many real-world problems can be cast in this framework, and we provide a number of concrete examples after formally defining the problem setup in Section 2. For clarity of exposition and to be more concrete, in this paper we focus on a particular real-world problem as a case study: the problem of intelligently managing traffic. We present the traffic management problem as a particular instantiation of the abstract multi-agent reinforcement learning problem that we have informally defined above (see Section 2 for a formal definition). In this context, the agents correspond to traffic lights and the underlying network is the network of roads. Agents receive rewards from the environment based on factors like queue length at the traffic junction, and they must communicate with each other to cooperatively maximize their rewards, thereby ensuring a smooth flow of traffic. We propose a MARL-based traffic system that allows coordination between traffic signals (agents) via: (i) inter-agent communication; and (ii) a cooperative reward structure. At each time-step, the agents communicate with their immediate neighbours in the underlying network by broadcasting a message in the form of a discrete symbol (each message corresponds to a word, represented by a binary vector in our experiments). Over time, the agents are trained to exploit this broadcasted message to coordinate with each other and maximize their rewards. As the agents are trained, a language emerges between pairs of agents. Since the agents learn the communication protocol themselves, our approach is different from methods that use a fixed protocol for communication, like smart grids. We empirically demonstrate the utility of communication in this setup and also investigate a cooperative reward structure over the network of traffic junctions. Our model uses a query-based soft attention mechanism to help the agents come up with more complex cooperative strategies. We perform extensive experimental evaluation to demonstrate that (i) our method outperforms baseline approaches; (ii) communication is useful; (iii) communication is grounded in actions taken by the agents; and (iv) the cooperative reward structure promotes communication and hence coordination. Markov games generalize the notion of a Markov decision process (MDP). A Markov game is used to model an environment where N intelligent agents co-exist. Let S ⊆ R^{d_s} be the set of all possible environment states (i.e., the state-space) and denote by O_i ⊆ R^{d_{o_i}} the observation space of agent i. At each step, agent i chooses an action from its action space A_i, and in response to the actions chosen by all agents, the environment state is updated using the transition function T: S × A_1 × · · · × A_N → ∆(S), where ∆(S) is the set of all probability distributions defined over the set S. The environment provides a reward to each agent at each time-step using a reward function r_i: S × A_i → R. For i = 1, 2, ..., N, the goal of the i-th agent is to find a policy π_{θ_i}: O_i → ∆(A_i) that chooses optimal actions so as to maximize the long-term reward R_i = Σ_t γ^t r_i^t, where r_i^t is the reward received by agent i at time-step t and γ ∈ [0, 1) is the discount factor. We model the problem as a Markov game with two additional assumptions: (i) letting V = {1, 2, ..., N} be the set of all agents, we assume that the agents are connected to each other via an underlying network whose edge set is given by E; and (ii) agents can communicate with their immediate neighbors in the underlying network.
To communicate, at each step, agents broadcast a message. This message is received by the neighbors at the next time-step. The observation space of each agent is augmented so that, in addition to the local observation made by that agent, it also includes the messages received from all of its neighbors. The agents act in a cooperative setting as follows: first, the environment provides a reward r_i^t to all agents. Then, each agent i augments this reward to include information about the rewards received by its neighbors, i.e., r̃_i^t = β r_i^t + (1 − β) Σ_{j ∈ U(i)} r_j^t. Here, β ∈ [0, 1] assigns relative importance to the agent's own rewards versus the rewards of its neighbors, and U(i) is the set of neighbors of agent i. The agents are trained to maximize the long-term rewards R̃_i = Σ_t γ^t r̃_i^t. As agents also benefit from the rewards obtained by their neighbors, they act cooperatively. Such a setup can, for instance, be used to model an intelligent electrical distribution network. In this application, agents represent power stations and the underlying network corresponds to the electrical grid. Each agent has a production capacity, and it may choose to share the power generated by it with a neighboring agent (action space). All agents have to meet the local demand, which changes stochastically, and they observe various attributes like power requirements, consumer demand and so on (observation space). Rewards r_i are provided based on how successful an agent is in meeting the demand. The agents must learn to communicate with each other to effectively share the power generated by them to maximize their cooperative long-term rewards R̃_i. As another application, consider a supply chain where interconnected warehouses (agents) have to manage their inventory to meet the local demand. As before, agents can choose to ship goods that they have in their inventory to their neighbors (action space). The warehouses have to meet stochastically changing consumer demands and must learn to communicate effectively in order to share goods so that an appropriate level of inventory is maintained at each warehouse. As in the case of the electrical distribution network, the rewards received by agents depend on their ability to effectively meet the local demand. While many applications are possible, in this paper we focus on the problem of traffic management as a concrete instantiation of the proposed abstract problem. Each traffic junction corresponds to an agent; the agents are connected to each other via the underlying network of roads. Agents can control the flow of traffic in the system by modulating the traffic lights (action space). They take actions based on their local observations, which in our case are images containing snapshots of the immediate surroundings of the agent. Agents are rewarded based on predefined attributes like the queue length at the traffic junction, the average waiting time of vehicles at the junction and so on. We again train the agents to maximize the long-term cooperative rewards. Clearly, in order to ensure a smooth flow of traffic, the agents must communicate with each other to share their intent. We use the SUMO traffic simulator to simulate the traffic for the experiments. In our setup, we use binary messages with d = 8. As such, agents are limited to a vocabulary of 256 words. Communication is one-hop, i.e., we only allow agents to communicate with their immediate neighbours. At each time-step, the agents broadcast 8-bit messages to their respective neighbours.
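A sketch of the cooperative reward mixing just described, for a whole network at once; the sum (rather than mean) over neighbors follows the reconstruction above, and the toy graph is only illustrative:

```python
import numpy as np

def mixed_rewards(rewards, adjacency, beta=0.5):
    """r_tilde_i = beta * r_i + (1 - beta) * sum_{j in U(i)} r_j."""
    neighbor_sum = adjacency @ rewards  # adjacency[i, j] = 1 iff j is a neighbor of i
    return beta * rewards + (1.0 - beta) * neighbor_sum

# toy 3-agent line graph: 0 - 1 - 2
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
print(mixed_rewards(np.array([1.0, -0.5, 2.0]), adjacency))
```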
Agents rely on their local observations and received messages to modulate the traffic signals and to decide the next message to be broadcasted. We opted for discrete communication rather than continuous since it is more interpretable. For the real world applications that we have mentioned above, transparency in decision making is important and using discrete communication is a step in this direction. Towards this end, we demonstrate that the emergent language is grounded, i.e. messages correspond to actions in Section 5. By forcing agents to communicate with the help of discrete symbols, we ensure that humans can inspect and interpret the conversation logs. The simplest and one of the most widely used approaches to tackle the traffic management problem is the Fixed-time Control , which uses a predefined cycle for planning. Network level control like the Max-Pressure Controller differ from the conventional approaches in respect to greedily switching phases based on demand (queue length in adjoining lanes). Similarly, Self-Organizing Traffic Light Control (SOTL) method switches the traffic lights based on when the number of vehicles in the lanes crosses a predefined threshold. These methods are locally adapted to the traffic conditions which in turn is used to achieve global synchronization. Optimization methods like TUC assume uniform traffic flow in a certain time period to minimize the vehicle travel time. We argue that conventional traffic control methods base their problem setup on assumptions which do not hold while modelling the dynamics of traffic. In the recent past, people have resorted to the use of Reinforcement Learning methods to dynamically adapt to the traffic conditions (Prabuchandran K. J. et al., 2014; ;), wherein, each junction is treated as an agent and the changes in traffic lights are actions. The agents try to maximize their long term reward by mapping a predefined observation space (representation of the traffic conditions at its junction) to the action space. However, these Deep-Q Learning (DQN) based approaches often consider the agents to be independent of one another, thus rendering the environment non-stationary, and raising convergence and stability issues. A straightforward way to address the above problems is to make the agents coordinate amongst themselves by including the information from the neighbouring agents in its state space (; ;), assuming total observability. Another way would be to use centralized training to address stationary related issues. However, centralized training poses scalabilty problems (especially in a traffic-management system where the complexity increases linearly with the junctions) and cannot be used to learn policies in real time. Scalable multi-agent approaches like Van der propose to train a smaller source problem involving fewer agents (2 agents) and transfer it to a larger target problem using transfer-learning. Prior work on extending MARL setup to networks using a fully decentralized training has been proposed by , wherein the authors propose to share the local parameters of the agents through communication. Drawing motivation from the recent advancements in emergent communication , we try to induce coordination amongst agents with partial observability. We argue that with a cooperative reward structure, agents can be made to pass on relevant information to their neighbours, which the independent agents were previously incapable of modelling using their own observation space. 
Concurrent to our work, a similar idea of using communication in traffic networks has been proposed. That approach differs from ours in the following ways: (i) it considers cars as agents, whereas we formulate this as a network problem with nodes, which are fixed, as agents; (ii) it uses continuous communication, whereas we use discrete communication in our setup for better interpretability; (iii) in that setup, every agent can communicate with all the other agents, whereas we restrict communication to an agent's immediate neighbours, with a priority assigned to each message. Image Encoder: As shown in Fig. 1, we use a 10-agent traffic network (Network 1) which comprises both 3-way and 4-way junctions. The observation space of each agent in the traffic network comprises the image representation of the junction. This amorphous representation of the observation space propels the agent to extract the necessary information from the image, like the length of the queue in the adjoining lanes or the current phase at the junction. One can always add such information to the state directly and, in general, adding more information to the state space often leads to better results; nonetheless, such systems carry redundant information and are impractical to deploy. For instance, consider lining the lanes with sensors to count the vehicles. A Convolutional Neural Network (CNN) is used to process the input image and extract the specific features required for optimizing traffic. The agents then use this information to take actions. Since it is unreasonable to change the traffic lights every second, we constrain the agents to take an action once every t time-steps (t = 5s in our setup). The Accumulator is a Recurrent Neural Network (RNN) which keeps track of the observations for t time-steps before an action is taken. Let the encoded image information of agent i at time-step k be o_i^k ∈ R^{d_o} (d_o is the output dimension of the encoder CNN). The Accumulator uses this encoded vector to update its memory by h_{Acc,i}^k = RNN(h_{Acc,i}^{k−1}, o_i^k). Communicator: In order to establish complex cooperative strategies, the agents must learn to prioritize the messages received from their neighbours. In that context, we incorporate a query-based soft-attention mechanism in our communication setup. At each time-step, an agent generates a query (q) which is used to assign weights to the messages received in the previous time-step. Intuitively, an agent enquires about the unaccounted information (which it cannot model) in O_i, and the weights (α) are a way to understand the contributions of its neighbours to the state information. Let the broadcasted messages be denoted by m, where m ∈ M (M is the set of all messages). For agent i at the k-th time-step with neighbours belonging to the set U_i, the pooled message is an attention-weighted combination of the received messages, with weights given by a softmax over query-message scores; here W ∈ R^{d×d_h}, q ∈ R^d and α ∈ R^d are the attention parameters. At each time-step, the pooled message is fed to the Communicator RNN and is processed as h_{com,i}^k. It should be noted that we reinitialize the hidden state of the RNN as soon as the action is taken (i.e., after every t time-steps). A weighted mixture of the output of the RNN along with the output of the image encoder (o_i^k) is passed through a setup which emits discrete symbols with the help of the Straight-Through Gumbel-Softmax (G). This renders the model differentiable even with discrete symbols. The output message is then broadcasted to all the agent's neighbours (U_i). Refer to Appendix A.2 for more details.
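A sketch of the query-based soft-attention pooling described above; the bilinear score function and tensor shapes are simplified assumptions (the text only fixes W ∈ R^{d×d_h} and q ∈ R^d):

```python
import torch
import torch.nn.functional as F

def pool_messages(query, messages, W):
    """Soft-attention pooling of neighbour messages.

    query:    (d,)    agent's query vector q for this time-step
    messages: (n, d)  8-bit messages received from n neighbours
    W:        (d, d)  bilinear attention weights (simplified from d x d_h)
    """
    scores = messages @ (W @ query)  # one relevance score per neighbour
    alpha = F.softmax(scores, dim=0) # attention weights over neighbours
    return alpha @ messages          # pooled message fed to the Communicator RNN

d = 8
pooled = pool_messages(torch.randn(d), torch.randn(3, d), torch.randn(d, d))
```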
Action Selector: A 4-way junction can have 2^16 combinations of actions (2^9 for a 3-way junction); however, most of them are either highly dangerous for traffic or do not contribute to a smoother flow. Adopting conventional approaches, we fix the action space to 4 actions for a 4-way junction and 3 for a 3-way junction (Fig. 3). The actions are determined such that the vehicles can move smoothly without conflicts. The action values are a function of the Accumulator-RNN and Communicator-RNN hidden states, i.e., f_{action_i}(h_{Acc,i}^k, h_{com,i}^k). At every t-th time-step, the action values are generated and the traffic lights are switched to the action with the maximum value. Reward Structure: The reward at a junction i is a combination of the following factors: (i) Queue Length: the total number of waiting vehicles (with speed < 0.1 m/s) on the adjoining lanes; (ii) Waiting Time: the sum of the waiting times of all vehicles in the adjoining lanes, wherein a vehicle is considered to be waiting if it travels at a speed < 0.1 m/s; (iii) Delay: for lane l, the delay is calculated as D_l = 1 − (average lane speed / speed limit). Communication: What gives the agents the incentive to communicate? An agent might turn out to be selfish and focus on solving its own problems even while it broadcasts unstructured messages to its neighbours. Even if the agents are forced to communicate, what makes us believe that the agents will pay any heed to the messages that they receive, considering the individual reward structure? In view of this, we highlight two important aspects of communication: (i) Cooperation: Meaningful information exchange will arise only if there is a dearth of knowledge in the local observations of the agents. In other words, there are factors that the agent is uncertain of and cannot model in the current setup. To see why, let us consider a scenario where there are two interconnected independent agents (say A and B) trying to model the traffic. The arrival rate of agent A is modelled as Poisson, with rate λ, which is a function of the environment. As agent B takes an action, the environment is no longer stationary and λ varies in a way that agent A is incapable of modelling (let us assume λ now becomes (λ − δλ)). Hence, the agent learns to ignore these variables altogether. In order to make sense of the unaccounted changes in its observation space, agent B has to communicate the difference in λ to agent A, which can be assumed to be Poisson distributed with rate δλ (the superposition of independent Poisson processes results in a new Poisson process whose rate is the sum of the rates of the independent Poisson processes); (ii) Coordination: Even while groups of agents implicitly come up with a common communication protocol, there should be a higher-level harmony amongst all agents, i.e., the groups of agents in a network must systematically synchronize or coordinate to realize the same goal. In order to drive the agents to work for the greater good, they must arrive at a common protocol which can be used universally. This leads to the development of a common language with overlapping vocabularies from different communities of agents. Additionally, if an agent in a sub-network fails, the other agents in that group can still communicate with other groups. Comparison with baselines: We compare our model with the following baselines: (i) Fixed-time Control: we use our action space and periodically switch from one action to the other in a round-robin fashion after fixed time intervals of duration t = 5 steps.
(ii) Self-Organizing Traffic Light control (SOTL): SOTL switches from one phase to the other when the queue length at any of the adjoining lanes exceeds a predefined threshold. In our implementation, we fix this threshold to 5 while retaining our action space. (iii) Deep-Q Learning (DQN): We also implement the standard DQN framework with the same observation and action space as ours, where each agent is trained independently. (iv) IntelliLight: IntelliLight uses a more refined state representation which includes the queue length, the number of vehicles and the updated waiting time at the adjoining lanes of a junction, in addition to its image representation. We replicate their architecture with the same state representation but with our action space, since their action space is fraught with danger in a real-life traffic management system. The results (Fig. 4) reveal that our method outperforms all baselines in terms of the end result (final reward at convergence). While DQN training commences at a high reward (because of the high initial value of ε in the ε-greedy strategy), it converges quickly without much improvement. We also wish to point out that parameter updates in DQN happen at every time-step once the buffer size exceeds the batch size, as opposed to once every episode in our setup. On average, the difference between the final rewards at convergence of the DQN and our setup is ≈ 55. An important question that naturally follows is whether communication is required at all. It may indeed be possible for the agents to adapt even while they broadcast random sets of symbols which do not bear any significance. To test this possibility, we perform two sets of experiments: (i) We made the agents broadcast 8-bit messages with all bits set to zero. We found that the training not only converged more slowly, it also stabilized at a lower reward compared to the original setting. Post convergence, the difference in rewards was ≈ 40. (ii) We visually impaired one of the agents (we call it a blind agent) such that it no longer receives its local observation while taking an action, although it can still receive messages from its neighbours and transmit its state information to them; i.e., the agent processes its input observation at each time-step but the weight W_CNN is set to zero. Hence, it cannot backpropagate valuable gradients from its actions to the image encoder. We noticed that the overall setup still converged to the same results as the original setup. We attribute this to the fact that the blind agent receives the necessary information from its neighbours through communication. On repeating the experiment for two neighbouring blind agents, the convergence worsens (Fig. 5). Thus, we can safely conclude that communication not only works but is also necessary. We performed the above experiments on agents 4 & 5 (Network 1 of Fig. 1). In order to verify grounding, we performed experiments using Pointwise Mutual Information (PMI). We constructed a matrix for each pair of agents, where the rows correspond to the actions of one agent (say i) and the columns correspond to the messages sent by the other agent in the pair. We computed the Singular Value Decomposition (SVD) of the PMI matrix (pmi ∈ R^{|A_i|×256}) given as pmi = USV^T, where U ∈ R^{|A_i|×k}, S ∈ R^{k×k} and V ∈ R^{k×256} (k ≤ |A_i|). Figure 6 shows the t-SNE (van der Maaten & Hinton) plot of the rows of the V matrix. The rows of V can be interpreted as word embeddings. As shown in Fig. 6, we see distinct clusters forming for different actions.
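A minimal sketch of this grounding analysis — building a PMI matrix from (action, message) co-occurrence counts and factorizing it with SVD; the count matrix below is random placeholder data, not the paper's logs:

```python
import numpy as np

def pmi_embeddings(counts, k=2, eps=1e-8):
    """PMI over (action, message) co-occurrences, then a rank-k SVD.

    Rows of Vt.T act as message ("word") embeddings; rows of U as action embeddings.
    """
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)        # marginal over actions
    py = p.sum(axis=0, keepdims=True)        # marginal over messages
    pmi = np.log((p + eps) / (px @ py + eps))
    U, S, Vt = np.linalg.svd(pmi, full_matrices=False)
    return U[:, :k], S[:k], Vt[:k]

counts = np.random.randint(0, 50, size=(4, 256)).astype(float)  # 4 actions x 256 words
U, S, V = pmi_embeddings(counts)
```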
In other words, we can say that neighbouring agents use specific set of words to indicate actions. The words which are haphazardly placed can be considered as those that haven't been assigned a specific meaning or the ones which can be used in many different contexts. Additionally, we plot the the rows of U matrix (with k = 2) i.e. the action embeddings corresponding to the broadcasted messages of all agents. For instance, as shown in Fig. 8, we pair the actions of agent 0 with the broadcasted messages from the rest of the agents. For each such pair, we get a U matrix which we plot using different colors for different agents. We notice that the points corresponding to the U matrix of the neighbours tend to overlap as highlighted by the red circles (for agent 0, agents 1 and 3 overlaps). This implies that neighbours U 0 are consistent in the use of messages to agent 0, which in turn bases its actions on the received messages. See Appendix A.3 for the remaining plots on grounding. Fig.(a,b) ]. The numbers {0, . . ., 1} denote the agents in the network. Fig.(c) on the right represents the clustering in the 28-agent network (Network 2) with A & B denoting two 14-agent sub-networks. The position of the numbers denote the mean of the clusters. We obtain a tf-idf matrix where rows correspond to agents and columns correspond to the words spoken by these agents. On plotting a t-SNE plot of the rows of this matrix for Network 1 (Fig. 1), we noticed that the agents that are broadcasting to the same neighbour tend to be clustered together. For instance, from Experiments on larger networks: Due to communication, we expect well-defined communities to emerge (in terms of the vocabulary usage) if huge networks are sparsely connected to one another. To test this hypothesis, we take a pair of networks, each having 14 agents and connect them by a single one-way lane (Network 2 in Fig. 1). We plot the t-SNE embeddings of the rows of tf-idf matrix (Fig. 7 (c) ). We notice two distinct clusters (denoted by A and B) with a overlapping region, which we argue, comprises the common vocabulary used by agents from both networks. To sum it up, we not only find evidence of cooperation among agents but also coordination among groups of agents across networks. On removing the cooperative reward structure, the model takes twice as much time to converge, presumably because the agents have little incentive to communicate. In this paper, we proposed an approach to mitigate network congestion with the help of traffic networks. Though extensive experiments, we demonstrated the benefit of emergent communication to optimize traffic flow over existing MARL approaches. Additionally, we performed qualitative studies on the emergent language and showed that it is grounded in actions. Human communication is discrete in nature and can, in general, be represented by categorical variables. Additionally, discrete variables are more interpretable which makes it well suited for real life problems like traffic management, where one needs transparency. However, the use of discrete latent variables render the neural network non-differentiable. The Gumbel Softmax gives a differentiable sample from a discrete distribution by approximating the hard one-hot vector into a soft version. The Gumbel distribution has two parameters: µ and β. The standard Gumbel distribution where µ and β are 0,1 respectively has probability density function: G = e −(z+e −z). 
Suppose, for a given agent, our model (Communicator) outputs a multinomial distribution over message bits with logits $p = (p_1, \ldots, p_d)$, where $d = 8$. These logits are functions of the inputs and weights, which need to be trained. A simple way to draw samples from a discrete distribution is the Gumbel-max trick, $m = \operatorname{one\_hot}\big(\arg\max_i (p_i + z_i)\big)$, where the noise $z_i$ is independently sampled from the standard Gumbel distribution and is obtained as $z_i = -\log(-\log(u_i))$, with $u_i$ drawn i.i.d. from $\mathrm{Uniform}(0, 1)$. Since the arg max is not differentiable, the Gumbel-Softmax instead produces the soft sample $\tilde m_i = \exp((p_i + z_i)/\beta) \,/\, \sum_{j=1}^{d} \exp((p_j + z_j)/\beta)$, where $\beta$ is the temperature parameter, which, in our experiments, is set to 0.5. As $\beta > 0$, we obtain well-defined gradients $\partial \tilde m_i / \partial p_i$ with respect to the parameters $p_i$. The Gumbel-Softmax can produce a 0.9999-hot vector instead of a hard one-hot vector, depending on $\beta$. Since we want the communication to be discrete, we employ the Straight-Through version of the Gumbel-Softmax estimator with a simple reformulation of the form $m_i = \tilde m_i + \operatorname{detach}(\hat m_i - \tilde m_i)$, where $\hat m$ is the hard one-hot version of $\tilde m$, i.e. $\hat m_k = \mathbb{I}\{\arg\max_{k'} \tilde m_{k'} = k\}$. We use binary 8-bit messages, so the number of categories per bit is fixed during the training process. Since detach prevents gradients from flowing through that node, $\nabla_{p_i} m_i = \nabla_{p_i} \tilde m_i$. This makes the communication discrete in the forward pass even as the gradient flow is smooth during the backward pass. The final output message of an agent is given as $m = (m_1, \ldots, m_d)$. A.3 COMMUNICATION. [Figure: action embeddings from the (orthonormal) matrix $U$ of agent 0 corresponding to messages from all agents. As highlighted by the red circles, the action embeddings of agent $i$ in response to messages from its neighbours (highlighted in gray at the centres of the plots) overlap. The color bar represents the different agents. The plots shown are for agents 0, 1, 8, 9; similar trends are observed for the rest of the agents as well.]
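A sketch of the straight-through Gumbel-Softmax message layer described above, in PyTorch; the Communicator producing the logits is as described in the text, while the per-bit two-class layout is an implementation assumption:

```python
import torch
import torch.nn.functional as F

def st_gumbel_message(logits, beta=0.5):
    """Sample a discrete message with smooth gradients.

    logits : (d, 2) tensor, two logits per bit for d = 8 message bits.
    The forward pass emits hard one-hot bits; the backward pass uses
    the soft Gumbel-Softmax sample (straight-through estimator).
    """
    # Gumbel noise: z = -log(-log(u)), u ~ Uniform(0, 1).
    u = torch.rand_like(logits)
    z = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    m_soft = F.softmax((logits + z) / beta, dim=-1)      # differentiable
    # Hard one-hot version of the soft sample.
    index = m_soft.argmax(dim=-1, keepdim=True)
    m_hard = torch.zeros_like(m_soft).scatter_(-1, index, 1.0)
    # Straight-through: hard values forward, soft gradients backward.
    return m_hard + m_soft - m_soft.detach()

logits = torch.randn(8, 2, requires_grad=True)  # 8 bits, 2 classes each
message = st_gumbel_message(logits)             # discrete in the forward pass
message.sum().backward()                        # gradients flow to the logits
```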
A framework for studying emergent communication in a networked multi-agent reinforcement learning setup.
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs. We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks. Neural Processes (NPs; b; a) are a rich class of models that define a conditional distribution p(y|x, Z, θ) over output variables y given input variables x, parameters θ, and a set of observed data points in a context set Z = {x m, y m} M m=1. A key component of NPs is the embedding of context sets Z into a representation space through an encoder Z → E(Z), which is achieved using a DEEPSETS function approximator . This simple model specification allows NPs to be used for (i) meta-learning , since predictions can be generated on the fly from new context sets at test time; and (ii) multi-task or transfer learning , since they provide a natural way of sharing information between data sets. Moreover, conditional NPs (CNPs; a), a deterministic variant of NPs, can be trained in a particularly simple way with maximum likelihood learning of the parameters θ, which mimics how the system is used at test time, leading to strong performance . Natural application areas of NPs include time series, spatial data, and images with missing values. Consequently, such domains have been used extensively to benchmark current NPs (a; b;). Often, ideal solutions to prediction problems in such domains should be translation equivariant: if the data are translated in time or space, then the predictions should be translated correspondingly . This relates to the notion of stationarity. As such, NPs would ideally have translation equivariance built directly into the modelling assumptions as an inductive bias. Unfortunately, current NP models must learn this structure from the data set instead, which is sample and parameter inefficient as well as impacting the ability of the models to generalize. The goal of this paper is to build translation equivariance into NPs. Famously, convolutional neural networks (CNNs) added translation equivariance to standard multilayer perceptrons . However, it is not straightforward to generalize NPs in an analogous way: (i) CNNs require data to live "on the grid" (e.g. image pixels form a regularly spaced grid), while many of the above domains have data that live "off the grid" (e.g. time series data may be observed irregularly at any time t ∈ R). (ii) NPs operate on partially observed context sets whereas CNNs typically do not. (iii) NPs rely on embedding sets into a finite-dimensional vector space for which the notion of equivariance with respect to input translations is not natural, as we detail in Section 3. In this work, we introduce the CONVCNP, a new member of the NP family that accounts for translation equivariance. 
This is achieved by extending the theory of learning on sets to include functional representations, which in turn can be used to express any translation-equivariant NP model. Our key contributions can be summarized as follows. (i) We provide a representation theorem for translation-equivariant functions on sets, extending a key result on deep sets to functional embeddings, including sets of varying size. (ii) We extend the NP family of models to include translation equivariance. (iii) We evaluate the CONVCNP and demonstrate that it exhibits excellent performance on several synthetic and real-world benchmarks. In this section we introduce the notation and precisely define the problem this paper addresses. Notation. In the following, let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y} \subseteq \mathbb{R}^{d'}$ (with $\mathcal{Y}$ compact) be the spaces of inputs and outputs respectively. To ease notation, we often assume scalar outputs $\mathcal{Y} \subseteq \mathbb{R}$. Define $Z_m = (\mathcal{X} \times \mathcal{Y})^m$ as the collection of $m$ input-output pairs, $Z_{\leq M} = \bigcup_{m=1}^{M} Z_m$ as the collection of at most $M$ pairs, and $Z = \bigcup_{m=1}^{\infty} Z_m$ as the collection of finitely many pairs. Since we will consider permutation-invariant functions on $Z$ (defined later in Property 1), we may refer to elements of $Z$ as sets or data sets. Furthermore, we will use the notation $[n] = \{1, \ldots, n\}$. CNPs model predictive distributions as $p(y \mid x, Z) = p(y \mid \Phi(x, Z), \theta)$, where $\Phi$ is defined as a composition $\rho \circ E$ of an encoder $E \colon Z \to \mathbb{R}^e$ mapping into the embedding space $\mathbb{R}^e$ and a decoder $\rho \colon \mathbb{R}^e \to C_b(\mathcal{X}, \mathcal{Y})$. Here $E(Z) \in \mathbb{R}^e$ is a vector representation of the set $Z$, and $C_b(\mathcal{X}, \mathcal{Y})$ is the space of continuous, bounded functions $\mathcal{X} \to \mathcal{Y}$ endowed with the supremum norm. While NPs employ latent variables to indirectly specify predictive distributions, in this work we focus on CNP models, which do not. For $\tau \in \mathcal{X}$, let $T_\tau$ denote translation by $\tau$, acting on sets as $T_\tau Z = ((x_1 + \tau, y_1), \ldots, (x_m + \tau, y_m))$ and on functions as $T_\tau f = f(\cdot - \tau)$. A mapping $\Phi \colon Z \to H$ is then called translation equivariant if $\Phi(T_\tau Z) = T_\tau \Phi(Z)$ for all $\tau \in \mathcal{X}$ and $Z \in Z$. Having formalized the problem, we now describe how to construct CNPs that are translation equivariant. We are interested in translation equivariance (Property 2) with respect to translations on $\mathcal{X}$. The NP family encoder maps sets $Z$ to an embedding in a vector space, for which the notion of equivariance with respect to input translations in $\mathcal{X}$ is not well defined. For example, a function $f$ on $\mathcal{X}$ can be translated by $\tau \in \mathcal{X}$: $f(\cdot - \tau)$. However, for a vector $x \in \mathbb{R}^d$, which can be seen as a function $[d] \to \mathbb{R}$, $x(i) = x_i$, the translation $x(\cdot - \tau)$ is not well defined. To overcome this issue, we enrich the encoder $E \colon Z \to H$ to map into a function space $H$ containing functions on $\mathcal{X}$. Since functions in $H$ map from $\mathcal{X}$, our notion of translation equivariance (Property 2) is now also well defined for $E(Z)$. As we demonstrate below, every translation-equivariant function on sets has a representation in terms of a specific functional embedding. Definition 1 (Functional mappings on sets and functional representations of sets). Call a map $E \colon Z \to H$ a functional mapping on sets if it maps from sets $Z$ to an appropriate space of functions $H$. Furthermore, call $E(Z)$ the functional representation of the set $Z$. Considering functional representations of sets leads to the key result of this work, which can be summarized as follows. For $\mathcal{Z} \subseteq Z$ appropriate, a continuous function $\Phi \colon \mathcal{Z} \to C_b(\mathcal{X}, \mathcal{Y})$ satisfies Properties 1 and 2 if and only if it has a representation of the form $\Phi(Z) = \rho(E(Z))$, with $E(Z) = \sum_{(x, y) \in Z} \phi(y)\, \psi(\cdot - x) \in H$, for some continuous and translation-equivariant $\rho \colon H \to C_b(\mathcal{X}, \mathcal{Y})$ and appropriate $\phi$ and $\psi$. Note that $\rho$ is a map between function spaces.
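A minimal numerical sketch of the functional embedding in the display above, with an EQ choice for ψ and the power-series choice for φ used later in the paper; evaluating E(Z) on a grid of points is how the model makes the function concrete, and the toy values below are placeholders:

```python
import numpy as np

def functional_embedding(xs, ys, t_grid, length_scale=0.1, K=1):
    """E(Z)(t) = sum over (x, y) in Z of phi(y) * psi(t - x).

    xs, ys : (M,) context inputs and outputs.
    t_grid : (T,) points at which to evaluate the embedding.
    Returns a (T, K+1) array; channel 0 (phi_0(y) = 1) is the density
    channel, channel k holds weighted sums of y**k.
    """
    diff = t_grid[:, None] - xs[None, :]                   # (T, M)
    psi = np.exp(-0.5 * (diff / length_scale) ** 2)        # EQ kernel
    phi = np.stack([ys ** k for k in range(K + 1)], 1)     # (M, K+1)
    return psi @ phi                                       # (T, K+1)

xs = np.array([-0.5, 0.1, 0.7])     # toy context set
ys = np.array([1.0, -0.3, 0.8])
t_grid = np.linspace(-2, 2, 401)    # uniform discretization of X
h = functional_embedding(xs, ys, t_grid)
density, signal = h[:, 0], h[:, 1]  # translating xs translates h exactly
```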
We also remark that continuity of Φ is not in the usual sense; we return to this below. The display above defines the encoder used by our proposed model, the CONVCNP. In Section 3.1, we present our theoretical results in more detail. In particular, Theorem 1 establishes equivalence between any function satisfying Properties 1 and 2 and the representational form given above. In doing so, we provide an extension of the key result on deep sets to functional representations of sets, and show that it can naturally be extended to handle varying-size sets. The practical implementation of CONVCNPs (the design of $\rho$, $\phi$, and $\psi$) is informed by our results in Section 3.1, as well as the proofs provided in Appendix A, and is discussed for domains of interest in Section 4. In this section we establish the theoretical foundation of the CONVCNP. We begin by stating a definition that is used in our main result. Definition 2 (Multiplicity). A collection $\mathcal{Z} \subseteq Z$ is said to have multiplicity $K$ if, for every set $Z \in \mathcal{Z}$, every $x$ occurs at most $K$ times. For example, in the case of real-world data like time series and images, we often observe only one (possibly multi-dimensional) observation per input location, which corresponds to multiplicity one. We are now ready to state our key theorem. Theorem 1. Let $\mathcal{Z}_{\leq M} \subseteq Z_{\leq M}$ be a collection of sets with multiplicity $K$ satisfying the conditions stated in Appendix A.3. Then a function $\Phi \colon \mathcal{Z}_{\leq M} \to C_b(\mathcal{X}, \mathcal{Y})$ is continuous on every $\mathcal{Z}_m$, permutation invariant (Property 1), and translation equivariant (Property 2) if and only if it has a representation of the form $\Phi(Z) = \rho(E(Z))$ with $E(Z) = \sum_{(x, y) \in Z} \phi(y)\, \psi(\cdot - x)$, for some continuous and translation-equivariant $\rho \colon H \to C_b(\mathcal{X}, \mathcal{Y})$ and some continuous $\phi \colon \mathcal{Y} \to \mathbb{R}^{K+1}$ and $\psi \colon \mathcal{X} \to \mathbb{R}$, where $H$ is an appropriate space of functions that includes the image of $E$. We call a function $\Phi$ of the above form a CONVDEEPSET. [Figure 1: (a) illustration of the CONVCNP forward pass in the off-the-grid case, with pseudo-code for (b) off-the-grid and (c) on-the-grid data. The function $\operatorname{pos} \colon \mathbb{R} \to (0, \infty)$ is used to enforce positivity; in the on-the-grid case, we discretize at the pixel locations.] The proof of Theorem 1 is provided in Appendix A. We here discuss several key points from the proof that have practical implications and provide insights for the design of CONVCNPs. (i) For the construction of $\rho$ and $E$, $\psi$ is set to a flexible positive-definite kernel associated with a Reproducing Kernel Hilbert Space (RKHS), which results in desirable properties for $E$. (ii) We set $\phi(y) = (y^0, y^1, \ldots, y^K)$ to be the powers of $y$ up to order $K$. (iii) Theorem 1 requires $\rho$ to be a powerful function approximator of continuous, translation-equivariant maps between functions. In Section 4, we discuss how these theoretical results inform our implementations of CONVCNPs. Theorem 1 extends the results on deep sets discussed in Section 2 by embedding the set into an infinite-dimensional space, the RKHS, instead of a finite-dimensional space. Beyond allowing the model to exhibit translation equivariance, the RKHS formalism allows us to naturally deal with finite sets of varying sizes, which turns out to be challenging with finite-dimensional embeddings. Furthermore, our formalism requires $\phi(y) = (y^0, y^1, y^2, \ldots, y^K)$ to expand only up to the multiplicity $K$ of the sets; if $K$ is bounded, then our results hold for sets up to any arbitrarily large finite size $M$, while fixing $\phi$ to be only $(K+1)$-dimensional. In this section we discuss the architectures and implementation details for CONVCNPs. Similar to NPs, CONVCNPs model the conditional distribution as $p(\mathbf{y} \mid \mathbf{x}, Z) = \prod_i p\big(y_i \mid \Phi(Z)(x_i)\big)$, where $Z$ is the observed data and $\Phi$ a CONVDEEPSET.
The key considerations are the design of $\rho$, $\phi$, and $\psi$ for $\Phi$. We provide separate models for data that lie on the grid and data that lie off the grid. Form of $\phi$. The applications considered in this work have a single (potentially multi-dimensional) output per input location, so the multiplicity of $Z$ is one (i.e., $K = 1$). It then suffices to let $\phi$ be a power series of order one, which is equivalent to appending a constant to $y$ in all data sets, i.e. $\phi(y) = [1\; y]$. The first output $\phi_1$ thus provides the model with information about where data has been observed, which is necessary to distinguish between no observed data point at $x$ and a data point at $x$ with $y = 0$. Denoting the functional representation by $h$, we can think of the first channel $h^{(0)}$ as a "density channel". We found it helpful to divide the remaining channels $h^{(1:)}$ by $h^{(0)}$ (Figure 1b, line 5), as this improved performance when there is large variation in the density of input locations. In the image-processing literature, this is known as normalized convolution. The normalization operation can be reversed by $\rho$ and is therefore not restrictive. CONVCNPs for off-the-grid data. Having specified $\phi$, it remains to specify the form of $\psi$ and $\rho$. Our proof of Theorem 1 suggests that $\psi$ should be a stationary, non-negative, positive-definite kernel; the exponentiated-quadratic (EQ) kernel with a learnable length-scale parameter is a natural choice. This kernel is multiplied by $\phi$ to form the functional representation $E(Z)$ (Figure 1b, line 4; Figure 1a, arrow 1). Next, Theorem 1 suggests that $\rho$ should be a continuous, translation-equivariant map between function spaces. In deep learning, any translation-equivariant model has a representation as a CNN. However, CNNs operate on discrete (on-the-grid) input spaces and produce discrete outputs. In order to approximate $\rho$ with a CNN, we discretize the input of $\rho$, apply the CNN, and finally transform the CNN output back to a continuous function $\mathcal{X} \to \mathcal{Y}$. To do this, for each context and target set, we space points $(t_i)_{i=1}^n \subseteq \mathcal{X}$ on a uniform grid (at a pre-specified density) over a hyper-cube that covers both the context and target inputs. We then evaluate $(E(Z)(t_i))_{i=1}^n$ and pass the result to the CNN. To map the output of the CNN back to a continuous function $\mathcal{X} \to \mathcal{Y}$, we use the CNN outputs as weights for evenly spaced basis functions (again employing the EQ kernel), which we denote by $\psi_\rho$ (Figure 1b, lines 7-8; Figure 1a, arrow 3). The resulting approximation to $\rho$ is not perfectly translation equivariant, but will be approximately so for length scales larger than the spacing of the $(t_i)_{i=1}^n$. The resulting continuous functions are then used to generate the (Gaussian) predictive mean and variance at any input, which in turn can be used to evaluate the log-likelihood. CONVCNP for on-the-grid data. While the CONVCNP is readily applicable to many settings where data live on a grid, in this work we focus on the image setting. As such, the following description uses the image-completion task as an example, which is often used to benchmark NPs. Compared to the off-the-grid case, the implementation becomes simpler, as we can choose the discretization $(t_i)_{i=1}^n$ to be the pixel locations. Let $I \in \mathbb{R}^{H \times W \times C}$ be an image, where $H$, $W$, $C$ denote the height, width, and number of channels respectively, and let $M_c$ be the context mask, which is such that $[M_c]_{i,j} = 1$ if pixel location $(i, j)$ is in the context set, and 0 otherwise. To implement $\phi$, we select all context points, $Z_c := M_c \odot I$, and prepend the context mask, giving $[M_c,\; M_c \odot I]$ (Figure 1c, line 4).
Next, we apply a convolution to the context mask to form the density channel, $\text{density} := \mathrm{CONV}_\theta(M_c)$ (Figure 1c, line 4). To all other channels we apply a normalized convolution, $\text{signal} := \mathrm{CONV}_\theta(M_c \odot I)\,/\,\text{density}$ (Figure 1c, line 5), where the division is element-wise. The filter of the convolution is analogous to $\psi$, which means that $h$ is the functional representation, with the convolution performing the role of $E$ (the summation in Figure 1b, line 4). Although the theory suggests using a non-negative, positive-definite kernel, we did not find significant empirical differences between an EQ kernel and a fully trainable kernel restricted to positive values to enforce non-negativity (see Appendices D.4 and D.5 for details). Lastly, we describe the on-the-grid version of $\rho(\cdot)$, which consists of two stages. First, we apply a CNN to $E(Z)$ (Figure 1c, line 6). Second, we apply a shared, pointwise MLP that maps the output of the CNN at each pixel location in the target set to $\mathbb{R}^{2C}$, where $C$ is the number of image channels; the MLP can be viewed as a $1 \times 1$ convolution and absorbed into the CNN. The first $C$ outputs are the means of a Gaussian predictive distribution and the second $C$ the standard deviations, which then pass through a positivity-enforcing function (Figure 1c). To summarise, the on-the-grid algorithm computes $(\mu, \sigma) = \rho(E(Z))$, where $(\mu, \sigma)$ are the predicted image mean and standard deviation, $\rho$ is implemented with the CNN, and $E$ is implemented with the mask $M_c$ and the convolution $\mathrm{CONV}_\theta$, which multiplies by $\psi$ and sums. Training. Denoting the data set $D = \{Z_n\}_{n=1}^N \subseteq Z$ and the parameters by $\theta$, maximum-likelihood training maximizes $\sum_{n=1}^N \log p(\mathbf{y}_{n,t} \mid Z_{n,c}, \theta)$, where we have split each $Z_n$ into a context set $Z_{n,c}$ and a target set $Z_{n,t}$ with outputs $\mathbf{y}_{n,t}$. This is standard practice in the NP and meta-learning settings and relates to neural autoregressive models. Note that the context set and target set are disjoint ($Z_{n,c} \cap Z_{n,t} = \emptyset$), which differs from the protocol used for the NP. Practically, stochastic gradient descent methods can be used for optimization. We evaluate the performance of CONVCNPs in both on-the-grid and off-the-grid settings, focusing on two central questions: (i) do translation-equivariant models improve performance in appropriate domains? (ii) can translation equivariance enable CONVCNPs to generalize to settings outside of those encountered during training? We use several off-the-grid data sets, which are irregularly sampled time series ($\mathcal{X} = \mathbb{R}$), comparing to Gaussian processes (GPs) and ATTNCNP (identical to the ANP, but without the latent path in the encoder), the best-performing member of the CNP family. We then evaluate on several on-the-grid image data sets ($\mathcal{X} = \mathbb{Z}^2$). In all settings we demonstrate substantial improvements over existing neural process models. For the CNN component of our model, we propose a small and a large architecture for each experiment (named CONVCNP and CONVCNPXL in the experimental sections, respectively). These architectures differ between off-the-grid and on-the-grid experiments; full details are given in the appendices. First we consider synthetic regression problems. At each iteration, a function is sampled, followed by context and target sets. Beyond EQ-kernel GPs (as proposed in Garnelo et al. (2018a)), we consider more complex data arising from Matérn-5/2 and weakly periodic kernels, as well as a challenging, non-Gaussian sawtooth process with random shift and frequency (see Figure 2 for an example). CONVCNP is compared to CNP (Garnelo et al., 2018a) and ATTNCNP. Training and testing procedures are fixed across all models.
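A compact sketch of this setup, pairing an EQ-kernel GP task sampler with the maximum-likelihood objective described above; the model object, the jitter constant, and the tensor shapes are placeholder assumptions, not the paper's exact code:

```python
import numpy as np
import torch

def sample_eq_task(x_range=(-2.0, 2.0), length_scale=1.0):
    """Sample one task: a function drawn from an EQ-kernel GP prior,
    split into disjoint context and target sets (sizes ~ U{3,...,50})."""
    n_ctx, n_tgt = np.random.randint(3, 51, size=2)
    x = np.random.uniform(*x_range, size=n_ctx + n_tgt)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    y = np.random.multivariate_normal(np.zeros(len(x)),
                                      K + 1e-6 * np.eye(len(x)))
    t = lambda a: torch.as_tensor(a, dtype=torch.float32)
    return t(x[:n_ctx]), t(y[:n_ctx]), t(x[n_ctx:]), t(y[n_ctx:])

def train_step(model, optimizer, n_tasks=16):
    """Maximize the log-likelihood of targets given the context set."""
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_tasks):
        x_c, y_c, x_t, y_t = sample_eq_task()
        mean, std = model(x_c, y_c, x_t)   # Gaussian predictive heads
        loss = loss - torch.distributions.Normal(mean, std).log_prob(y_t).sum()
    loss.backward()
    optimizer.step()
    return loss.item()
```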
Full details on models, data generation, and training procedures are provided in Appendix C.

Table 1: Log-likelihood means and standard errors over 1000 test tasks.

Model      | Params | EQ           | Weak Periodic | Matérn       | Sawtooth
CNP        | 66818  | -1.49 ± 3e-3 | -2.00 ± 2e-3  | -1.61 ± 1e-3 | -0.51 ± 1e-5
ATTNCNP    | 149250 |  1.08 ± 4e-3 | -2.01 ± 2e-3  |  0.42 ± 2e-3 | -0.50 ± 2e-3
CONVCNP    | 6537   |  1.03 ± 5e-3 | -1.52 ± 2e-3  |  0.51 ± 4e-3 |  4.38 ± 4e-3
CONVCNPXL  | 50617  |  1.90 ± 4e-3 | -1.19 ± 2e-3  |  0.81 ± 4e-3 |  6.01 ± 1e-3

Table 1 reports the log-likelihood means and standard errors of the models over 1000 tasks. The context and target points for both training and testing lie within the interval [-2, 2], where training data was observed (marked "training data range" in Figure 2). Table 1 demonstrates that, even when extrapolation is not required, CONVCNP significantly outperforms the other models in all cases, despite having fewer parameters. Figure 2 shows the predictions of CONVCNP and ATTNCNP when data is observed outside the range where the models were trained: translation equivariance enables CONVCNP to elegantly generalize to this setting, whereas ATTNCNP is unable to generate reasonable predictions. The PLAsTiCC data set is a simulation of transients observed by the LSST telescope under realistic observational conditions. The data set contains 3,500,734 "light curves", where each measurement is of an object's brightness as a function of time, obtained by measuring the photon flux in six different astronomical filters; the data can be treated as a six-dimensional time series. The data set was introduced in a Kaggle competition, where the task was to use these light curves to classify the variable sources. The winning entry modeled the light curves with GPs and used these models to generate features for a gradient-boosted decision-tree classifier. We compare a multi-input-multi-output CONVCNP with the GP models used in Avocado. CONVCNP accepts six channels as inputs, one for each astronomical filter, and returns 12 outputs: the means and standard deviations of six Gaussians. Full experimental details are given in Appendix C.3. The mean squared error of both approaches is similar, but the held-out log-likelihood of the CONVCNP is far higher (see Table 2).

Table 2: Mean and standard errors of log-likelihood and mean squared error over 1000 test objects from the PLAsTiCC dataset.

Model          | Log-likelihood | MSE
Kaggle GP      | -0.335 ± 0.09  | 0.037 ± 4e-3
ConvCNP (ours) |  1.31 ± 0.30   | 0.040 ± 5e-3

The CONVCNP model is well suited to applications where simulation data is plentiful but real-world training data is scarce (Sim2Real): the CONVCNP can be trained on a large amount of simulation data and then deployed with real-world training data as the context set. We consider the Lotka-Volterra model, which is used to describe the evolution of predator-prey populations. This model has been used in the Approximate Bayesian Computation literature, where the task is to infer the parameters from samples drawn from the Lotka-Volterra process; those methods do not simply extend to prediction problems such as interpolation or forecasting. In contrast, we train CONVCNP on synthetic data sampled from the Lotka-Volterra model and can then condition on real-world data from the Hudson's Bay lynx-hare data set to perform interpolation (see Figure 3; full experimental details are given in Appendix C.4). The CONVCNP performs accurate interpolation, as shown in Figure 3. We were unable to train the ATTNCNP successfully for this task. We suspect this is because the simulation data are variable-length time series, which requires models to leverage translation equivariance at training time.
As shown in Section 5.1, the ATTNCNP struggles to do this (see Appendix C.4 for complete details). To test CONVCNP beyond one-dimensional features, we evaluate our model on on-the-grid image-completion tasks and compare it to ATTNCNP. Image completion can be cast as predicting pixel intensities $y_i^*$ ($\in \mathbb{R}^3$ for RGB, $\in \mathbb{R}$ for greyscale) at a target 2D pixel location $x_i^*$, conditioned on an observed (context) set of pixel values $Z = \{(x_n, y_n)\}_{n=1}^N$. In the following experiments, the context set can vary, but the target set contains all pixels of the image. Further experimental details are in Appendix D.1. Standard benchmarks. We first evaluate the model on four common benchmarks: MNIST, SVHN, and 32 × 32 and 64 × 64 CelebA. Importantly, these data sets are biased towards images containing a single, well-centered object. As a result, perfect translation equivariance might hinder the performance of the model when the test data are similarly structured. We therefore also evaluated a larger CONVCNP that can learn such non-stationarity while still sharing parameters across the input space (CONVCNPXL). Table 3 shows that CONVCNP significantly outperforms ATTNCNP when it has a large receptive field size, while being at least as good with a small receptive field size. Qualitative samples for various context sets can be seen in Figure 5. Further qualitative comparisons and ablation studies can be found in Appendix D.3 and Appendix D.4, respectively. Generalization to multiple, non-centered objects. The data sets from the previous paragraphs were centered and contained single objects. Here we test whether CONVCNPs trained on such data can generalize to images containing multiple, non-centered objects. The last column of Table 3 evaluates the models in a zero-shot multi-MNIST (ZSMM) setting, where images contain multiple digits at test time (Appendix D.2). CONVCNP significantly outperforms ATTNCNP on such tasks. Figure 4a shows a histogram of the image log-likelihoods for CONVCNP and ATTNCNP, as well as qualitative results at different percentiles of the CONVCNP distribution. CONVCNP is able to extrapolate to this out-of-distribution test set, while ATTNCNP appears to model the bias of the training data and predicts a centered "mean" digit independently of the context. Interestingly, CONVCNPXL does not perform as well on this task. In particular, we find that, as the receptive field becomes very large, performance on this task decreases. We hypothesize that this has to do with the behaviour of the model at the edges of the image: CNNs with larger receptive fields (the region of input pixels that affects a particular output pixel) are able to model non-stationary behaviour by looking at the distance from any pixel to the image boundary. We expand on this discussion and provide further experimental evidence regarding the effects of the receptive field on the ZSMM task in Appendix D.6. Although ZSMM is a contrived task, note that our field of view usually contains multiple independent objects, thereby requiring translation equivariance. As a more realistic example, we took a CONVCNP model trained on CelebA and tested it on a natural image of a different shape which contains multiple people (Figure 4b). Even with 95% of the pixels removed, the CONVCNP was able to produce a qualitatively reasonable reconstruction. A comparison with ATTNCNP is given in Appendix D.3. Computational efficiency. Beyond the performance and generalization improvements, a key advantage of the CONVCNP is its computational efficiency.
The memory and time complexity of a single self-attention layer grows quadratically with the number of inputs $M$ (the number of pixels for images), but only linearly for a convolutional layer. Empirically, with a batch size of 16 on 32 × 32 MNIST, CONVCNPXL requires 945 MB of VRAM, while ATTNCNP requires 5839 MB. For 56 × 56 ZSMM, CONVCNPXL increases its requirements to 1443 MB, while ATTNCNP could not fit onto a 32 GB GPU; ultimately, ATTNCNP had to be trained with a batch size of 6 (using 19139 MB), and we were not able to fit it for CelebA64. Recently, restricted attention has been proposed to overcome this computational issue, but we leave an investigation of this and its relationship to CONVCNPs to future work. We have introduced CONVCNP, a new member of the CNP family that leverages embedding sets into function space to achieve translation equivariance. The relationship to (i) the NP family and (ii) representing functions on sets each imply extensions and avenues for future work. Deep sets. Two key issues in the existing theory on learning with sets are (i) the restriction to fixed-size sets, and (ii) that the dimensionality of the embedding space must be no less than the cardinality of the embedded sets. Our work implies that by considering appropriate embeddings into a function space, both issues are alleviated. In future work, we aim to further this analysis and formalize it in a more general context. Point-cloud models. Another line of related research focuses on 3D point-cloud modelling. While original work focused on permutation invariance, more recent work has considered translation equivariance as well, leading to a model closely resembling CONVDEEPSETS. Correlated samples and consistency under marginalization. In the predictive distribution of CONVCNP, the predicted ys are conditionally independent given the context set. Consequently, samples from the predictive distribution lack correlations and appear noisy. One solution is to instead define the predictive distribution in an autoregressive way, as in e.g. PixelCNN++. Although samples are then correlated, the quality of the samples depends on the order in which the points are sampled; moreover, the predicted ys are then not consistent under marginalization. Consistency under marginalization is more generally an issue for neural autoregressive models, although consistent variants have been devised. To overcome the consistency issue for CONVCNP, exchangeable neural process models may provide an interesting avenue. Another way to introduce dependencies between ys is to employ latent variables, as is done in neural processes. However, such an approach only achieves conditional consistency: given a context set, the predicted ys will be dependent and consistent under marginalization, but this does not lead to a consistent joint model that also includes the context set itself. [Figure 5: for each dataset, an image is randomly sampled; the first row shows the given context points, while the second shows the mean of the estimated conditional distribution. From left to right, the first columns correspond to context sets with 3, 1%, 5%, 10%, 20%, 30%, 50%, and 100% randomly sampled context points; in the last two columns, the context sets contain all pixels in the left and top halves of the image, respectively. CONVCNPXL is shown for all datasets besides ZSMM, for which we show the fully translation-equivariant CONVCNP.]
In this section, we provide the proof of Theorem 1. Our proof strategy is as follows. We first define an appropriate topology for fixed-sized sets (Appendix A.1). With this topology in place, we demonstrate that our proposed embedding into function space is homeomorphic (Lemmas 1 and 2). We then show that the embeddings of fixed-sized sets can be extended to varying-sized sets by "pasting" the embeddings together while maintaining their homeomorphic properties (Lemma 3). Following this, we demonstrate that the ing embedding may be composed with a continuous mapping to our desired target space, ing in a continuous mapping between two metric spaces (Lemma 4). Finally, in Appendix A.3 we combine the above-mentioned to prove Theorem 1. We begin with definitions that we will use throughout the section and then present our . Let X = R d and let Y ⊆ R be compact. Let ψ be a symmetric, positive-definite kernel on X. By the Moore-Aronszajn Theorem, there is a unique Hilbert space (H, ·, · H) of real-valued functions on X for which ψ is a reproducing kernel. This means that (i) ψ(·, x) ∈ H for all x ∈ X and (ii) f, ψ(·, x) H = f (x) for all f ∈ H and x ∈ X (reproducing property). For ψ: X × X → R, X = (x 1, . . ., x n) ∈ X n, and X = (x 1, . . ., x n) ∈ X n, we denote Definition 3 (Interpolating RKHS). Call H interpolating if it interpolates any finite number of points: for every ((x i, y i)) For example, the RKHS induced by any strictly positive-definite kernel, e.g. the exponentiated quadratic (EQ) kernel ψ(x, A.1 THE QUOTIENT SPACE A n / S n Let A be a Banach space. For x = (x 1, . . ., x n) ∈ A n and y = (y 1, . . ., y n) ∈ A n, let x ∼ y if x is a permutation of y; that is, x ∼ y if and only if x = πy for some π ∈ S n where πy = (y π,..., y π(n) ). Let A n / S n be the collection of equivalence classes of ∼. Denote the equivalence class of Proof. We first show that d is well defined on A n / S n. Assume x ∼ x and y ∼ y. Then, x = π x x and y = π y y. Using the group properties of S n and the permutation invariance of · A n: To show the triangle inequality, note that using permutation invariance of · A n. Hence, taking the minimum over π 1, so taking the minimum over π 2 gives the triangle inequality for d. Proposition 2. The canonical map A n → A n / S n is continuous under the metric topology induced by d. Proposition 3. Let A ⊆ A n be topologically closed and closed under permutations. Then [A] is topologically closed in A n / S n under the metric topology. Then there are permutations (π n) ∞ n=1 ⊆ S n such that π n a n → x. Here π n a n ∈ A, because A is closed under permutations. Thus x ∈ A, as A is also topologically closed. We conclude that Proposition 5. The quotient topology on A n / S n induced by the canonical map is metrizable with the metric d. Proof. Since the canonical map is surjective, there exists exactly one topology on A n / S n relative to which the canonical map is a quotient map: the quotient topology . Let p: A n → A n / S n denote the canonical map. It remains to show that p is a quotient map under the metric topology induced by d; that is, we show that U ⊂ A n / S n is open in A n / S n under the metric topology if and only if p Whereas A previously denoted an arbitrary Banach space, in this section we specialize to A = X × Y. We denote an element in A by (x, y) and an element in Z M = A M by ((x 1, y 1),..., (x M, y M)). Alternatively, we denote ((x 1, y 1),..., (x M, y M)) by (X, y) where X = (x 1, . . ., x M) ∈ X M and y = (y 1, . . ., y M) ∈ Y M. 
We clarify that an element in Z M = A M is permuted as follows: for π ∈ S M, π(X, y) = π ((x 1, y 1),..., (x M, y M)) = ((x π, y π ),..., (x π(n), y π(n) )) = (πX, πy). Note that permutation-invariant functions on Z M are in correspondence to functions on the quotient space induced by the equivalence class of permutations, Z M / S m The latter is a more natural representation. Lemma 3 states that it is possible to homeomorphically embed sets into an RKHS. This is key to proving our main . Before proving Lemma 3, we provide several useful . We begin by demonstrating that an embedding of sets of a fixed size into a RKHS is continuous and injective. Lemma 1. Consider a collection Z M ⊆ Z M that has multiplicity K. Set and let ψ be an interpolating, continuous positive-definite kernel. Define where H K+1 = H × · · · × H is the (K + 1)-dimensional-vector-valued-function Hilbert space constructed from the RKHS H for which ψ is a reproducing kernel and endowed with the inner product f, g is injective, hence invertible, and continuous. Proof. First, we show that E M is injective. Suppose that Denote X = (x 1, . . ., x M) and y = (y 1, . . ., y M), and denote X and y similarly. Taking the inner product with any f ∈ H on both sides and using the reproducing property of ψ, this implies that for all f ∈ H. In particular, since by construction φ 1 (·) = 1, for all f ∈ H. Using that H is interpolating, choose a particularx ∈ X ∪ X, and let f ∈ H be such that f (x) = 1 and f (·) = 0 at all other x i and x i. Then so the number of suchx in X and the number of suchx in X are the same. Since this holds for everyx, X is a permutation of X: X = π(X) for some permutation π ∈ S M. Plugging in the permutation, we can write Then, by a similar argument, for any particularx, Let the number of terms in each sum equal S. Since Z M has multiplicity K, S ≤ K. By Lemma 4 from , the'sum-of-power mapping' from {y i : x i =x} to the first S + 1 elements of i:xi=x φ(y i), i.e. i:xi=x y 0 i,..., i:xi=x y S i, is injective. Therefore, (y i) i:xi=x is a permutation of (y π(i) ) i:xi=x. Note that x i =x for all above y i. Furthermore, note that also x π(i) = x i =x for all above y π(i). We may therefore adjust the permutation π such that y i = y π(i) for all i such that x i =x whilst retaining that x = π(x). Performing this adjustment for allx, we find that y = π(y) and x = π(x). Second, we show that E M is continuous. Compute Having established the injection, we now show that this mapping is a homeomorphism, i.e. that the inverse is continuous. This is formalized in the following lemma. Lemma 2. Consider Lemma 1. Suppose that Z M is also topologically closed in A M and closed under permutations, and that ψ also satisfies (i) M is continuous. Remark 1. To define Z 2 with multiplicity one, one might be tempted to define which indeed has multiplicity one. Unfortunately, Z 2 is not closed: if ⊆ X and ⊆ Y, then (, (1/n, 2)) ∞ n=1 ⊆ Z 2, but (, (1/n, 2)) → (,) / ∈ Z 2, because 0 then has two observations 1 and 2. To get around this issue, one can require an arbitrarily small, but non-zero spacing > 0 between input locations: This construction can be generalized to higher numbers of observations and multiplicities as follows: Remark 2. Before moving on to the proof of Lemma 2, we remark that Lemma 2 would directly follow if Z M were bounded: then Z M is compact, so E M is a continuous, invertible map between a compact space and a Hausdorff space, which means that E −1 M must be continuous. 
The intuition that the must hold for unbounded Z M is as follows. Since φ 1 (·) = 1, for every f ∈ H M, f 1 is a summation of M "bumps" (imagine the EQ kernel) of the form ψ(·, x i) placed throughout X. If one of these bumps goes off to infinity, then the function cannot uniformly converge pointwise, which means that the function cannot converge in H (if ψ is sufficiently nice). Therefore, if the function does converge in H, (x i) M i=1 must be bounded, which brings us to the compact case. What makes this work is the density channel φ 1 (·) = 1, which forces (x i) M i=1 to be well behaved. The above argument is formalized in the proof of Lemma 2. First, we demonstrate that, assuming the claim, H M is closed. Note that by boundedness of is compact and therefore closed, since every compact subset of a metric space is closed. Therefore, the image of E M | [Z J] contains the limit f. Since the image of E M | [Z J] is included in H M, we have that f ∈ H M, which shows that H M is closed. Next, we prove that, assuming the claim, E −1 −1 is continuous, because a continuous bijection from a compact space to a metric space is a homeomorphism. Therefore M is continuous. It remains to show the claim. Let f 1 denote the first element of f, i.e. the density channel. Using the reproducing property of ψ, → f 1 in H means that it does so uniformly pointwise (over x). Hence, we can let N ∈ N be such that n ≥ N implies that |f At the same time, by pointwise non-negativity of ψ, we have that Towards contradiction, suppose that (which is a contradiction. The following lemma states that we may construct an encoding for sets containing no more than M elements into a function space, where the encoding is injective and every restriction to a fixed set size is a homeomorphism. and let ψ be an interpolating, continuous positive-definite kernel that satisfies where H K+1 = H × · · · × H is the (K + 1)-dimensional-vector-valued-function Hilbert space constructed from the RKHS H for which ψ is a reproducing kernel and endowed with the inner product f, g is injective, hence invertible. Denote this inverse by E −1, where Proof. Recall that E m is injective for every m ∈ [M]. Hence, to demonstrate that E is injective it remains to show that (H m) M m=1 are pairwise disjoint. To this end, suppose that for m = m. Then, by arguments like in the proof of Lemma 1, Since φ 1 (·) = 1, this gives m = m, which is a contradiction. Finally, by repeated application of Lemma 2, E −1, the space of continuous bounded functions from X to Y, such that every restriction Φ| [Z m] is continuous, and let E be from Lemma 3. Then is continuous. is continuous. From here on, we let ψ be a stationary kernel, which means that it only depends on the difference of its arguments and can be seen as a function X → R. With the above in place, we are finally ready to prove our central , Theorem 1. is closed under permutations, and (iv) is closed under translations. Set and let ψ be an interpolating, continuous positive-definite kernel that satisfies where H K+1 = H × · · · × H is the (K + 1)-dimensional-vector-valued-function Hilbert space constructed from the RKHS H for which ψ is a reproducing kernel and endowed with the inner product f, g Then a function Φ: Z ≤M → C b (X, Y) satisfies (i) continuity of the restriction Φ| Zm for every m ∈ [M], (ii) permutation invariance (Property 1), and (iii) translation equivariance (Property 2) if and only if it has a representation of the form is continuous and translation equivariant. Proof of sufficiency. 
To begin with, note that permutation invariance (Property 1) and translation equivariance (Property 2) for Φ are well defined, because Z ≤M is closed under permutations and translations by assumption. First, Φ is permutation invariant, because addition is commutative and associative. Second, that Φ is translation equivariant (Property 2) follows from a direct verification and that ρ is also translation equivariant: Proof of necessity. Our proof follows the strategy used by;. To begin with, since Φ is permutation invariant (Property 1), we may define for which we verify that every restriction Φ| [Z m] is continuous. By invertibility of E from Lemma 3, we have is translation equivariant, because ψ is stationary. Also, by assumption Φ is translation equivariant (Property 2). Thus, their composition ρ is also translation equivariant. Remark 3. The function ρ: H ≤M → C b (X, Y) may be continuously extended to the entirety of H K+1 using a generalisation of the Tietze Extension Theorem by. There are variants of Dugundji's Theorem that also preserve translation equivariance. In both our 1d and image experiments, our main comparison is to conditional neural process models. In particular, we compare to a vanilla CNP (1d only; Garnelo et al. (2018a) ) and an ATTNCNP . Our architectures largely follow the details given in the relevant publications. CNP baseline. Our baseline CNP follows the implementation provided by the authors. 6 The encoder is a 3-layer MLP with 128 hidden units in each layer, and RELU non-linearities. The encoder embeds every context point into a representation, and the representations are then averaged across each context set. Target inputs are then concatenated with the latent representations, and passed to the decoder. The decoder follows the same architecture, outputting mean and standard deviation channels for each input. Attentive CNP baseline. The ATTNCNP we use corresponds to the deterministic path of the model described by for image experiments. Namely, an encoder first embeds each context point c to a latent representation (x (c), xy ∈ R 128. For the image experiments, this is achieved using a 2-hidden layer MLP of hidden dimensions 128. For the 1d experiments, we use the same encoder as the CNP above. Every context point then goes through two stacked self-attention layers. Each self-attention layer is implemented with an 8-headed attention, a skip connection, and two layer normalizations (as described in , modulo the dropout layer). To predict values at each target point t, we embed x (t) → r (t) x and x (c) → r (c) x using the same single hidden layer MLP of dimensions 128. A target representation r (t) xy is then estimated by applying cross-attention (using an 8-headed attention described above) with keys K:= {r, and query q := r (t) x. Given the estimated target representationr xy, the conditional predictive posterior is given by a Gaussian pdf with diagonal covariance parametrised by pre ∈ R 3 and decoder is a 4 hidden layer MLP with 64 hidden units per layer for the images, and the same decoder as the CNP for the 1d experiments. , we enforce we set a minimum standard deviation σ (t) min = [0.1; 0.1; 0.1] to avoid infinite log-likelihoods by using the following post-processed standard deviation: In this section, we give details regarding our experiments for the 1d data. We begin by detailing model architectures, and then provide details for the data generating processes and training procedures. 
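Before the 1d experiment details, a minimal sketch of the CNP baseline described in Appendix B above, for scalar inputs and outputs; the layer sizes follow the text, and the minimum standard deviation of 0.1 mirrors the post-processing described for the ATTNCNP baseline:

```python
import torch
import torch.nn as nn

class CNPBaseline(nn.Module):
    """Deep-sets CNP: encode (x, y) pairs, mean-pool, decode targets."""

    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                 # mean and pre-std

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Embed every context point, then average over the set.
        r = self.encoder(torch.stack([x_ctx, y_ctx], -1)).mean(0)
        # Concatenate the set representation with each target input.
        inp = torch.cat([x_tgt[:, None], r.expand(len(x_tgt), -1)], -1)
        out = self.decoder(inp)
        mean = out[:, 0]
        std = nn.functional.softplus(out[:, 1]) + 0.1   # sigma_min = 0.1
        return mean, std
```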
The density at which we evaluate the grid differs from experiment to experiment, so the values are given in the relevant subsections. In all experiments, the weights are optimized using Adam, and weight decay of $10^{-5}$ is applied to all model parameters. The learning rates are specified in the following subsections. Throughout the experiments (Sections 5.1 to 5.3), we consider two models: CONVCNP (which utilizes a smaller architecture) and CONVCNPXL (with a larger architecture). For all architectures, the input kernel $\psi$ was an EQ (exponentiated-quadratic) kernel with a learnable length-scale parameter, as detailed in Section 4, as was the kernel for the final output layer $\psi_\rho$. When dividing by the density channel, we add $\varepsilon = 10^{-8}$ to avoid numerical issues. The length scales of the EQ kernels are initialized to twice the spacing $1/\gamma^{1/d}$ between the discretization points $(t_i)$, where $\gamma$ is the density of these points and $d$ is the dimensionality of the input space $\mathcal{X}$. Moreover, we emphasize that the size of the receptive field is a product of the width of the CNN filters and the spacing between the discretization points: for a fixed CNN kernel width, as the number of discretization points increases, the receptive field decreases. One potential improvement that was not employed in our experiments is the use of depthwise-separable convolutions, which dramatically reduce the number of parameters in a convolutional layer and can be used to increase the CNN filter widths, thus allowing one to increase the number of discretization points without reducing the receptive field. The architectures for CONVCNP and CONVCNPXL are described below. CONVCNP. For the 1d experiments, we use a simple 4-layer convolutional architecture with RELU nonlinearities. The kernel size of the convolutional layers was chosen to be 5, and all layers employ a stride of 1 and zero padding of 2 units. The final channels are processed by the final, EQ-based layer of $\rho$ as mean and standard-deviation channels; we employ a SOFTPLUS nonlinearity on the standard-deviation channel to enforce positivity. This model has 6,537 parameters. CONVCNPXL. Our large architecture takes inspiration from the UNet. We employ a 12-layer architecture with skip connections, where the number of channels is doubled every layer for the first 6 layers and halved every layer for the final 6 layers. We use concatenation for the skip connections, where $L_i \leftarrow [L_j, L_k]$ denotes that the input to layer $i$ is the concatenation of the activations of layers $j$ and $k$. Like the smaller architecture, we use RELU nonlinearities, kernels of size 5, stride 1, and zero padding of two units on all layers. [Figure: predictions of the CNP when trained on an EQ kernel (with length-scale parameter 1). "True function" refers to the sample from the GP prior from which the context and target sets were sub-sampled; "ground-truth GP" refers to the GP posterior distribution when using the exact kernel and performing posterior inference based on the context set. The left column shows the predictive posterior of the models when data is presented in the same range as training; the centre column shows the model predicting outside the training data range when no data is observed there; the right-most column shows the model predictive posteriors when presented with data outside the training data range.] The kernels used for the Gaussian processes which generate the data in this experiment are defined as follows:
• EQ: $k(x, x') = \exp\!\big(-(x - x')^2 / (2\ell^2)\big)$ with length scale $\ell = 1$.
The right-most column shows the model predictive posteriors when presented with data outside the training data range. • weakly periodic: with f 1 (x) = cos(8πx) and f 2 (x) = sin(8πx), and with During the training procedure, the number of context points and target points for a training batch are each selected randomly from a uniform distribution over the integers between 3 and 50. This number of context and target points are randomly sampled from a function sampled from the process (a Gaussian process with one of the above kernels or the sawtooth process), where input locations are uniformly sampled from the interval [−2, 2]. All models in this experiment were trained for 200 epochs using 256 batches per epoch of batch size 16. We discretize E(Z) by evaluating 64 points per unit in this setting. We use a learning rate of 3e−4 for all models, except for CONVCNPXL on the sawtooth data, where we use a learning rate of 1e−3 (this learning rate was too large for the other models). The random sawtooth samples are generated from the following function: where A is the amplitude, f is the frequency, and t is "time". Throughout training, we fix the amplitude to be one. We truncate the series at an integer K. At every iteration, we sample a frequency uniformly in, K in, and a random shift in [−5, 5]. As the task is much harder, we sample context and target set sizes over. Here the CNP and ATTNCNP employ learning rates of 10 −3. All other hyperparameters remain unchanged. The CONVCNP was trained for 200 epochs using 1024 batches of batch size 4 per epoch. For training and testing, the number of context points for a batch are each selected randomly from a uniform distribution over the integers between 1 and the number of points available in the series (usually between 10-30 per bandwidth). The remaining points in the series are used as the target set. For testing, a batch size of 1 was used and statistics were computed over 1000 evaluations. We compare CONVCNP to the GP models used in using the implementation in https:// github.com/kboone/avocado. The data used for training and testing is normalized according to t(v) = (v − m)/s with the values in Table 4. These values are estimated from a batch sampled from the training data. To remove outliers in the GP , log-likelihood values less than −10 are removed from the evaluation. These same datapoints were removed from the CONVCNP as well. For this dataset, we only used the CONVCNPXL, as we found the CONVCNP to underfit. The learning rate was set to 10 −3, and we discretize E(Z) by evaluating 256 points per unit. We describe the way simulated training data for the experiment in Section 5.3 was generated from the Lotka-Volterra model. The description is borrowed from . Let X be the number of predators and Y the number of prey at any point in our simulation. According to the model, one of the following four events can occur: A: A single predator is born according to rate θ 1 XY, increasing X by one. B: A single predator dies according to rate θ 2 X, decreasing X by one. C: A single prey is born according to rate θ 3 Y, increasing Y by one. A single prey dies (is eaten) according to rate θ 4 XY, decreasing Y by one. The parameter values θ 1, θ 2, θ 3, and θ 4, as well as the initial values of X and Y govern the behavior of the simulation. We choose θ 1 = 0.01, θ 2 = 0.5, θ 3 = 1, and θ 4 = 0.01, which are also used in and generate reasonable time series. 
Note that these are likely not the parameter values that would be estimated from the Hudson's Bay lynx-hare data set, but they are used because they yield reasonably oscillating time series; obtaining oscillating time series from the simulation is sensitive to the choice of parameters, and many parametrizations result in populations that simply die out. Time series are simulated using Gillespie's algorithm: 1. Draw the time to the next event from an exponential distribution with rate equal to the total rate $\theta_1 XY + \theta_2 X + \theta_3 Y + \theta_4 XY$. 2. Select one of the above events A, B, C, or D at random with probability proportional to its rate. 3. Adjust the appropriate population according to the selected event, and go to 1. The simulations using these parameter settings can yield a maximum population of approximately 300, while the context set in the lynx-hare data set has an approximate maximum population of about 80, so we scaled our simulated population by a factor of 2/7. We also remove time series which are longer than 100 units of time, which have more than 10000 events, or where one of the populations is entirely zero. The number of context points $n$ for a training batch is selected randomly from a uniform distribution between 3 and 80, and the number of target points is $150 - n$; these target and context points are then sampled from the simulated series. The Hudson's Bay lynx-hare data set has time values that range from 1845 to 1935; however, the values supplied to the model range from 0 to 90 to remain consistent with the simulated data. For evaluation, an interval of 18 points is removed from the Hudson's Bay lynx-hare data set to act as a target set, while the remaining 72 points act as the context set. This construction highlights the model's interpolation as well as its uncertainty in the presence of missing data. Models in this setting were trained for 200 epochs with 256 batches per epoch, each batch containing 50 tasks. For this data set, we only used the CONVCNP, as we found the CONVCNPXL to overfit. The learning rate was set to $10^{-3}$, and we discretize $E(Z)$ by evaluating 100 points per unit. We attempted to train an ATTNCNP for comparison, but due to the nature of the synthetic data generation, many of the training series end before 90 time units, the length of the Hudson's Bay lynx-hare series. Effectively, this means that the ATTNCNP was asked to predict outside of its training interval, a task it struggles with, as shown in Section 5.1. The plots in Figure 9 show that the ATTNCNP is able to learn the first part of the time series but is unable to model data outside of the first 20 or so time units. Perhaps with more capacity and training epochs the ATTNCNP training would be more successful. Note from Figure 3 that our model does better on the synthetic data than on the real data. This could be due to the parameters of the Lotka-Volterra model used being a poor estimate for the real data. Training details. In all experiments, we sample the number of context points uniformly from $U(n_{\text{total}}/100,\; n_{\text{total}}/2)$, and the number of target points is set to $n_{\text{total}}$. The context and target points are sampled randomly from each of the 16 images per batch. The weights are optimised using Adam with learning rate $5 \times 10^{-4}$. We use a maximum of 100 epochs, with early stopping at a patience of 15 epochs. All pixel values are divided by 255 to rescale them to the range [0, 1].
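A sketch of the context-mask sampling in the training details above, assuming the $U(n_{\text{total}}/100, n_{\text{total}}/2)$ reading of the sampling range; the batch size of 16 follows the text:

```python
import torch

def sample_context_masks(height, width, batch=16):
    """Random per-image context masks for image-completion training."""
    n_total = height * width
    masks = torch.zeros(batch, 1, height, width)
    for b in range(batch):
        # Number of context pixels ~ U(n_total/100, n_total/2).
        n_ctx = torch.randint(n_total // 100, n_total // 2 + 1, (1,)).item()
        idx = torch.randperm(n_total)[:n_ctx]
        masks[b].view(-1)[idx] = 1.0
    return masks  # the target mask is all ones: every pixel is a target

mask_c = sample_context_masks(32, 32)  # e.g. for 32x32 MNIST or CelebA32
```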
In the following discussion, we assume that images are RGB, but very similar models can be used for greyscale images or other gridded inputs (e.g. 1d time series sampled at uniform intervals). Proposed convolutional CNP. Unlike ATTNCNP and the off-the-grid CONVCNP, the on-the-grid CONVCNP takes advantage of the gridded structure: the target and context points can be specified in terms of the image $I$, a context mask $M_c$, and a target mask $M_t$, instead of sets of input-value pairs. Although this is an equivalent formulation, it is more natural and simpler to implement in standard deep-learning libraries. In the following, we dissect the architecture and algorithmic steps succinctly summarized in Section 4; a code sketch of steps 1 and 2 is given at the end of this subsection. Note that all convolutional layers are in fact depthwise separable; this enables a large kernel size (i.e. receptive field) while remaining parameter- and computation-efficient.
1. Let $I$ denote the image. Select all context points, $\text{signal} := M_c \odot I$, and append a density channel, $\text{density} := M_c$, which intuitively says "there is a point at this position": $[\text{signal}, \text{density}]$. Each pixel value now has 4 channels: 3 RGB channels and 1 density channel $M_c$. Note that the mask sets the pixel value to 0 at any location where the density channel is 0, indicating that there are no points at this position (a missing value).
2. Apply a convolution to the density channel, $\text{density} := \mathrm{CONV}_\theta(\text{density})$, and a normalized convolution to the signal, $\text{signal} := \mathrm{CONV}_\theta(\text{signal})/\text{density}$. The normalized convolution makes sure that the output depends mostly on the scale of the signal rather than on the number of observed points. The output channel size is 128. The kernel size of $\mathrm{CONV}_\theta$ depends on the image shape and model used (Table 5). We also enforce element-wise positivity of the trainable filter by taking the absolute value of the kernel weights $\theta$ before applying the convolution. As discussed in Appendix D.4, the normalization and positivity constraints do not empirically lead to improvements for on-the-grid data. Note that in this setting, $E(Z)$ is $[\text{signal}, \text{density}]$.
3. We now describe the on-the-grid version of $\rho(\cdot)$, which we decompose into two stages. In the first stage, we apply a CNN to $[\text{signal}, \text{density}]$. This CNN is composed of residual blocks, each consisting of 2 convolutional layers with ReLU activations and no batch normalization. The number of output channels in each layer is 128. The kernel size is the same across the whole network but depends on the image shape and model used (Table 5).
4. In the second stage of $\rho(\cdot)$, we apply a shared pointwise MLP $\mathbb{R}^{128} \to \mathbb{R}^{2C}$ (with the same architecture as used for the ATTNCNP decoder) to the output of the first stage at each pixel location in the target set, where $C$ denotes the number of channels in the image. The first $C$ outputs of the MLP are treated as the means of a Gaussian predictive distribution, and the last $C$ outputs are treated as the standard deviations, which then pass through the positivity-enforcing post-processing given in Appendix B.
[Figure 10: samples from our generated Zero-Shot Multi-MNIST (ZSMM) data set.] In the real world, it is very common to have multiple objects in our field of view which do not interact with each other. Yet many image data sets in machine learning contain only a single, well-centered object. To evaluate the translation equivariance and generalization capabilities of our model, we introduce the zero-shot multi-MNIST setting.
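A minimal sketch of steps 1 and 2 of the on-the-grid model described above (the masked input with a density channel, the positivity-constrained convolution, and the normalization); for brevity the filters here are dense rather than depthwise separable, and the kernel size is a placeholder (the paper's values vary per dataset, cf. Table 5):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnGridSetConv(nn.Module):
    """First ConvCNP layer: density channel plus normalized convolution."""

    def __init__(self, in_channels=3, channels=128, kernel_size=9):
        super().__init__()
        self.pad = kernel_size // 2
        # One filter bank for the density, one for the signal channels.
        self.w_density = nn.Parameter(
            torch.randn(channels, 1, kernel_size, kernel_size) * 0.1)
        self.w_signal = nn.Parameter(
            torch.randn(channels, in_channels, kernel_size, kernel_size) * 0.1)

    def forward(self, image, mask_c):
        signal = image * mask_c                      # masked context pixels
        # Positivity-constrained filters: absolute value of the weights.
        density = F.conv2d(mask_c, self.w_density.abs(), padding=self.pad)
        smoothed = F.conv2d(signal, self.w_signal.abs(), padding=self.pad)
        # Normalized convolution: output scale tracks the signal rather
        # than the number of observed points.
        return torch.cat([smoothed / (density + 1e-8), density], dim=1)

layer = OnGridSetConv()
img = torch.rand(16, 3, 32, 32)
mask = (torch.rand(16, 1, 32, 32) < 0.3).float()
h = layer(img, mask)   # (16, 256, 32, 32): normalized signal + density
```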
The training set contains all 60000 28 × 28 MNIST training digits centered on a black 56 × 56 background (Figure 10a). For the test set, we randomly sample with replacement 10000 pairs of digits from the MNIST test set, place them on a black 56 × 56 background, and translate the digits in such a way that the digits can be arbitrarily close but cannot overlap (Figure 10b). Importantly, the scale of the digits and the image size are the same during training and testing. A code sketch of this construction appears at the end of this appendix.
D.3 ATTNCNP AND CONVCNP QUALITATIVE COMPARISON
Figure 11 shows the test log-likelihood distributions of an ATTNCNP and CONVCNP model as well as some qualitative comparisons between the two. Although most mean predictions of both models look relatively similar for SVHN and CelebA32, the real advantage of CONVCNP becomes apparent when testing the generalization capacity of both models. Figure 12 shows CONVCNP and ATTNCNP trained on CelebA32 and tested on a downscaled version of Ellen's famous Oscar selfie. We see that CONVCNP generalizes better in this setting. To understand the importance of the different components of the first layer, we performed an ablation study by removing the density normalization (CONVCNP no norm.), removing the density channel (CONVCNP no dens.), removing the positivity constraints (CONVCNP no abs.), removing the positivity constraints and the normalization (CONVCNP no abs. norm.), and replacing the fully trainable first layer by an EQ kernel similar to the continuous case (CONVCNP EQ). Table 6 shows the following: (i) Appending a density channel helps. (ii) Enforcing the positivity constraint is only important when using a normalized convolution. (iii) Using a less expressive EQ filter does not significantly decrease performance, suggesting that the model might be learning similar filters (Appendix D.5).
Figure 13: First filter learned by CONVCNPXL, CONVCNP, and CONVCNP EQ for all our datasets. In the case of RGB images, the plotted filters are for the first channel (red). Note that not all filters are of the same size.
As discussed in Appendix D.4, using a less expressive EQ filter does not significantly decrease performance. Figure 13 shows that this happens because the fully trainable kernel learns to approximate the EQ filter. As seen in Table 3, a CONVCNPXL with a large receptive field performs significantly worse on the ZSMM task than CONVCNP, which has a smaller receptive field. Figure 14 shows a more detailed comparison of the models, and suggests that CONVCNPXL learns to model non-stationary behaviour, namely that digits in the training set are centred. We hypothesize that this issue stems from the treatment of the image boundaries. Indeed, if the receptive field is large enough and the padding values are significantly different from the inputs to each convolutional layer, the model can learn position-dependent behaviour by "looking" at the distance from the padded boundaries. For ZSMM, Figure 15 suggests that "circular" padding, where the padding is implied by tiling the image, helps prevent the model from learning non-stationarities, even as the size of the receptive field becomes larger. We hypothesize that this is due to the fact that "circularly" padded values are harder to distinguish from actual values than zeros. We have not tested the effect of padding on other datasets, and note that "circular" padding could result in other issues.
Figure 15: Effect of the receptive field size on ZSMM's log-likelihood. The line plot shows the mean and standard deviation over 6 runs.
The blue curve corresponds to a model with zero padding, while the orange one corresponds to "circular" padding.
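Returning to the ZSMM construction described earlier, a minimal sketch of the test-set generation might look as follows; the rejection-sampling scheme for non-overlapping placement is our assumption about how "arbitrarily close but cannot overlap" is enforced.

```python
import numpy as np

def make_zsmm_pair(digits, canvas=56, rng=None):
    """Place two 28x28 MNIST digits on a black canvas without overlap.

    digits: iterable of two (28, 28) float arrays.
    """
    rng = rng or np.random.default_rng()
    img = np.zeros((canvas, canvas), dtype=np.float32)
    positions = []
    for d in digits:
        while True:
            top = rng.integers(0, canvas - 28 + 1)
            left = rng.integers(0, canvas - 28 + 1)
            # Reject placements whose 28x28 bounding boxes overlap; boxes
            # may still touch, so digits can be arbitrarily close.
            if all(abs(top - t) >= 28 or abs(left - l) >= 28
                   for t, l in positions):
                break
        positions.append((top, left))
        img[top:top + 28, left:left + 28] = np.maximum(
            img[top:top + 28, left:left + 28], d)
    return img
```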
We extend deep sets to functional embeddings and Neural Processes to include translation-equivariant members.
1,478
scitldr
Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity. Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex. The mammalian retina processes image information using a number of distinct parallel channels consisting of functionally, anatomically, and transcriptomically defined distinct cell types. In the mouse, there are 14 types of bipolar cells BID8, which provide input to 30-50 types of ganglion cells BID2 BID23. In visual cortex, in contrast, it is currently unknown whether excitatory neurons are similarly organized into functionally distinct cell types. A functional classification of V1 neurons would greatly facilitate understanding its computations, just like it has for the retina, because we could focus our efforts on identifying the function of a small number of cell types instead of characterizing thousands of anonymous neurons. Recent work proposed a framework for learning functional cell types from data in an unsupervised fashion while optimizing predictive performance of a model that employs a common feature space shared among many neurons BID16. The key insight in this work is that all neurons that perform the same computation but have their receptive fields at different locations can be represented by a feature map in a convolutional network. Unfortunately, this approach cannot be applied directly to neocortical areas. Neurons in area V1 extract local oriented features such as edges at different orientations, and most image features can appear at arbitrary orientations, just like they can appear at arbitrary locations. Thus, to define functional cell types in V1, we would like to treat orientation as a nuisance parameter (like receptive field location) and learn features independent of orientation. In the present paper, we work towards this goal. While we do not answer the biological question whether there are indeed well-defined clusters of functional cell types in V1, we provide the technical foundation by extending the work of Klindt and colleagues BID16 and introducing a rotation-equivariant convolutional neural network model of V1. We train this model directly on the responses of 6000 mouse V1 neurons to learn a shared feature space, whose features are independent of orientation.
We show that this model outperforms state-of-the-art CNNs for system identification and allows predicting V1 responses of thousands of neurons with only 16 learned features. Moreover, for most neurons, pooling from only a small number of features is sufficient for accurate predictions. Functional classification of cell types. Characterizing neurons according to some identified, potentially nonlinear response properties has a long history. For instance, researchers have identified simple and complex cells BID12, pattern- and component-selective cells BID10, end-stopping cells BID24, gain control mechanisms BID5 and many more. However, these properties have been identified using simple stimuli, and the models that have been built for them do not apply to natural stimuli in a straightforward way. In addition, most of these studies were done using single-cell electrophysiology, which does not sample cells in an unbiased fashion. Thus, we currently do not know how important certain known features of V1 computation are under natural stimulation conditions and what fraction of neurons express them. Recent work using two-photon imaging in the monkey BID26 started closing this gap by systematically investigating response properties of V1 neurons to an array of complex stimuli and found, for instance, that about half of the neurons in V1 are selective to complex patterns such as curvature, corners and junctions rather than just oriented line segments. Recent work in the retina showed that different types of ganglion cells BID2 (and to some extent also bipolar cells BID9) can be identified based on functional properties. Thus, in the retina there is a relatively clear correspondence of anatomical and genetic cell types and their functional output, and we are getting closer to understanding each retinal cell type's function. Learning shared feature spaces for neural populations. Antolík and colleagues pioneered the idea of learning a shared nonlinear feature representation for a large population of neurons BID1, which others have also used in both retina and V1 BID3 BID14 BID4. BID16 proposed a framework to learn functional cell types in an unsupervised fashion as a by-product of performing system identification. They propose a structurally constrained readout layer, which provides a compact characterization of each neuron's computation. By enforcing this representation to be sparse, they suggest that the learned features may correspond to distinct functional cell types. Our present work builds upon this idea and extends it to be applicable to V1. Rotation equivariance in CNNs. There is a large body of work on equivariant representations BID20. Here we review only the most closely related approaches. BID6 introduce group-equivariant CNNs, which include rotation equivariance as a special case and form the basis of our approach. BID31 use a steerable filter basis to learn rotation-equivariant CNNs. We essentially use the same approach, but with a different set of steerable filters (2d Hermite functions instead of circular harmonics). RotEqNet BID18 is related to our work in the sense that it also applies multiple rotated versions of each filter to the input. However, instead of maintaining all resulting feature maps as inputs for the next layer, they apply an orientation pooling operation, which reduces each set of feature maps to a two-dimensional vector field.
Harmonic networks BID32 are an alternative approach that achieves full 360° rotation equivariance by limiting the structure of the convolutional filters. Finally, BID7.
A. The network consists of a rotation-equivariant convolutional core common for all neurons (blue box) and a neuron-specific readout [red box; BID16]. Inputs are static images from ImageNet. Prediction targets are responses of 6005 V1 neurons to these images. Rotation equivariance is achieved by using weight sharing across filter orientations. Therefore, eight rotated versions of each filter exist, resulting in eight groups of feature maps (depicted by rainbow-colored boxes). B. Illustration of weight sharing across orientations for the second and third layer. The previous layer's output consists of eight groups of feature maps (one for each rotation angle). The output is generated by applying a learned filter to each of the input feature maps (here 8 × 16 kernels, shown in the first row). Rotated versions (2nd and following rows) are generated by rotating each kernel and cyclically permuting the feature maps. C. Filters are represented in a steerable basis (2d Hermite functions). Functions up to rank 5 are shown here.
Network architecture. Our architecture follows that of BID16. The image is first processed by a multi-layer convolutional core shared by all neurons (FIG0, blue), which we describe in detail below. The resulting feature representation is then turned into a response prediction for each neuron by applying a sparse linear combination across space, followed by a sparse linear combination across features and a pointwise, soft-thresholding nonlinearity (FIG0, red). The entire model is trained end-to-end directly on predicting neural responses, without employing transfer learning or pre-training on auxiliary tasks BID33 BID4. This model has a number of desirable properties in terms of data efficiency and interpretability. The separation into convolutional core and readout, together with the strong structural constraints on the readout, pushes all the 'heavy lifting' into the core, while the readout weights (spatial mask and feature weights) provide a relatively low-dimensional characterization of each neuron's function. Because the core is shared among all neurons (here: thousands), many of which implement similar functions, we can learn complex non-linear functions very accurately. Equivariance. A function f: X → Y is called equivariant with respect to a group of transformations Π if such transformations of the input lead to predictable transformations of the output. Formally, for every π ∈ Π there is a transformation ψ ∈ Ψ such that for every x ∈ X: f(π(x)) = ψ(f(x)). CNNs are shift-equivariant by construction, meaning that every translation of the image leads to a matching translation in the feature maps (i.e. π and ψ are both translations by a number of pixels horizontally and vertically). Shift equivariance is a useful property for neural system identification, because it allows us to represent many neurons that perform similar computations but in different locations in space by a single convolutional feature map instead of learning each neuron's nonlinear input-output function 'from scratch.' Rotation-equivariant CNN. Neurons in V1 do not only perform similar functions at different locations, but also extract similar features with different orientations. Thus, for modeling populations of V1 neurons, learning a representation that is equivariant to rotation in addition to translation would be desirable.
To achieve rotation equivariance, we use group convolutions BID6. Here we use the group of all rotations by multiples of 45°. That is, for each convolutional filter in the first layer, we have eight rotated copies, each of which produces one feature map (FIG0). Thus, if we learn 16 different filters in the first layer, it will have a total of 8 × 16 = 128 feature maps. Formally, if π is a rotation of the image by, say, 45°, then ψ is a rotation of the feature maps by also 45°, combined with a cyclic permutation of the feature maps by one rotation step. For the second (and all subsequent) layers, the procedure becomes a bit more involved. Sticking with the numbers from above, for every feature in the second layer we now learn 128 filters, one for each input feature map (FIG0). To preserve rotation equivariance, we need to create all rotated copies of these filters and cyclically permute the feature maps such that each rotated filter receives the rotated version of the input (depicted by the colored boxes in FIG0, second and following rows). To implement weight sharing across filter orientation without aliasing artifacts, we represent the filters in a steerable basis BID31. We use the two-dimensional Hermite functions in polar coordinates, which form a steerable, orthonormal basis (FIG0; see also BID28 BID11). For filters of size k, we use all 2d Hermite functions up to rank k, which means we have k(k + 1)/2 basis functions. We sample each filter at twice the resolution and then downsample by 2×2 average pooling to reduce aliasing. Neural data. We recorded the responses of 6005 excitatory neurons in primary visual cortex (layers 2/3 and 4) from one mouse by taking two consecutive scans with a large-field-of-view two-photon mesoscope BID25. Activity was measured using the genetically encoded calcium indicator GCaMP6f. V1 was targeted based on anatomical location as verified by numerous previous experiments performing retinotopic mapping using intrinsic imaging. We selected cells based on a classifier for somata on the segmented cell masks and deconvolved their fluorescence traces BID21. We did not filter cells according to visual responsiveness. The acquisition frame rate was 4.8 Hz in both scans. We monitored pupil position, dilation, and absolute running speed of the animal. However, because eye movements were rare and we are primarily interested in the average response of neurons given the visual stimulus, we did not further take into account eye position and running speed. Visual stimuli. Stimuli consisted of 5500 images taken from ImageNet BID22, cropped to fit a 16:9 monitor, and converted to gray-scale. The screen was 55 × 31 cm at a distance of 15 cm, covering roughly 120° × 90° of visual angle. In each scan, we showed 5400 of these images once (training and validation set) and the remaining 100 images 20 times each (test set). Each image was presented for 500 ms followed by a blank screen lasting between 500 ms and 1 s. For each neuron, we extract the accumulated activity between 50 ms and 550 ms after stimulus onset using a Hamming window. Preprocessing. We rescaled the images to 64 × 36 pixels and standardized them by subtracting the mean over all images in the training set and all pixels from each image and dividing by the standard deviation (also taken over all images in the training set and all pixels). We divided the responses of each neuron by its standard deviation over time. We did not center the neural responses, because they are non-negative after deconvolution and zero has a clear meaning.
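The rotate-and-cyclically-permute weight sharing described above can be sketched in PyTorch as follows. For simplicity, this illustration rotates kernels in 90° steps via torch.rot90 instead of the paper's 45° steps in a steerable 2d-Hermite basis, so it realizes the same idea only for a four-element rotation group; the roll direction encodes our assumed group action.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotEquivConv2d(nn.Module):
    """Group convolution with weight sharing across filter rotations.

    Sketch only (assumes odd kernel sizes); not the paper's implementation.
    """

    def __init__(self, in_filters, out_filters, kernel_size,
                 first_layer=False, n_rot=4):
        super().__init__()
        self.n_rot, self.in_filters, self.first_layer = n_rot, in_filters, first_layer
        in_ch = in_filters if first_layer else in_filters * n_rot
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_filters, in_ch, kernel_size, kernel_size))

    def forward(self, x):
        outs = []
        for r in range(self.n_rot):
            w = torch.rot90(self.weight, r, dims=(-2, -1))   # rotate each kernel
            if not self.first_layer:
                # Cyclically permute the input rotation groups so each rotated
                # filter reads the correspondingly rotated feature maps.
                w = w.reshape(w.shape[0], self.n_rot, self.in_filters,
                              *w.shape[-2:]).roll(r, dims=1).flatten(1, 2)
            outs.append(F.conv2d(x, w, padding=w.shape[-1] // 2))
        return torch.cat(outs, dim=1)   # n_rot groups of out_filters maps
```

Stacking such layers keeps the rotation groups aligned through depth, which is what makes the cyclic permutation in FIG0 consistent from layer to layer.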
Model fitting and evaluation. We initialized all weights randomly from a truncated normal distribution with mean zero and standard deviation 0.01. The biases of the batch normalization layers were initially set to zero. In contrast, we set the biases in each neuron's readout to a non-zero initial value, since the neural responses are not centered on zero. We initialized these biases such that the initial model prediction was on average half the average response of each neuron. To fit the models, we used the Adam optimizer BID15 with an initial learning rate of 0.002, a single learning rate decay and early stopping. We monitored the validation loss every 50 iterations and decreased the learning rate once by a factor of 10 when the validation loss had not decreased for five validation steps in a row. We then further optimized the model until the same criterion was reached again. To evaluate the models, we use the following procedure. For each neuron we compute the Pearson correlation coefficient between the model prediction and the average response over the 20 repetitions of the 100 test images. We then average the correlations over all neurons. This approach tells us how well the model predicts the average response of neurons to a given stimulus, ignoring trial-to-trial variability (which is interesting in itself, but not the focus of the present work). Architecture details. Our architecture consists of three convolutional layers with filter sizes of 13, 5 and 5 pixels, respectively. Thus, the receptive fields of the CNN's last layer's units were 21 px, corresponding to ∼60° and covering both the classical and extra-classical receptive field. We use zero padding such that the feature maps maintain the same size across layers. We use 16 filter sets (i.e. 128 feature maps) in the first two layers and 8 to 48 filter sets in the third layer (number cross-validated, see below). After each layer, we apply batch normalization followed by a learned scale and bias term for each feature map.[1] After the first and second layer, but not after the third, we apply a soft-thresholding nonlinearity f(x) = log(1 + exp(x)). The feature maps of the third layer then provide the input for each neuron's readout, which consists of a linear combination first over space and then over features, followed by an added bias term and a final soft-thresholding nonlinearity. Thus, each neuron implements a cascade of three LN operations. Regularization. We use the same three regularizers as BID16. For smoothness of convolution kernels, we set the relative weight of the first layer to twice as strong as in the second and third layer to account for the larger kernels. We apply group sparsity to the convolution kernels of the second and third layer. We regularize the spatial masks and feature weights in the readout to be sparse by applying the L1 penalty on the 3d tensor that results from taking their outer tensor product. The weights of all three regularizers are cross-validated as described in the next paragraph. Model selection. We cross-validated over the number of filter sets in the third layer, ranging from 8 to 48 (i.e. 64 to 384 feature maps[2]), and the strength of the regularizers. For each architecture, we fit 32 models with different initial conditions and randomly drawn hyperparameters (smoothness of filters: 0.001-0.03, group sparsity of filters: 0.001-0.1, sparsity of readout: 0.005-0.03) and chose the best one according to its loss on the validation set (i.e. not using the test set).
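The readout regularizer described above, the L1 penalty on the outer product of each neuron's spatial mask and feature weights, can be written compactly; the array shapes are illustrative assumptions.

```python
import torch

def readout_l1(spatial_mask, feature_weights):
    """L1 sparsity on the outer product of spatial masks and feature weights.

    spatial_mask:    (n_neurons, H, W)
    feature_weights: (n_neurons, n_features)
    """
    # Outer product per neuron: (n_neurons, H, W, n_features)
    outer = spatial_mask[..., None] * feature_weights[:, None, None, :]
    return outer.abs().sum()
```

As a design note, the L1 norm of an outer product factorizes into the product of the factors' L1 norms, so the same penalty can be computed per neuron as ||mask||_1 * ||weights||_1 without materializing the large tensor.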
For all baseline and control models (see below), we also cross-validated over 32 randomly sampled sets of hyperparameters drawn from the same range of values. Baseline and control experiments. As a baseline, we fit a number of regular CNNs without rotation equivariance. These models are completely identical in terms of number of layers, filter sizes, readout layer and fitting procedure, except for the weight sharing constraint across orientation. We fit models with the same number of feature maps as their rotation-equivariant counterparts as well as smaller models with fewer feature maps (see Table 1). Previous work has enforced the feature weights in the readout to be positive BID16. Because we do not enforce such a constraint in the present work, we ran a control experiment in which we enforce the readout weights (both masks and feature weights) to be positive. Sparsity of the spatial masks is well justified, because we know that receptive fields are localized. However, it is unclear whether sparsity on the feature weights is desirable, since the function each neuron implements may not be well aligned with the coordinate axes spanned by the small number of features we learn. Thus, we also fit a model without the sparsity regularizer for the feature weights.
[1] In our experiments the network did not implement exact rotation equivariance, because batch normalization was applied to each feature map individually instead of jointly to all rotated versions. We therefore re-ran a subset of experiments where we corrected this issue and verified that model performance was indistinguishable.
[2] 384 feature maps was the maximum we could fit into 16 GB of available GPU memory.
Table 1: Average correlation on the test set of our rotation-equivariant CNN, two controls, and several regular CNNs as baselines. The three numbers for the CNNs are the number of feature maps in each layer; other parameters are identical to the rotation-equivariant CNN. Standard deviations are across the top-five models of each type.
Tools. We performed our model fitting and analyses using DataJoint BID34, Numpy/Scipy BID29, Matplotlib, Seaborn BID30, Jupyter BID17, Tensorflow BID0, and Docker BID19. Availability of code and models. Code to reproduce all experiments and models as well as pretrained models are available at https://github.com/aecker/cnn-sys-ident. Architecture search and model selection. We start by evaluating the predictive performance of our rotation-equivariant CNN on a dataset of 6005 neurons from mouse primary visual cortex. First, we performed an initial architecture search, drawing inspiration from earlier work BID16 and exploring different numbers of layers and feature maps and different filter sizes. We settled on an architecture with three convolutional layers with filter sizes of 13, 5 and 5 pixels, respectively, and used 16 filter sets (i.e. 128 feature maps) in the first two layers. For the third layer, which provides the nonlinear feature space from which all neurons pool linearly, we cross-validated over the number of features, ranging from 8 to 48 (i.e. 64 to 384 feature maps). Our model achieved an average correlation on the test set of 0.47. The performance improved with the number of features in the third layer, but the improvement was small beyond 16 features (Fig. 2). For the following analyses we therefore focus on this model with 16 features as a compromise between model complexity and performance.
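The evaluation protocol described in the model-fitting section (Pearson correlation between model predictions and the average response over the 20 test-image repetitions, averaged over neurons) amounts to the following; variable names are ours.

```python
import numpy as np

def test_correlation(predictions, responses):
    """Average per-neuron Pearson correlation with the mean test response.

    predictions: (n_images, n_neurons) model outputs for the test images
    responses:   (n_repeats, n_images, n_neurons) observed responses
    """
    mean_resp = responses.mean(axis=0)          # average over the 20 repeats
    corrs = [np.corrcoef(predictions[:, n], mean_resp[:, n])[0, 1]
             for n in range(predictions.shape[1])]
    return float(np.mean(corrs))
```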
It is encouraging that a model with only 16 features is sufficient to accurately model a population as large as 6000 neurons. Earlier work modeling V1 responses used a shared feature space of 10-20 BID1 or 48 features BID16 for populations of 50-100 neurons, which reduced the dimensionality of the feature space by a factor of 2-5 compared to the number of neurons. Here, we reduce the dimensionality by a factor of 375 while achieving a similar level of performance. Rotation-equivariant CNN outperforms regular CNN. To compare our new model to existing approaches, we also fit a regular CNN with identical architecture except for the weight sharing constraint across orientations. In addition, we fit a number of smaller CNNs with fewer feature maps in each layer, which are more similar in terms of number of parameters, but potentially have less expressive power. Our rotation-equivariant CNN outperforms all baselines (Table 1) and generally requires less data (FIG1), showing that enforcing weight sharing across orientations is not only potentially useful for interpreting the model (as we show below), but also serves as a good regularizer to fit a larger, more expressive model. Feature space generalizes to unseen neurons. To show that our network learns common features of V1 neurons, we excluded half of the neurons when fitting the network. We then fixed the rotation-equivariant convolutional core and trained only the readout (spatial mask and feature weights) for the other half of the neurons. The resulting test correlation for these neurons (0.46) was indistinguishable from that of the full model (0.47), showing that the learned features transfer to neurons not used to train the feature space. Feature weights are sparse. The intuition behind the sparse, factorized readout layer is that the spatial mask encodes the receptive field location of each neuron while the feature weights parameterize the neuron's nonlinear computation. We now ask how the neurons are organized within this 16-dimensional function space. On the one extreme of the spectrum, each neuron could pick a random direction, in which case sparsity would not be the right prior and the feature weights should be dense. On the other end of the spectrum, there could be a discrete number of functional cell types. In this case, each cell type would be represented by a single feature and the feature weights should be maximally sparse, i.e. one-hot encodings of the cell type. To analyze the feature weights, we first marginalize them over orientation. To do so, we take the sum of squares over all 8 orientations for each of the 16 features and then normalize them such that the energy of the 16 features sums to one. We find that the feature weights are indeed quite sparse (FIG2): most of the neurons use only 1-5 features, and the strongest feature captures more than 50% of the energy for 63% of the neurons. To ensure that the sparsity of the feature weights is a property of the data and not a trivial result of our L1 penalty, we performed an ablation study, where we fit a model that applies sparsity only on the spatial masks, but uses L2 regularization for the feature weights. This model performed worse than the original model (Table 1) and produced a significantly denser weight distribution (FIG2), suggesting that sparsity is indeed a justified assumption.
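The orientation marginalization used above can be sketched as follows; the rotation-major memory layout of the readout weights is an assumption about how the 8 × 16 feature maps are ordered.

```python
import numpy as np

def orientation_marginalized_energy(feature_weights, n_rot=8, n_features=16):
    """Sum of squares over the 8 rotated copies of each feature, normalized
    so the 16 energies sum to one per neuron.

    feature_weights: (n_neurons, n_rot * n_features), grouped by rotation.
    """
    w = feature_weights.reshape(-1, n_rot, n_features)
    energy = (w ** 2).sum(axis=1)                   # (n_neurons, n_features)
    return energy / energy.sum(axis=1, keepdims=True)
```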
Feature weights have consistent sign. There is a second line of evidence that the features learned by our model are meaningful beyond just providing an arbitrary basis of a 16-dimensional feature space in which neurons are placed at random: the weights that different neurons assign to any given feature have remarkably consistent signs (FIG4). For 11 of 16 features, more than 90% of the non-zero weights have the same sign.[3] Thus, the negation of one feature that drives a large group of neurons is not a feature that drives many neurons. Visualization of the different groups of neurons. Having presented evidence that the features identified by our model do indeed represent a simple and compact, yet accurate description of a large population of neurons, we now ask what these features are. As a first step, we group all neurons into one of 16 groups based on their strongest feature weight. This approach is obviously to some extent arbitrary. By no means do we want to argue that there are exactly 16 types of excitatory neurons in V1 or that excitatory neurons in V1 can even be classified into distinct functional types. Nevertheless, we believe that the 16 groups defined in this way are useful in a practical sense, because they represent features that cover more than 50% of the variance of a large fraction of neurons and therefore yield a very compact description of the most important features of V1 computation across the entire excitatory neuronal population. To visualize what each of these features computes, we pick the 16 most representative examples[4] of each group and use the model to approximate their linear receptive field (RF). To this end, we compute the model gradient at a gray image (FIG5). The crucial point of this plot is that it shows that the premise of rotation equivariance holds: cells are clustered according to similar spatial patterns, but independently of their preferred orientation. As expected, most linear RFs resemble Gabor filters, which show differences in symmetry (odd: #2, #11, #15 vs. even: #3, #6, #12) as well as polarity (#3 vs. #6, #12), while some groups exhibit center-surround structure (#4, #14, #16). We developed a rotation-equivariant convolutional neural network model of V1 that allows us to characterize and study V1 computation independent of orientation preference. Although the visual system is not equivariant to rotation (there are known biases in the distribution of preferred orientations), enforcing weight sharing across orientations allowed us to fit larger, more expressive models given a limited dataset. While our work lays out the technical foundation, we only scratched the surface of the many biological questions that can now be addressed. Future work will have to investigate the learned features in much more detail, test to what extent they generalize across recording sessions and animals, whether they are consistent across changes in the architecture and, most importantly, whether neurons in V1 indeed cluster into distinct, well-defined functional types and whether this organization finds any resemblance in anatomical or genetic properties BID27 of the recorded neurons.
A rotation-equivariant CNN model of V1 that outperforms previous models and suggests functional groupings of V1 neurons.
1,479
scitldr
In classic papers, Zellner demonstrated that Bayesian inference could be derived as the solution to an information theoretic functional. Below we derive a generalized form of this functional as a variational lower bound of a predictive information bottleneck objective. This generalized functional encompasses most modern inference procedures and suggests novel ones. Consider a data generating process φ ∼ p(φ) from which we have some N draws that constitute our training set, x_P = {x_1, x_2, ..., x_N} ∼ p(x|φ). We can also imagine (potentially infinitely many) future draws from this same process, x_F = {x_{N+1}, ...} ∼ p(x|φ). The predictive information I(x_P; x_F)[1] gives a unique measure of the complexity of a data generating process. The goal of learning is to capture this complexity. To perform learning, we form a global representation of the dataset p(θ|x_P). This can be thought of as a learning algorithm that, given a set of observations, produces a summary statistic of the dataset that we hope is useful for predicting future draws from the same process. This algorithm could be deterministic or, more generally, stochastic. For example, imagine training a neural network on some data with stochastic gradient descent. Here the training data would be x_P, the test data x_F, and the neural network parameters would be θ. Our training procedure implicitly samples from the distribution p(θ|x_P). How do we judge the utility of this learned global representation? The mutual information I(θ; x_F) quantifies the amount of information our representation captures about future draws.[2] To maximize learning we therefore aim to maximize this quantity. This is, of course, only interesting if we constrain how expressive our global representation is, for otherwise we could simply retain the full dataset. The amount of information retained about the observed data, I(θ; x_P), is a direct measure of our representation's complexity. The bits a learner extracts from data provide upper bounds on generalization. Combined, these motivate the predictive information bottleneck objective, a generalized information bottleneck: maximize I(θ; x_F) over p(θ|x_P), subject to a constraint on the complexity I(θ; x_P). We can turn this into an unconstrained optimization problem with the use of a Lagrange multiplier β:
max_{p(θ|x_P)} β I(θ; x_P) − I(θ; x_P|x_F).
While this objective seems wholly out of reach, we can make progress by noting that our random variables satisfy the Markov chain x_F ← φ → x_P → θ, in which θ and x_F are conditionally independent given x_P: I(θ; x_F | x_P) = 0. This implies: I(θ; x_P) = I(θ; x_F) + I(θ; x_P | x_F), and the equivalent unconstrained optimization problem:
min_{p(θ|x_P)} I(θ; x_P | x_F) − β I(θ; x_P).
The first term here, I(θ; x_P|x_F), is the residual information between our global representation and the dataset after we condition on full knowledge of the data generating procedure. This is a direct measure of the inefficiency of our proposed representation. Simple variational bounds can be derived for this objective, just as was done for the (local) information bottleneck objective in.[3] First, we demonstrate a variational upper bound on I(θ; x_P | x_F):
I(θ; x_P | x_F) ≤ ⟨D_KL(p(θ|x_P) || q(θ))⟩.[4]
[1] We use I(x; y) for the mutual information between two random variables: I(x; y) = ⟨log p(x, y) − log p(x) − log p(y)⟩.
[2] It is interesting to note that in the limit of an infinite number of future draws, I(θ; x_F) approaches I(θ; φ). Therefore, the amount of information we have about an infinite number of future draws from the process is the same as the amount of information we have about the nature and identity of the data generating process itself.
[3] A similar transformation for the (local) variational information bottleneck appeared in.
[4] ⟨·⟩ is used to denote expectations, and unless denoted otherwise, with respect to the full joint density.
Here we upper bound the residual information by using a variational approximation to p(θ|x_F), the marginal of our global representation over all datasets drawn from the same data generating procedure. Any distribution q(θ) independent of x_F suffices. Next we variationally lower bound I(θ; x_P) with:
I(θ; x_P) ≥ ⟨log q(x_P|θ)⟩ + H(x_P).
The entropy of the training data H(x_P) is a constant outside of our control that can be ignored. Here we variationally approximate the "posterior" of our global representation with a factorized "likelihood": ∏_i q(x_i|θ) = q(x_P|θ) ≈ p(x_P|θ). Notice that while p(x_P|θ) will not factorize in general, we can certainly consider a family of variational approximations that do. Combining these variational bounds, we generate the objective:
max_{p(θ|x_P)} β ⟨log q(x_P|θ)⟩ − ⟨D_KL(p(θ|x_P) || q(θ))⟩.
We have thus derived, as a variational lower bound on the predictive information bottleneck, the objective Zellner postulates (with β = 1) is satisfied for inference procedures that optimally process information. As demonstrated, this encompasses a wide array of modern inference procedures, including Generalized Bayesian Inference and a generalized Variational Inference, dubbed Gibbs VI.[5] Below we highlight some of these and other connections. If, in this equation, we identify q(θ) with a fixed prior and q(x|θ) with a fixed likelihood of a generative model, optimizing this objective for p(θ|x_P) in the space of all probability densities gives the generalized Boltzmann distribution:
p(θ|x_P) = q(θ) q(x_P|θ)^β / Z,
where Z is the partition function.[6] This is a generalized form of Bayesian inference called the power likelihood. Here the inverse temperature β acts as a Lagrange multiplier controlling the trade-off between the amount of information we retain about our observed data (I(θ; x_P)) and how much predictive information we capture (I(θ; x_F)). As β → ∞ (temperature goes to zero), we recover the maximum likelihood solution. At β = 1 (temperature = 1) we recover ordinary Bayesian inference. As β → 0 (temperature goes to infinity), we recover just prior predictive inference that ignores the data entirely. These limits are summarized in Table 1.
[5] To incorporate the Generalized VI with divergence measures other than KL, we need only replace our mutual informations (which are KL based) with their corresponding generalizations.
Table 1: Power Bayes can be recovered as a variational lower bound on the predictive information bottleneck objective (Equation).
More generally, notice that in this equation the densities q(x|θ) and q(θ) are not literally the likelihood and prior of a generative model; they are variational approximations that we have complete freedom to specify. This allows us to describe other more generalized forms of Bayesian inference such as Divergence Bayes or the full Generalized Bayes, provided we can interpret the chosen loss function as a conditional distribution. If we limit the domain of p(θ|x_P) to a restricted family of parametric distributions, we immediately recover not only standard variational inference, but a broad generalization known as Gibbs Variational Inference. Furthermore, nothing prevents us from making q(x|θ) or q(θ) themselves parametric and simultaneously optimizing those. Optimizing the prior with a fixed likelihood, unconstrained p(θ|x_P), and β = 1, the objective mirrors Empirical Bayesian approaches, including the notion of reference priors.
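As a concrete check of these temperature limits, the power posterior has a closed form in a conjugate Gaussian model; this specific worked example is ours, not the paper's.

```python
import numpy as np

def power_posterior_gaussian(x, beta, mu0=0.0, sigma0=1.0, sigma=1.0):
    """Closed-form power posterior for a Gaussian mean theta:
    p(theta|x) ∝ N(theta; mu0, sigma0^2) * prod_i N(x_i; theta, sigma^2)^beta.
    Returns the posterior mean and standard deviation."""
    n = len(x)
    prec = 1.0 / sigma0**2 + beta * n / sigma**2
    mean = (mu0 / sigma0**2 + beta * np.sum(x) / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

x = np.random.default_rng(0).normal(2.0, 1.0, size=50)
for beta in [0.0, 1.0, 100.0]:   # prior predictive, Bayes, ~maximum likelihood
    print(beta, power_posterior_gaussian(x, beta))
```

At beta = 0 the posterior equals the prior; at beta = 1 it is the ordinary Bayesian posterior; as beta grows, it concentrates on the sample mean, matching the limits in Table 1.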
Alternatively, optimizing a parametric likelihood with a parametric representation p(θ|x_P), fixed prior, and β = 1 equates to a Neural Process. Consider next data augmentation, where we have some stochastic process that modifies our data with implicit conditional density t(x′|x). If the augmentation procedure is centered about zero, so that ⟨x′⟩_{t(x′|x)} = x, and our chosen likelihood function is concave, then we have:
⟨log q(x′|θ)⟩_{t(x′|x)} ≤ log q(x|θ),
which maintains our bound. For example, for an exponential family likelihood and any centered augmentation procedure (like additive mean-zero noise), doing generalized Bayesian inference on an augmented dataset is also a lower bound on the predictive information bottleneck objective. We have shown that a wide range of existing inference techniques are variational lower bounds on a single predictive information bottleneck objective. This connection highlights the drawbacks of these traditional forms of inference. In all cases considered in the previous section, we made two choices that loosened our variational bounds. First, we approximated p(x_P|θ) with a factorized approximation q(x_P|θ) = ∏_i q(x_i|θ). Second, we approximated the future conditional marginal p(θ|x_F) = ∫ dx_P p(θ|x_P) p(x_P|x_F) as an unconditional "prior". Neither of these approximations is necessary. For example, consider the following tighter "prior": q(θ|x_F) ≈ ∫ dx_P p(θ|x_P) q(x_P|x_F). Here we reuse a tractable global representation p(θ|x_P) and instead create a variational approximation to the density of alternative datasets drawn from the same process: q(x_P|x_F). We believe this information-theoretic, representation-first perspective on learning has the potential to motivate new and better forms of inference.
Rederive a wide class of inference procedures from a global information bottleneck objective.
1,480
scitldr
In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which decides for each example on the estimated value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically whether to access the privileged input or not. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information. A model that generalizes effectively should be able to pick up on relevant cues in the input while ignoring irrelevant distractors. For example, if one wants to cross the street, one should only pay attention to the positions and velocities of the cars, disregarding their color. The information bottleneck formalizes this in terms of minimizing the mutual information between the bottleneck representation layer and the input, while maximizing its mutual information with the correct output. This type of input compression can improve generalization, and has recently been extended to deep parametric models, such as neural networks, where it has been shown to improve generalization. The information bottleneck is generally intractable, but can be approximated using variational inference. This variational approach parameterizes the information bottleneck model using a neural network (i.e., an encoder). While the variational bound makes it feasible to train (approximate) information bottleneck layers with deep neural networks, the encoder in these networks (the layer that predicts the bottleneck variable distribution conditioned on the input) must still process the full input before it is compressed and irrelevant information is removed. The encoder itself can therefore fail to generalize, and although the information bottleneck minimizes mutual information with the input on the training data, it might not compress successfully on new inputs. To address this, we propose the variational bandwidth bottleneck.
To The information bottleneck (IB) objective is formulated as the maximization of I(Z; Y) − βI(Z; X), where X refers to the input signal, Y refers to the target signal, Z refers to the compressed representation of X, and β controls the trade-off between compression and prediction. The IB has its roots in channel coding, where a compression metric I(Z; X) represents the capacity of the communication channel between Z and X. Assuming a prior distribution r(Z) over the random variable Z, constraining the channel capacity corresponds to limiting the information by which the posterior p(Z|X) is permitted to differ from the prior r(Z). This difference can be measured using the Kullback-Leibler (KL) divergence, such that D KL (p(Z|X) r(Z)) refers to the channel capacity. Now, we write the equations for the variational information bottleneck, where the bottleneck is learnt on both the standard input S as well as a privileged input G. The Data Processing Inequality (DPI) for a Markov chain x → z → y ensures that I(x; z) ≥ I(x; y). Hence for a bottleneck where the input is comprised of both the standard input as well as privileged input, we have I(Z; G|S) ≥ I(Y ; G|S). To obtain an upper bound on I(Z; G|S), we must first obtain an upper bound on I(Z; G|S = s), and then average over p(s). We get the following : We ask the reader to refer to the section on the conditional bottleneck in the supplementary material for the full derivation. The variational bandwidth bottleneck: Based on the standard input S, the channel capacity network determines the capacity of the bottleneck Z. The channel capacity then determines the probability of accessing the privileged input. In the event that the privileged input is not accessed, no part of the model actually reads its value. We now introduce our proposed method, the variational bandwidth bottleneck (VBB). The goal of the variational bandwidth bottleneck is to avoid accessing the privileged input G if it is not required to make an informed decision about the output Y. This means that the decision about whether or not to access G must be made only on the basis of the standard input S. The standard input is used to determine a channel capacity, d cap, which controls how much information about G is available to compute Z. If d cap denotes the channel capacity, one way to satisfy this channel capacity is to access the input losslessly with probability d cap, and otherwise send no information about the input at all. In this communication strategy, we have p(Z|S, G) = δ(f enc (S, G)) if we choose to access the privileged input (with probability d cap), where f enc (S, G) is a deterministic encoder, and δ denotes the Dirac delta function. The full posterior distribution p(Z|S, G) over the compressed representation can be written as a weighted mixture of (a) (deterministically) accessing the privileged input and standard input and (b) sampling from the prior (when channel capacity is low), such that z is sampled using This modified distribution p(Z|S, G) allows us to dynamically adjusts how much information about G is transmitted through Z. As shown in the Figure 1, if d cap is set to zero, Z is simply sampled from the prior and contains no information about G. If it is set to one, the privileged information in G is deterministically transmitted. The amount of information about G that is transmitted is therefore determined by d cap, which will depend only on the standard input S. 
This means that the model must decide how much information about the privileged input is required before accessing it. Optimizing the information bottleneck objective with this type of bottleneck requires computing gradients through the term D_KL(p(Z|S, G) || r(Z)) (as in Eq. 1), where z ∼ p(Z|S, G) is sampled as in Eq. 2. The non-differentiable binary event, whose probability is represented by d_cap, precludes us from differentiating through the channel capacity directly. In the next sections, we will first show that this mixture can be used within a variational approximation to the information bottleneck, and then describe a practical approximation that allows us to train the model with standard backpropagation. In this section, we show how we can evaluate the channel capacity in a tractable way. We learn a deterministic function B(S) of the standard input S which determines the channel capacity. This function outputs a scalar value d_cap ∈ [0, 1], which is treated as the probability of accessing the information about the privileged input. This deterministic function B(S) is parameterized as a neural network. We then access the privileged input with probability d_cap = B(S). Hence, the resulting distribution over Z is a weighted mixture of accessing the privileged input f_enc(S, G) with probability d_cap and sampling from the prior with probability 1 − d_cap. At inference time, using d_cap, we sample from the Bernoulli distribution b ∼ Bernoulli(d_cap) to decide whether to access the privileged input or not. Here, we show the KL objective which allows for tractable optimization of D_KL(p(Z|S, G) || r(Z)) (as in Eqs. 1 and 2). Proposition 1: Given the standard input s, privileged input g, bottleneck variable z, and a deterministic encoder f_enc(s, g), we can express the D_KL between the weighted mixture and the prior in closed form; the proof is given in the section "Tractable Optimization of the KL Objective" in the supplementary appendix. This expression is fully differentiable with respect to the parameters of f_enc(g, s) and B(s) = d_cap, making it feasible to use standard gradient-based optimizers. Summary: As in Eq. 2, we approximate p(Z|S, G) as a weighted mixture of f_enc(S, G) and the normal prior, such that the resulting KL term can be seen as a bound on the information bottleneck objective. When we access the privileged input G, we pay a cost equal to I(Z; G|S), which is bounded by D_KL(p(Z|S, G) || r(Z)) as in Eq. 1. Hence, optimizing this objective causes the model to avoid accessing the privileged input when it is not necessary. In order to show how the proposed model can be implemented, we consider a sequential decision making setting, though our variational bandwidth bottleneck could also be applied to other learning problems. In reinforcement learning, the problem of sequential decision making is cast within the framework of MDPs. Our proposed method depends on two sources of input, a standard input and a "privileged" input. In reinforcement learning, privileged inputs could be the result of performing some upstream computation, such as running model-based planning. They can also be information from the environment, such as the goal or the result of active perception. In all these settings, the agent must decide whether to access the privileged input or not. If the agent decides to access the privileged input, then the agent pays an "information cost."
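A minimal sketch of the VBB sampling path (Eq. 2) might look as follows. The layer sizes, module names, and the choice to run the encoder on every row (rather than gathering only the rows that access g) are our simplifications for illustration, not the paper's implementation; Proposition 1 supplies the differentiable KL term used for training.

```python
import torch
import torch.nn as nn

class VBBLayer(nn.Module):
    """Channel-capacity network B(s) plus mixture sampling of z (Eq. 2)."""

    def __init__(self, s_dim, g_dim, z_dim, hidden=64):
        super().__init__()
        self.z_dim = z_dim
        self.capacity = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1), nn.Sigmoid())
        self.encoder = nn.Sequential(nn.Linear(s_dim + g_dim, hidden),
                                     nn.ReLU(), nn.Linear(hidden, z_dim))

    def forward(self, s, g):
        d_cap = self.capacity(s)                  # (B, 1), in [0, 1]
        b = torch.bernoulli(d_cap)                # per-example access decision
        # For brevity we encode g for every row; a faithful implementation
        # would encode only rows with b = 1, so unaccessed rows never read g.
        z_enc = self.encoder(torch.cat([s, g], dim=-1))
        z_prior = torch.randn(s.shape[0], self.z_dim, device=s.device)
        z = torch.where(b.bool(), z_enc, z_prior)  # mixture sample (Eq. 2)
        return z, d_cap
```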
The objective is to maximize the expected reward and reduce the cost associated with accessing the privileged input, such that, across all states on average, the information cost of using the privileged input is minimal. We parameterize the agent's policy π_θ(A|S, G) using an encoder p_enc(Z|S, G) and a decoder p_dec(A|S, Z), parameterized as neural networks. Here, the channel capacity network B(S) takes in the standard input, which is used to determine the channel capacity; depending on it, we decide whether to access the privileged input as in Section 4.1, and then output the distribution over actions. That is, Y is A, and π_θ(A|S, G) = ∫ dz p_dec(A|S, z) p_enc(z|S, G). This corresponds to minimizing I(A; G|S), resulting in the objective
max_θ E_{π_θ}[Σ_t r_t] − β I(A; G|S).
6 RELATED WORK
A number of prior works have studied information-theoretic regularization in RL. For instance, van Dijk and colleagues use information-theoretic measures to define relevant goal information, which then could be used to find subgoals. Our work is related in that our proposed method could be used to find relevant goal information, but without accessing the goal first. Information-theoretic measures have also been used for exploration. More recently, InfoBot was proposed, where "decision" states are identified by training a goal-conditioned policy with an information bottleneck. In InfoBot, the goal-conditioned policy always accesses the goal information, while the proposed method conditionally accesses the goal information. The VBB is also related to work on conditional computation. Conditional computation aims to reduce computation costs by activating only a part of the entire network for each example. Our work is related in the sense that we activate the entire network, but only conditionally access the privileged input. Another point of comparison for our work is the research on attention models. These models typically learn a policy that allows them to selectively attend to parts of their input. However, these models still need to access the entire input in order to decide where to attend. Our method dynamically decides whether to access privileged information or not. As shown in our experiments, our method performs better than an attention-based method. Recently, many models have been shown to be effective at learning communication in multi-agent reinforcement learning. One such approach learns a deep neural network that maps inputs of all the agents to their respective actions. In this particular architecture, each agent sends its state as the communication message to other agents. Thus, when each agent takes a decision, it uses information from all the other agents. In our proposed method, each agent communicates with other agents only when it is necessary. In this section, we evaluate our proposed method and study the following questions: (a) Better generalization? Does the proposed method learn an effective bottleneck that generalizes better on test distributions, as compared to the standard conditional variational information bottleneck? (b) Learn when to access privileged input? Does the proposed method learn when to access the privileged input dynamically, minimizing unnecessary access? We compare the proposed method to the following methods and baselines: Conditional variational information bottleneck (VIB): The agent always accesses the privileged input, with a VIB using both the standard and the privileged input (InfoBot). Deterministically accessing privileged input: The agent can deterministically access both the state as well as the privileged input.
This has been shown to improve generalization in RL problems (UVFA). We compare the proposed method to simpler reinforcement-learning baselines, where accessing privileged information is formalized as one of the available actions, leading to the same state but with more information, at the cost of a small negative reward. This baseline evaluates whether the explicit VBB formulation provides a benefit over a more conventional approach, where the MDP itself is reformulated to account for the cost of information. Randomly accessing goal (RAG): Here, we compared the proposed method to the scenario where we randomly access the privileged input (e.g., 50% of the time). This baseline evaluates whether the VBB is selecting when to access the goal in an intentional and intelligent way. Model-based planning can be computationally expensive, but beneficial in temporally extended decision making domains. In this setting, we evaluate whether the VBB can dynamically choose to invoke the planner as infrequently as possible, while still attaining good performance. While planning with a learned dynamics model of the environment is straightforward, it is not cheap, as it involves running the planner at every step. So, here we try to answer whether the agent can decide, based on the standard input, when to access the privileged input (the output of the model-based planner). Experimental Setup: We consider a maze world as shown in Figure 2(a). The agent is represented by a blue dot, and the agent has to reach the goal (represented by a green dot). The agent has access to a dynamics model of the environment (which is pretrained and represented using a parameterized neural network). In this task, the agent only gets a partial view of its surroundings, i.e., the agent observes a small number of squares in front of it. The agent has to reach the goal position from the start position, and the agent can use the pretrained dynamics model to sample multiple plausible trajectories; the output of the dynamics model is fed as a conditional input to the agent's policy (similar to Racanière et al., 2017). Thus the agent can use this dynamics model to predict possible futures, and then make an informed decision based on its current state as well as the result of the prediction from the dynamics model.
Table 1. Near the junction: 72% ± 5%. In the hallway: 28% ± 4%.
In this setup, the current state of the agent (i.e. the egocentric visual observation) acts as the standard input S, and the result of running the planner acts as the privileged input G. In order to avoid running the model-based planner unnecessarily, the agent needs to decide when to access the more costly planner. Results: Here, we analyze when the agent accesses the output of the planner. We find that most of the time the agent accesses the privileged information (the output of the model-based planner) near the junctions, as shown in Table 1.
Figure 3: Partially observable FindObjSX environments. The agent is placed in the central room. An object is placed in one of the rooms and the agent must navigate to the object in a randomly chosen outer room to complete the mission. The agent again receives an egocentric observation (7 × 7 pixels), and the difficulty of the task increases with X. For more details refer to the supplementary material.
The goal of this experiment is to show that, by selectively choosing when to access the privileged input, the agent can generalize better with respect to this input.
We consider an agent navigating through a maze comprising sequences of rooms separated by doors, as shown in Figure 7. We use a partially observed formulation of the task, where the agent only observes a small number of squares ahead of it. These tasks are difficult to solve with standard RL algorithms, not only due to the partial observability of the environment but also the sparsity of the reward, since the agent receives a reward only upon reaching the goal; the low probability of reaching the goal randomly further exacerbates these issues. The privileged input in this case corresponds to the agent's relative distance to the goal G. At junctions, the agent needs to know where the goal is so that it can make the right turn, while within a particular room the agent doesn't need much information about the goal. Hence, the agent needs to learn to access goal information when it is near a door, where it is most valuable. The current visual input acts as the standard input S, which is used to compute the channel capacity d_cap. We compare against a goal-conditioned baseline (UVFA) and InfoBot (a conditional variant of the VIB). We use different mazes for training, validation, and testing, and evaluate generalization to an unseen distribution of tasks (i.e., more rooms than were seen during training). We experiment on both RoomNXSY (X rooms, each of at most size Y; for more details, refer to Appendix G) and the FindObjSY environment. For RoomNXSY, we train on RoomN2S4 (2 rooms of at most size 4), and evaluate on RoomN6S6 (6 rooms of at most size 6) and RoomN12S10 (12 rooms of at most size 10). We also evaluate on the FindObjSY environment, which consists of 9 connected rooms of size (Y − 2) × (Y − 2) arranged in a grid. For FindObjSY, we train on FindObjS5, and evaluate on FindObjS7 and FindObjS10. Table 3: Goal-driven navigation — percentage of time steps on which each method accesses the goal information when the agent is near a junction (branching) point in the maze: VBB 76% ± 6%; InfoBot 60% ± 3%; AIC 62% ± 6%. We show that the proposed method learns to access the privileged input (in this case, the goal) only when necessary. Results: Tables 3a and 3b compare an agent trained with the proposed method to a goal-conditioned baseline (UVFA), a conditional variant of the VIB, and the baseline where accessing goal information is formulated as one of the actions (AIC). We also investigate how many times the agent accesses the goal information. We first train the agent on MultiRoomN2S4, and then evaluate this policy on MultiRoomN12S10, sampling 500 trajectories in the MultiRoomN12S10 environment. Ideally, if the agent has learned when to access goal information (i.e., near the doorways), it should only access the goal information when it is near a door. We take sample rollouts from the pretrained policy in this new environment and check whether the agent is near a junction point (or doorway) when it accesses the goal information. Table 3 quantitatively compares the proposed method with the different baselines, showing that the proposed method indeed learns to generalize with respect to the privileged input (i.e., the goal). Next, we investigate the case where the privileged input is expensive to obtain, and we therefore would like to minimize how often the agent must access it. We specifically consider multi-agent communication, where in order to solve a task, agents must communicate with other agents. Here we show that selectively deciding when to communicate with another agent can result in better learning.
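Before moving on, a sketch of how the junction-access statistic of Table 3 could be computed over sampled rollouts; the trajectory format and the is_junction predicate are our assumptions, not part of the paper:

    def junction_access_rate(trajectories, is_junction):
        # Over sampled rollouts, count how often goal accesses coincide with
        # junction states. Each trajectory is assumed to be a list of
        # (state, accessed_goal) pairs; is_junction is task-specific.
        hits, total = 0, 0
        for traj in trajectories:
            for state, accessed in traj:
                if accessed:
                    total += 1
                    hits += int(is_junction(state))
        return hits / max(total, 1)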
Experimental setup: We use the setup proposed in prior work on emergent multi-agent communication. The environment consists of N agents and M landmarks. Both the agents and landmarks exhibit different characteristics such as different colors and shape types. Agents can act to move in the environment, and can also be affected by interactions with other agents. Aside from taking physical actions, agents communicate with other agents using verbal communication symbols. Each agent has a private goal that is not observed by any other agent; the goal is grounded in the real physical environment, and might include moving to a particular location. It could also involve other agents (for example, requiring a particular agent to move somewhere), and hence communication between agents is required. We consider the cooperative setting, in which the problem is to find a policy that maximizes expected return for all the agents. In this scenario, the current state of the agent is the standard input S, and the information obtained as a result of communication with other agents is the privileged input G. For more details refer to Appendix D. Table 4: Multi-agent communication — distance of the agents from their target landmarks (lower is better), with the percentage of time steps on which the privileged input is accessed in brackets, for the 6-agent and 10-agent tasks respectively: Emergent Communication 4.85 (100%) ± 0.1%, 5.44 (100%) ± 0.2%; Randomly Accessing (RAG) 4.95 (50%) ± 0.2%, 5.65 (50%) ± 0.1%; InfoBot 4.81 (100%) ± 0.2%, 5.32 (100%) ± 0.1%; VBB (ours) 4.72 (23%) ± 0.1%, 5.22 (34%) ± 0.05%. The VBB performs better compared to the baselines, in which all of the agents communicate with all the other agents all the time. Averaged over 5 random seeds. Tasks: We consider two tasks: (a) 6 agents and 6 landmarks, and (b) 10 agents and 10 landmarks. The goal is for the agents to coordinate with each other and reach their respective landmarks. We measure two metrics: (a) the distance of the agent from its destination landmark, and (b) the percentage of times the agent accesses the privileged input (i.e., information from the other agents). Table 4 shows the relative distance as well as the percentage of times agents access information from other agents (in brackets). Results: Table 4 compares an agent trained with the proposed method to the emergent-communication baseline and InfoBot. We also study how many times an agent accesses the privileged input. As shown in Table 4 (within brackets), the VBB achieves better results compared to the other methods, even when accessing the privileged input less than 40% of the time. The VBB transmits a similar number of bits while accessing privileged information only a fraction of the time. Using REINFORCE to learn the parameter of the Bernoulli does not perform as well as the proposed method. Channel capacity: We can quantify the average information transmission through both the VBB and the VIB in bits. The average information is similar to that of the conventional VIB, while the privileged input is accessed only a fraction of the time (the VIB accesses it 100% of the time). In order to show empirically that the VBB minimizes information transmission (Eq. 1 in the main paper), we measure the average channel capacity D_KL(p(z|s, g) ‖ r(z)) numerically and compare the proposed method with the VIB, which must access the privileged input every time (see Table 5). We demonstrated how the proposed variational bandwidth bottleneck (VBB) helps generalization over the standard variational information bottleneck, in the case where the input is divided into a standard and a privileged component.
Unlike the VIB, the VBB does not actually access the privileged input before deciding how much information about it is needed. Our experiments show that the VBB improves generalization and can achieve similar or better performance while accessing the privileged input less often. Hence, the VBB provides a framework for adaptive computation in deep network models, and applying it to domains where reasoning about access to data and computation matters is an exciting direction for future work. A current limitation of the proposed method is that it assumes independence between the standard input and the privileged input; in practice, however, this assumption does not seem to hurt the results. Future work could investigate how to remove this assumption. In this section, we construct our objective function such that minimizing it minimizes I(Y; G|S). Recall that the IB objective is formulated as the minimization of I(Z; X) − βI(Z; Y), where X refers to the input, Y refers to the model output, and Z refers to the compressed representation, or bottleneck. For the proposed method, we construct our objective as follows: we minimize the mutual information between the privileged input and the output given the standard input, I(Y; G|S), to encode the idea that we should avoid unnecessary access to the privileged input G, and we maximize I(Z; Y). For the VBB, the data processing inequality implies that I(Y; G|S) ≤ I(G; Z|S), so it suffices to bound the latter. To obtain an upper bound on I(G; Z|S), we first obtain an upper bound on I(G; Z|S = s), and then average over p(s). We assume that the privileged input G and the standard input S are independent of each other, hence p(g|s) = p(g), and we obtain the upper bound I(G; Z|S = s) ≤ E_{p(g)}[D_KL(p(z|s, g) ‖ p_prior(z))], where the inequality follows because we replace the marginal p(z|s) with the prior p_prior(z). We also drop the dependence of the prior over z on the standard input s; while this loses some generality, recall that the predictive distribution p(y|s, z) is already conditioned on s, so information about s itself does not need to be transmitted through z. Marginalizing over the standard input therefore gives I(G; Z|S) ≤ E_{p(s)p(g)}[D_KL(p(z|s, g) ‖ p_prior(z))]. We approximate p(z|s, g) as a weighted mixture of p_enc(z_enc|s, g) and the normal prior, such that z ∼ d_cap · p_enc(z_enc|s, g) + (1 − d_cap) · N(0, I). Hence, the weighted mixture p(z|s, g) can be seen as a bound on the information bottleneck objective. Whenever we access the privileged input G, we pay an information cost equal to I(Z; G|S), which is bounded by D_KL(p(z|s, g) ‖ p_prior(z)). Hence, the objective is to avoid accessing the privileged input, such that on average the information cost of using the privileged input is minimal. We now show how the weighted mixture can be a bound on the information bottleneck objective. In the binary-access case, p(z|s, g) can be expressed as a mixture of a Dirac delta and the prior: p(z|s, g) = d_cap · δ(z − f(s, g)) + (1 − d_cap) · p_prior(z). Expanding D_KL(p(z|s, g) ‖ p_prior(z)) with this mixture, and noting that the Dirac delta term δ(z − f(s, g)) can be taken to have zero mass under the prior, the expression simplifies, and reducing it reduces D_KL(p(z|s, g) ‖ p_prior(z)), our original objective. In the main paper we show how to evaluate channel capacity in a tractable way, by learning a function B(S) which determines the channel capacity.
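A minimal sketch of this weighted-mixture sampling and its per-sample information cost, assuming a diagonal-Gaussian encoder; weighting the KL cost by d_cap is our reading of "pay only on access", not a detail stated in the text:

    import torch

    def sample_bottleneck(mu, log_sigma, d_cap):
        # With weight d_cap, transmit a sample from the encoder
        # p_enc(z|s,g) = N(mu, sigma^2); with weight (1 - d_cap), fall back
        # to the prior N(0, I). The KL term bounds the information cost.
        z_enc = mu + log_sigma.exp() * torch.randn_like(mu)
        z_prior = torch.randn_like(mu)
        z = d_cap * z_enc + (1.0 - d_cap) * z_prior
        kl = 0.5 * (mu.pow(2) + (2 * log_sigma).exp()
                    - 2 * log_sigma - 1.0).sum(-1)  # D_KL(p_enc || N(0, I))
        return z, d_cap.squeeze(-1) * kl            # cost paid only on access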
Empirically, we found the following parameterization of the channel capacity network helpful. In order to represent this function B(S), which satisfies these constraints, we use an encoder of the form B(S) = N(f_μ(S), f_σ(S)), where S refers to the standard input and f_μ, f_σ are learned functions (e.g., multi-layer perceptrons) that output μ and σ respectively for the distribution over z_cap. Here, D_KL(B(S) ‖ N(0, I)) refers to the channel capacity of the bottleneck. In order to get a probability prob out of B(S), we convert B(S) into a scalar prob ∈ [0, 1], such that prob can be treated as the probability of accessing the privileged input. We perform this transformation by normalizing B(S) such that B(S) ∈ [−k, k] (in practice we perform this by clamping B(S) to [−2, 2]), then passing the normalized B(S) through a sigmoid activation function and treating the output as a probability, prob, with which we access the privileged input. Hence, the resulting distribution over z is a weighted mixture of accessing the privileged input f_enc(s, g) with probability prob and sampling from the prior with probability 1 − prob. Here we assume the prior to be N(0, I), but it can also be learned. At test time, using prob, we can sample from the Bernoulli distribution b ∼ Bernoulli(prob) to decide whether to access the privileged input or not. Experimental setup: We use the setup proposed in prior work on emergent multi-agent communication. The environment consists of N agents and M landmarks. Both the agents and landmarks exhibit different characteristics such as different colors and shape types. Agents can act to move in the environment, and can also be affected by interactions with other agents. Aside from taking physical actions, agents communicate with other agents using verbal communication symbols. Each agent has a private goal which is not observed by any other agent; the goal is grounded in the real physical environment, and might include moving to a particular location. It could also involve other agents (for example, requiring a particular agent to move somewhere), and hence communication between agents is required. Each agent performs actions and communicates utterances according to a policy which is identically instantiated for all of the agents in the environment; this policy determines both the actions and the communication protocols. We assume all agents have identical action and observation spaces and receive the same reward signal. We consider the cooperative setting, in which the problem is to find a policy that maximizes expected return for all the agents. In order to study generalization across a wide variety of environmental conditions and linguistic inputs, prior work developed an extension of the puddle-world reinforcement learning benchmark. States in a 10 × 10 grid are first filled with either grass or water cells, such that the grass forms one connected component. The grass region is then populated with six unique objects which appear only once per map (triangle, star, diamond, circle, heart, and spade) and four non-unique objects (rock, tree, horse, and house) which can appear any number of times on a given map. We followed the same experimental setup and hyperparameters as that work. Here, an agent is rewarded for reaching the location specified by the language instruction, and is allowed to take actions in the world. The goal is to generalize the learned representation for a given instruction, such that even if the environment observations are rearranged, this representation is still useful.
Hence, we want to learn representations that tie observations from the environment to the language expressions. Here we consider the puddle-world navigation map introduced in that prior work, and follow the same experimental setup. The current state of the agent acts as the standard input; based on this, the agent decides whether to access the privileged input. We start by converting the instruction text into a real-valued vector using an LSTM. The model first convolves the map layout to a low-dimensional representation (as opposed to the MLP of the UVFA) and concatenates this with the LSTM's instruction embedding (as opposed to a dot product). These concatenated representations are then input to a two-layer MLP. Generalization over both environment configurations and text instructions requires a model that meets two desiderata. First, it must have a flexible representation of goals, one which can encode both the local structure and global spatial attributes inherent to natural language instructions. Second, it must be compositional, in order to learn a generalizable representation of the language even though each unique instruction will only be observed with a single map during training. Namely, the learned representation for a given instruction should still be useful even if the objects on a map are rearranged or the layout is changed entirely. The goal of this experiment is to study whether using the proposed method enables learning a dynamic representation of an image which can then be used to accurately classify the image. To show this, we follow the setup of the Recurrent Attention Model (RAM). Here, the attention process is modeled as a sequential decision process of a goal-directed agent interacting with the visual image. A recurrent neural network is trained to process the input sequentially, attending to different parts within the image one at a time, and combining information from these different parts to build up a dynamic representation of the image. The agent incrementally combines the information gathered from attending to different parts, and uses this integrated information to choose where to attend next. In this case, the information from attending to a particular part of the image acts as the standard input, and the information being integrated over time acts as the privileged input, which is then used to select where the model should attend next. The entire process repeats for N steps (for our experiments, N = 6). Table 6: Classification error, averaged over 3 random seeds, on MNIST and 60 × 60 Cluttered MNIST respectively: FC (2 layers) 1.69%, 11.63%; RAM model (6 locs) 1.55%, 4.3%; VIB (6 locs) 1.58%, 4.2%; VBB (6 locs, ours) 1.42%, 3.8%. FC denotes a fully connected network with two layers of rectifier units, each containing 256 hidden units. Quantitative results: Table 6 shows the classification error for the proposed model as well as for the baseline, the standard RAM model. For both the proposed model and the RAM model, we fix the number of locations to attend to at 6. The proposed method outperforms the standard RAM model. We evaluate the proposed framework using Advantage Actor-Critic (A2C) to learn a policy π_θ(a|s, g) conditioned on the goal. To evaluate the performance of the proposed method, we use a range of multi-room maze tasks from the gym-minigrid framework, with an open-source A2C implementation. For the maze tasks, we use the agent's relative distance to the absolute goal position as the "goal". For the maze environments, we use A2C with 48 parallel workers.
Our actor and critic networks consist of two and three fully connected layers respectively, each of which has 128 hidden units. The encoder network is also parameterized as a neural network, consisting of 1 fully connected layer. We use RMSProp with an initial learning rate of 0.0007 to train the models, for both InfoBot and the baseline, for a fair comparison. Due to the partially observable nature of the environment, we further use an LSTM to encode the state and summarize the past observations. The MultiRoom environments used for this research are part of MiniGrid, an open-source gridworld package. This package includes a family of reinforcement learning environments compatible with the OpenAI Gym framework. Many of these environments are parameterizable so that the difficulty of tasks can be adjusted (e.g., the size of rooms is often adjustable). In MiniGrid, the world is a grid of size N × N. Each tile in the grid contains exactly zero or one object; the possible object types are wall, door, key, ball, box and goal. Each object has an associated discrete color, which can be one of red, green, blue, purple, yellow and grey. By default, walls are always grey and goal squares are always green. Rewards are sparse for all MiniGrid environments. In the MultiRoom environment, episodes are terminated with a positive reward when the agent reaches the green goal square; otherwise, episodes are terminated with zero reward when a time-step limit is reached. In the FindObj environment, the agent receives a positive reward if it reaches the object to be found, and zero reward if the time-step limit is reached. The formula for calculating positive sparse rewards is 1 − 0.9 · (step_count / max_steps). That is, rewards are always between zero and one, and the quicker the agent can successfully complete an episode, the closer to 1 the reward will be. The max_steps parameter is different for each environment, and varies depending on the size of each environment, with larger environments having a higher time-step limit. There are seven actions in MiniGrid: turn left, turn right, move forward, pick up an object, drop an object, toggle, and done. For the purpose of this paper, the pick up, drop and done actions are irrelevant. The agent can use the turn left and turn right actions to rotate and face one of 4 possible directions (north, south, east, west). The move forward action moves the agent from its current tile onto the tile in the direction it is currently facing, provided there is nothing on that tile or that the tile contains an open door. The agent can open doors directly in front of it by using the toggle action. Observations in MiniGrid are partial and egocentric. By default, the agent sees a square of 7 × 7 tiles in the direction it is facing, including the tile the agent is standing on. The agent cannot see through walls or closed doors. The observations are provided as a tensor of shape 7 × 7 × 3; note, however, that these are not RGB images. Each tile is encoded using 3 integer values: one describing the type of object contained in the cell, one describing its color, and a flag indicating whether doors are open or closed. This compact encoding was chosen for space efficiency and to enable faster training. The fully observable RGB image view of the environments shown in this paper is provided for human viewing.
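The sparse-reward rule quoted above, as a one-line helper for reference:

    def minigrid_success_reward(step_count, max_steps):
        # MiniGrid's sparse-reward rule: zero unless the episode succeeds,
        # in which case faster completion yields a reward closer to 1.
        assert 0 <= step_count <= max_steps
        return 1 - 0.9 * (step_count / max_steps)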
The level generation in this task works as follows: (i) generate the layout of the map (X rooms with different sizes, each of at most size Y, and a green goal); (ii) add the agent to the map at a random location in the first room; (iii) add the goal at a random location in the last room. MultiRoomNXSY: in this task, the agent gets an egocentric view of its surroundings, consisting of 3 × 3 pixels. A neural network parameterized as an MLP is used to process the visual observation. Here, the privileged input involves accessing information from an external memory, as in neural Turing machines (NTMs). Reading from external memory is usually an expensive operation, and hence we would like to minimize access to the external memory. For our experiments, we consider external memory in the form of a neural Turing machine. An NTM processes inputs in sequence, much like a normal LSTM, but can learn by accessing information from the external memory. In this context, the state of the NTM's controller (which processes the input) becomes the standard input; based on this standard input we decide the channel capacity, and based on the channel capacity we decide whether to read from external memory or not. In order to test this, we evaluate our approach on the copying task, which tests whether NTMs can store and recall information from the past; we use the same problem setup as the original NTM work. As shown in Figure 8, we found that we can perform slightly better compared to NTMs while accessing external memory only 32% of the time. The only hyperparameter we introduce with the variational information bottleneck is β. For both the VIB baseline and the proposed method, we evaluated 5 values of β: 0.01, 0.09, 0.001, 0.005, 0.009. We use the following parameters for lower-level policies throughout the experiments. Each training iteration consists of 5 environment time steps, and all the networks (value functions, policy, and observation-embedding network) are trained at every time step. Every training batch has a size of 64. The value-function networks and the embedding network are neural networks comprised of two hidden layers, with 128 ReLU units at each hidden layer. All the network parameters are updated using the Adam optimizer with a learning rate of 3 · 10⁻⁴.
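Returning to the external-memory experiment above, a sketch of a capacity-gated memory read; gate_net is a hypothetical capacity network, and the zero-vector fallback on a skipped read is our assumption:

    import torch

    def gated_memory_read(controller_state, memory, read_weights, gate_net):
        # The controller state (standard input) sets the access probability;
        # only on a hit do we pay for the expensive privileged read.
        prob = torch.sigmoid(gate_net(controller_state).clamp(-2, 2))
        if torch.bernoulli(prob).item() == 1:
            return read_weights @ memory       # expensive privileged read
        return torch.zeros(memory.size(-1))    # skip the read entirely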
Training agents with adaptive computation based on information bottleneck can promote generalization.
1,481
scitldr
Breathing exercises are an accessible way to manage stress and many mental illness symptoms. Traditionally, learning breathing exercises involved in-person guidance or audio recordings. The shift to mobile devices has led to a new way of learning and engaging in breathing exercises as seen in the rise of multiple mobile applications with different breathing representations. However, limited work has been done to investigate the effectiveness of these visual representations in supporting breathing pace as measured by synchronization. We utilized a within-subjects study to evaluate four common breathing visuals to understand which is most effective in providing breathing exercise guidance. Through controlled lab studies and interviews, we identified two representations with clear advantages over the others. In addition, we found that auditory guidance was not preferred by all users. We identify potential usability issues with the representations and suggest design guidelines for future development of app-supported breathing training.
We utilized a within-subjects study to evaluate four paced breathing visuals common in mobile apps to understand which is most effective in providing breathing exercise guidance.
1,482
scitldr
Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across the program space of those languages for training. By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of 'useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training. We use a genetic programming approach with an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture. We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic programming-by-example format, greatly improves network performance compared to current uniform sampling techniques. Automated code synthesis is increasingly being studied as a way to lower the entry bar for non-experts to create computer software, and to aid in generally taming the complexity of large-scale systems by allowing engineers to specify their intentions at a higher level of abstraction. The approach of neural code synthesis in particular has recently gained a lot of attention, applying advances in neural networks to the problem of automated synthesis. We specifically study the approach of programming by example, in which a small set of input-output examples is presented to the system to serve as a guide to the desired functionality of a program. Based on an analysis of these examples, the synthesis system returns a source-code program able to replicate that functionality. Recent research in this field demonstrates promising results; however, research to date is limited to using domain-specific languages and often linear sequential programs without conditions or loops. We also take a neural-network-based approach to this problem in an attempt to gain inter-program inference across the training examples given to our system, potentially allowing the system to learn general aspects of programming to help synthesize new programs from unseen input/output examples. Unlike existing recent work, however, we target a general-purpose low-level programming language for code synthesis with a much larger search space of possible programs. This presents a major challenge in generating a training corpus for the neural network. Where related research has used uniform sampling methods through program search space, or even enumerative approaches, such approaches are wholly inadequate over larger search volumes, with sparse sampling producing very poor inference results. To solve this training-corpus generation problem we propose a novel discriminator-based system, in which new sub-corpora are iteratively created, continually measuring their functional properties against those of the problems it is attempting to solve. This process works by learning how similar the I/O mappings of generated programs are to the I/O problems requested by users; by selecting programs which result in increasingly similar I/O mappings, we simultaneously choose programs with similar underlying source-code features, until we are able to solve the I/O problems requested by users. We demonstrate that the resultant training corpus is greatly superior to a conventionally generated corpus via uniform sampling, when using a more generalised programming language for synthesis.
We measure the performance of our approach by comparing against similar research on neural code synthesis which uses uniform or enumerative sampling for training, demonstrating that our discriminator-informed corpus generation approach far exceeds uniform sampling, by a factor of 2, in terms of find rates. We also compare against a general baseline using genetic programming (GP); this baseline produces the surprising result that GP has a broader range of programs found, although its probability of resolving any given user-provided problem is worse. Our approach offers an effective way to generate a training corpus for a high-dimensional program search space, capable of finding a wide range of unseen useful programs based only on input/output examples, without any labelled training data. At a high level, our research also demonstrates that the structure of the training corpus provided to a neural network greatly affects its performance on general-purpose code generation tasks, and we argue that it should therefore represent a core focus of the code synthesis community's efforts alongside work on neural network and language structures. In the remainder of this paper we firstly assess the literature in the field, focusing on neural code synthesis and specifically its corpus generation techniques. In Sec. 3 we then present the methodology we use to build our system, based on both a synthesis network and a discriminator network for corpus generation. We then evaluate our approach in Sec. 4 by comparing it against traditional training corpus generation approaches for neural code synthesis. [Code to reproduce our results will be made open-source should this paper be accepted, and this line will be changed to the link to the repository.] Program synthesis has been studied for nearly as long as machine learning has been considered, having been proposed as early as 1950. Three main subfields exist: logic-based solvers (Feng et al., 2017; 2018; Feser et al., 2015a; b); stochastic-search-based genetic programming (GP); and neural-network-based approaches. While solvers are excellent for restricted domains, they have limited applicability in programs containing loops due to their reliance on predicate logic. Genetic programming has been used to good effect in specific domains, but lacks the cross-program inference power that neural networks offer. We hold neural code synthesis to be an interesting area of study following the results of DeepCoder, which suggest that a neural network can identify higher-level features of program outputs (sortedness, evenness, offset from zero mean, ...); combined with this is the opportunity of inter-program inference within the representational model of the neural network. Within the subfield of neural synthesis, we focus on programming by example, which takes a set of inputs and outputs demonstrating a desired functionality. One of the core works in this field is DeepCoder, a system able to recognise sub-functions within a larger program consisting of up to 5 of these sub-functions called sequentially. This was significantly improved in subsequent work, which used a novel partial-program-execution model whose results are fed back into the synthesis neural network, to enhance search speed and increase the length of programs that can be found up to 14. Both approaches operated in the same domain-specific language, which focused on arrays of integers.
In terms of corpus generation, DeepCoder uses a simple enumerative strategy, as its DSL creates a small enough program space to list all possible programs, while others use random sampling. Two other major domains exist in the field of neural code synthesis; in both of these, we see that synthetic training sets are predominantly generated by uniform sampling. The first of these two domains is string-manipulation functions, as these have immediate and obvious usefulness to non-expert users. While these tasks can be handled by solvers, there are good applications for neural synthesis systems, and for neurally-augmented solver systems. The second domain is deduction of the behavioural algorithm of an agent in an environment, with the examples being sequences of actions taken by the agent in a grid environment. This has been studied both in terms of trained neural networks recognising behavioural patterns and in reinforcement-learning-based approaches to program generation. Our work differs from these domains' use of random uniform sampling for training sets, as well as from the unscalable enumerative strategy of DeepCoder, in that we introduce a strategy for generating a targeted training corpus without any requirement for human-provided labelled data, and we investigate how this generation strategy significantly boosts inference performance. Finally, the most similar work to our own investigates the effects of the inputs fed to sampled programs, in terms of the produced I/O examples' effect on training success. This differs from our work in that it explores which inputs to feed into sampled programs, while we present a new program sampling technique; that approach may well be complementary to our own if used as part of our training corpus generation pipeline. Our overall approach to code synthesis is to feed a set of ten input/output (I/O) examples into a neural network, and have the output layers of that neural network select the most likely operation for each line of code for a function of a given upper length limit (where the number of output layers is equivalent to the maximum number of lines of code in the function, each neuron in an output layer is equivalent to selecting one particular operator for that line, and each line of code can be set to 'no-op'). The probabilistic nature of a neural network's output allows us to gain a set of possible programs to search through, up to a given search depth; this basic approach is similar in spirit to DeepCoder in that it attempts to guess the probability of each 'feature' of a program, except that in our case these features are relatively low-level instructions. Neural networks for code synthesis are usually trained on a uniformly sampled set of programs from the total space of all possible programs; while this is viable for simple domain-specific languages, it becomes intractable for more general-purpose languages. With the aim of achieving code synthesis using a much more general programming language, our methodology focuses on how training data can be effectively drawn from a much larger search space without human input. Our approach to this uses a novel hybrid solution inspired by genetic programming but using a neural network as a fitness function. We term this neural network a discriminator, as it attempts to discriminate between the algorithms the genetic programming element is currently generating, and the likely features needed by requested I/O examples for human-useful functions.
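As a sketch of this discriminator concept (the layer sizes follow the two-layer, 16-node description given later in the paper; the featurisation is assumed to match the synthesiser's, and all names are ours):

    import torch
    import torch.nn as nn

    class IODiscriminator(nn.Module):
        # A small binary classifier over featurized I/O examples, trained to
        # separate generated programs' I/O behaviour from the users'
        # requested I/O mappings.
        def __init__(self, feat_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(feat_dim, 16), nn.ReLU(),
                                     nn.Linear(16, 16), nn.ReLU(),
                                     nn.Linear(16, 1), nn.Sigmoid())

        def forward(self, io_features):
            return self.net(io_features)  # F_d: "human-likeness" in [0, 1]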
We reuse the programming language of Wild and Porter, a simplified C-like language which can be compiled out to C or Java source code. The language features variable assignments, conditional operators, and loop operators. These allow complex flow-control operations, allowing us to study algorithms which go beyond the linear concatenation of functions often used in the literature. We enhance the problem complexity considered in that work by removing the restrictions on array length, instead implementing an operation which instantiates empty arrays of a given length. We also implement operations to allow array length to be determined, and the ability for a function to call itself, allowing recursive functions. Our methodology also avoids using any human-provided hints about the likely features of source code for problem solutions, instead relying only on a set of I/O examples for unsolved problems. The full set of language operators available, and the restrictions used, is included in appendix 6.1. Our human-useful corpus is a set of I/O mappings for 40 unique functions, each of which takes one array input parameter (of any length) and one variable, and returns an array of any length. The set of problems includes reversing arrays, appending arrays with new values, and summing the values in an array (a full list is given in appendix 6.4). We assume that these I/O mappings have been requested by human users, but that none have yet been solved. We further assume that each function has been requested at least five times, with different I/O mappings for each. This gives a total corpus of 200 I/O examples as a guide to the kinds of input-to-output transformations that are considered 'useful'. We stress that at no point is the system ever provided with any source code for these examples. During early experimentation we found it beneficial to apply some conventions to these I/O examples, specifically for the first three examples in the set of ten for a given problem. The first I/O example for any problem is such that the content of each input array cell is the index of that cell, starting from 0, and the input variable has a random value. The second I/O example has the same properties, but the second array cell is randomised to a new value between -8 and 8, inclusive. The third I/O example for every problem is the same as the first except that the input variable is also randomised to the same range. The remaining seven I/O examples can be anything, and are randomly generated in our corpus. Our synthesis pipeline has the challenge of starting from this corpus of unsolved problems, specified by I/O examples, and solving as many of them as possible by synthesising source code which correctly maps the given input to the given output for each problem. Two neural network architectures are used in our synthesis pipeline: a synthesiser and a discriminator. The discriminator, discussed later, is used in generating the corpora on which the synthesiser is trained. The synthesiser network is the one which receives I/O examples and attempts to build a source-code program to match the required functionality. This neural network has an input layer which accepts both the input and output values of each pair of 10 examples for a given problem, such that each input neuron takes a single bit of each integer. Internally we use a set of 8 layers of 256 neurons, each connected to all previous layers, with selu activation, a simplified version of a previously published network.
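As a concrete reading of the I/O-example conventions above, before detailing the network's output structure, a sketch; the candidate function f, the array length, and the exact distinction between the first and third examples (which the text leaves ambiguous) are our assumptions:

    import random

    def conventional_examples(f, array_len=8, lo=-8, hi=8):
        # The three conventional I/O examples; the remaining seven examples
        # in a set of ten would be fully random.
        base = list(range(array_len))               # cell i holds index i
        ex1_in = (base[:], random.randint(lo, hi))  # random input variable
        arr2 = base[:]
        arr2[1] = random.randint(lo, hi)            # randomise the second cell
        ex2_in = (arr2, random.randint(lo, hi))
        ex3_in = (base[:], random.randint(lo, hi))  # as ex1, variable re-drawn
        return [(inp, f(*inp)) for inp in (ex1_in, ex2_in, ex3_in)]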
For output, we have one output layer per line of the program to synthesise. Each output layer has one neuron for each way in which a line can be written (all valid operations), including no-op. A labelled program would be represented as an array of one-hot vectors, with the non-zero value mapping to the way that particular line should be written. We use a cross-entropy training loss on each line. When reading a program out, the activities of all output layers' neurons are taken and ranked, giving a confidence for each option for each line. We then search over the top 1,000,000 programs the network returns as 'most confident', using a beam search technique (detailed in appendix 6.2). Our synthesiser neural network requires a training corpus comprised of the source code of example programs together with the I/O pairs for each program. Based on this training it is then able to solve some of the problems in our set of human-useful I/O examples. To generate this training corpus we use an iterative process of genetic programming and discriminator training, creating a series of increasingly relevant corpora. At the start of this process, we generate an initial corpus of 1,000 functionally unique programs sampled at random from the total space of all possible programs (where we count a program as unique if at least one output value differs from every other program when given the same five randomly-generated input parameters). Corpora other than this starting corpus are created using a parent corpus from the set of accepted corpora (the creation process is detailed below). A child corpus is accepted if it finds an implementation for a human-useful I/O mapping which was not found by an existing accepted lower-ancestry corpus; otherwise it is discarded and cannot be used as a parent. A corpus' ancestry is the number of parent-child relationships it is removed from the initial corpus. When creating a new child corpus, a parent corpus is selected by roulette selection from all currently accepted potential parents, with each potential parent's weighting being (0.1 + number of successful children) / (0.1 + number of children). After the first corpus, further corpora are generated using our discriminator. This is a neural network designed to classify input/output pairs generated by programs in our generated corpus as being closer to, or further away from, the I/O pairs in the human-useful set from users. A new corpus is created by selecting programs from a parent corpus which are measured to be most similar to human-useful programs in their input/output mappings (specifically the form of their outputs, and how these outputs seem to relate to corresponding inputs). By doing this, we hypothesise that the kinds of source-code features found in these programs will similarly move closer to those needed to synthesise programs solving the human-useful I/O examples. Our discriminator neural network then works as follows. Architecturally, it consists of 2 dense layers of 16 nodes, with a single output. The featurisation of programs is identical to that of our synthesis neural network (as described above). We train this network by providing all of our I/O examples for all unfound human-useful target programs, and generating I/O examples for each of the 1,000 programs in the parent corpus. We train the discriminator network as a classifier to determine which of the I/O examples are from our human-useful set, and which are from the generated set. This training continues until a threshold T_k is reached.
T_k is the proportion of programs which would be retained by the discriminator (as described below) if it were run on the parent corpus. T_k is a randomly set value between 0.1 and 1, set as max(0.1, r²), where r is a random real value uniformly distributed between 0 and 1. We use a random value here to increase the diversity of the corpora formed, some being highly similar to the parent and some fairly different, in a bid to maximise coverage. The trained discriminator therefore returns an estimate F_d of how human-like a program is, ranging between 0 and 1. A program is said to pass the discriminator if it has an estimate F_d > 0.1. This second threshold was chosen based on preliminary experiments, particularly analysis of the distribution of estimates, which was found to be highly biased towards either end of the spectrum. The selected programs form the basis of a new child corpus. This child corpus is then expanded to 1,000 functionally unique programs of its own using roulette-wheel selection, in which we take one of the existing programs in the corpus and mutate it, then accept or reject that mutated program as an additional member of the corpus based on a fitness function F_q. The value F_q is simply how much the program exceeded the discrimination threshold (F_d − 0.1). During development, roulette selection was found to produce far superior results to tournament selection when the discriminator values are offset by 0.1, due to bias away from programs which only just passed the threshold. We continue to create new child corpora in this way up to a desired total number. We emphasise that, during this process, we have no access to the human-useful programs, only the I/O examples that users have requested be generated. After multiple rounds of generating self-training data in the above fashion, reaching source-code features that are increasingly likely to be involved in solving the requested I/O examples, we begin to be able to successfully synthesise solutions to the I/O examples of programs requested by users. Our synthesis pipeline returns a collection of solutions to I/O problems in the human-useful set, and also returns the set of generated training corpora used in finding these solutions. We keep all generated corpora, and their corresponding trained synthesiser networks, which we 'kept' in the above iterative process as either a parent or a final child. Note that we keep the intermediate parents along the way because a child corpus is sometimes unable to solve some of the problems of its parent, even though it can solve new problems that its parent could not. These trained neural networks can then be re-used when given new I/O problems not present in the initial human-useful set; each individual trained synthesiser network processes an I/O problem in ∼0.25 seconds. We can also collate all corpora into a single combined corpus on which to train a combined synthesiser neural network. We report the results of both alternatives (the set of individual experts and the combined expert) in the following section.
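A minimal sketch of the child-corpus expansion loop just described, assuming hypothetical discriminator, mutate, and signature helpers standing in for the real pipeline components:

    import random

    def expand_child_corpus(seed_programs, discriminator, mutate, target=1000):
        # Roulette selection weighted by F_q = F_d - 0.1; mutants are kept
        # only if they pass the discriminator (F_d > 0.1) and are
        # functionally unique by their I/O signature.
        corpus = list(seed_programs)
        seen = {p.signature() for p in corpus}
        while len(corpus) < target:
            weights = [max(discriminator(p) - 0.1, 0.0) for p in corpus]
            parent = random.choices(corpus, weights=weights, k=1)[0]
            child = mutate(parent)
            if discriminator(child) > 0.1 and child.signature() not in seen:
                corpus.append(child)
                seen.add(child.signature())
        return corpus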
Noise between repeated experiments therefore derives only from the different systems' internal stochasticity. To evaluate our system we compare it against two baselines: one without using our discriminator, instead generating successive training corpora using only randomly selected parents and mutations, and one using genetic programming on the same problem set. We then compare our approach to one using uniformly-sampled training data, as used in related research for simpler domain-specific languages, and lastly we explore the effects of decisions made by our discriminator in more detail. In this experiment we compare the find rates of programs against two baselines. For the first baseline we use an identical system to our own but with the discriminator removed as a fitness function, such that successive training corpora are generated only using randomly selected parents and mutations; the remainder of our pipeline is kept the same for this baseline. For our second baseline, we use genetic programming, owing to the inability to re-use baselines from other work in the literature (such as the DeepCoder framework), caused by the relative generality of our programming language for synthesis. Our genetic programming technique was designed to require roughly the same total computational time as a full sub-corpus generation pass, to fairly compare the options for resolving user I/O mappings. It uses a population size of 1,000 and a maximum generation count of 10,000. It uses tournament selection, with a tournament size of 6 and a probability of mutation of 0.15. The fitness function is a function of the Euclidean distance between the candidate program's output and the target output, unless the outputs differ in length, in which case the fitness is set to an extremely negative penalty value. Our results are shown in Table 1, demonstrating that the set of networks produced by discriminated sub-corpus generation between them produced the highest resolution rate for the 200 human-useful I/O mappings, returning an average of 81.5 (σ = 8.3). Interestingly, the genetic programming technique found more unique functionalities: of the 40 unique ground-truth programs, the GP baseline found 22.7, while the sub-corpora found 20.3. This means that while the sub-corpus technique had a higher probability of finding a program to match a user-supplied I/O mapping, there existed a stronger inequality between the find rates than for the GP algorithm, which was more consistent in its probability of finding any given program. It is highly likely that this phenomenon is produced by the nature of the discriminator itself, driving the system towards predominantly producing certain types of algorithms. It seems likely that certain properties of certain programs are more recognisable by a neural network, and so emphasised by the discriminator's fitness function; this is a key avenue of future work. In this set of experiments we evaluate the performance of our iteratively-produced training corpus against a baseline randomly-generated training corpus. To do this we take a subset of 5 of the sub-corpora produced, attempting to maximise the number of separate I/O examples found by the set of networks. The collated corpus then discards any duplicated functionalities, giving a training corpus of approximately 4,200 examples. For each of these, we write out a set of training examples, with 5 randomly generated inputs into each function being used to generate the feature I/O examples. We then train the same synthesis neural network on these as before.
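Returning to the GP baseline above, its length-penalised fitness could be sketched as follows; the negated-distance convention (higher is fitter) and the penalty magnitude are our assumptions:

    import math

    def gp_fitness(candidate_output, target_output, penalty=-1e9):
        # Negative Euclidean distance between candidate and target outputs,
        # with a large penalty when the output lengths differ.
        if len(candidate_output) != len(target_output):
            return penalty
        return -math.sqrt(sum((c - t) ** 2
                              for c, t in zip(candidate_output, target_output)))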
For the comparative baseline training data we simply sample 5,000 programs uniformly at random from the space of all possible programs, and again generate 5 training I/O examples using each of these programs. In both cases the corpora are then divided between training and validation in a 0.9:0.1 split, with validation programs being functionally distinct from those in the training set; the training data is then fed into our synthesis neural network and tested. As can be seen in Table 2, the collated corpus produced by the discriminated sub-corpus generation process more than doubles the performance of the randomly generated training corpus. The discriminator has driven program generation towards a set of programs which is far more representative of the types of behaviour present within the human-useful programs. Inspection of the programs within the corpora matches this expectation: there is no drive for a randomly generated program to, say, contain a loop, and if it does, there is no drive towards writing to all or most cells in the output array. While the behaviour of each training program is thus distinct, it is not necessarily distinct in a 'useful' way relative to the types of problems the user wishes to solve. The discriminator, however, can drive towards useful training programs by emphasising programs which write relatively uniformly to output arrays, using loops and with values remaining within sensible ranges, as well as by favouring programs whose outputs depend on their inputs rather than fixed-output programs. Without ever receiving training labels or information about the nature of useful features, the discriminator learns to maximise their presence in the sampled programs. In this section we explore in more detail the effect of our discriminator design. We first demonstrate the difference over time from our first baseline in Sec. 4.1, in which we generate successive corpora using either our discriminator or simple random selection of parents. The results of this are shown in Figure 1, in which we see how many programs from our target human-useful set are solved over time as successive corpora are generated. As expected, both the discriminator setup and the random setup start at the same point, as the initial corpus does not use a discriminator. The first discriminated sub-corpus trains a network which finds an additional 6.7 solutions to I/O examples on average, compared to the non-discriminated corpus' increase of only 2.8. This progress continues as new sub-corpora are added, and is also seen on the graph of unique functionalities found. This clearly demonstrates the value of the discriminator in corpus generation. We next examine the find rates over the 'ancestry' of sub-corpora in detail, to give indications as to the behaviour of the system in response to the discriminator's iterated use. Each sub-corpus past the first uses another as a parent; the ancestry of a sub-corpus is therefore the number of parents since the starting corpus. This varied by experiment, with all runs accepting a corpus with an ancestry of at least 5, and the maximum being a single run which had a sub-corpus of ancestry 12. In Figure 2 we plot the find rates of each program over ancestry progression. We discover that certain programs are trivial to find and do not require discriminator use (Array Length, Array to Zero, ...). These have a find rate of nearly 1.0 before the discriminator is used (black in the first cell). Figure 2: Program find rates over sub-corpus ancestry.
Shading runs from white, at a 0 find rate, to fully black, indicating a 1.0 find rate; a border is used to indicate a non-zero value, and the plot is cut off at ancestry = 8 due to low sample size past this point. The way actual program outputs evolve is detailed in appendix 6.3. Certain programs are found with high reliability past an ancestry of 1, for example the identity program. This program was almost never found without use of the discriminator, but a single use led to it having a nearly 1.0 find rate. This indicates that the discriminator led to a set of sub-corpora which represented the identity function's programmatic behaviour far better. We found that these sub-corpora's programs nearly universally featured loops and sequential array-write operations, functionalities required to produce the identity function. The second use of the discriminator showed similar programs, such as Identity Parity and 'Iterate from Start'. These were rarely found in either a non-discriminated sub-corpus or an ancestry = 1 sub-corpus, but featured regularly at later depths. This reflects the discriminator iterating on its previous selections, attempting to discriminate between programs produced by a first-generation discriminator and the human-useful corpus. The programs now found feature more complex loop-using behaviours than simple reproduction of the input array, such as using conditionals and literals. Past ancestry = 2, however, we no longer see sudden find-rate jumps. This indicates that the discriminator loses its effectiveness and can no longer guide as reliably towards the functionalities of the human-useful corpus. We speculate that the discriminator is unable to force the presence of the required functionalities in the produced training corpus either because (i) the genetic algorithm doesn't produce any for it to select; (ii) it does not have the capacity to represent the behaviours (due to low layer width or depth); or (iii) these functionalities are simply not identifiable from I/O mappings alone. Further study of this is the subject of our future work. This paper has presented a discriminator-based corpus generation technique, which iteratively seeks to generate training programs drawn from the same distribution as the programs it is attempting to solve. It works without the need for labelled training data, generating its own based purely on supplied I/O examples and the underlying properties of the language itself. We show that it is greatly superior to an approach which does not use a discriminator for selecting training examples. Once generation has completed, our framework can also return a collated training corpus, allowing the training of a single large neural network. We show that this collated network is also significantly stronger, in terms of quality of the trained network, than one trained using random sampling techniques. Based on our results, we argue that the way in which training corpora are generated for neural program synthesis deserves significant further study, and may be of equal importance to the design of the neural network used for synthesis itself. In future work we will further explore the ability of discriminator-style networks to identify specific features of code likely to be involved in solving a particular problem, as well as more advanced architectures for synthesis and discriminator networks. Our programs are implemented as a constrained version of C, as in the work of Wild and Porter.
We, however, expand upon that work, increasing the limits on variable count and introducing the concepts of runtime-defined array lengths and recursive function calls. We limit programs to length 14, the maximum number of integer variables to 7, the maximum number of array variables to 2 (the one input and the one ultimately returned), the maximum array length under any condition to 16, and the maximum value ever to between -255 and 255 (inclusive). The operators available are described in Table 3, and represent a superset of the operators used in the work of Wild and Porter. Each line is given a "depth" to search, ranging from a single option up to a maximum of 10 options. The combinations of all programs within this space are initially added to a set, which therefore has a size equal to the product of all the depths. The beam search constructs the initial search volume by iteratively increasing the search depth of a single line so as to maximise the exploration value function. This function is defined as follows: [a_0, a_1, a_2, ..., a_n] represents the ranked option confidences from the neural network, sorted such that a_0 is the highest-confidence option; these are normalised by dividing by a_0. S is the search volume size, and S_i is the search volume size if line i had its depth increased by one. D_i is the current depth of search of a given line, minus one (for example, if D_i is 0, only the first option will be added to the search volume; if it is 1, the top two ranked options will be added to the combinational set of programs, thus doubling the search-space size). The exploration value for increasing the depth of any given line i is then computed from these quantities. This process attempts to drive the neural network towards exploring lines in which multiple options are highly confident, and away from lines where a single option has been given a high confidence and all others a low or negative confidence. Once this iterative addition has produced a set of >= 1,000,000 programs, the 1,000,000 options with the highest sum confidence are selected and searched exhaustively. In Table 4 we see outputs from 15 randomly generated programs, from 3 randomly chosen sub-corpora. The first is the starting corpus of the run, which had no discriminator. The second has a discriminator trained between the human-useful I/O examples and its parent corpus. The third sub-corpus then has a second-generation discriminator, which was trained based on a discriminated corpus and the HU I/O corpus. All outputs are responses to the function being run with an input of: input array = , input integer = 2. We see that the programs in the starting corpus (ancestry = 0), which were randomly sampled from program space, differ greatly from the style of program we are attempting to train the network to synthesise. The majority of all returned values are 0, and the array length varies considerably. There is little evidence that the input array is being read in, or indeed of any use of loops at all. The second-generation corpus (ancestry = 1) shows little use of the input values, but has outputs in more consistent ranges. The output lengths now appear to always match the length of the input array, and the programs clearly use loops to write to the output array. Despite this, the output patterns are highly uniform, often being the same value repeated for most of the output array. The third-generation corpus (ancestry = 2) shows more complex programs still.
In Table 4 we see outputs from 15 randomly generated programs, drawn from 3 randomly chosen sub-corpora. The first is the starting corpus of the run, which had no discriminator. The second has a discriminator trained between the human-useful IO examples and its parent corpus. The third sub-corpus has a second-generation discriminator, which was trained on a discriminated corpus and the HU IO corpus. All outputs are responses to the function being run with an input of input array = and input integer = 2. We see that the programs in the starting corpus, ancestry = 0, which were randomly sampled from program space, differ greatly from the style of program we are attempting to train the network to synthesise. The majority of all returned values are 0, and the array length varies considerably. There is little evidence that the input array is being read in, or indeed of any use of loops at all. The second-generation corpus, ancestry = 1, shows little use of the input values, but has outputs in more consistent ranges. The output lengths now appear to always be the length of the input array, and the programs clearly use loops to write to the output array. Despite this, the output patterns are highly uniform, often being the same value repeated for most of the output array. The third-generation corpus, ancestry = 2, shows more complex programs still. Negative values, which would require arithmetic operations to produce, are present. The values vary across larger ranges, and they show elements of the input array (the last example being the input array changed by a single element). We believe this is fairly illustrative of the behaviour of the discriminator, although a more in-depth analysis of the programs produced could form a good direction for future work. Functionally, all programs within this corpus receive both an array of length 2 to 16, with values ranging between -8 and 8, and an integer also ranging from -8 to 8, the same as in the rest of the paper. Some functions simply do not use the input variable, but this is accomplished solely by ignoring it, not by changing the function input parameter set. The human-useful programs and their functionalities are listed in Table 5, reproduced below; reference implementations of a few of these behaviours are sketched after the list.
Table 5: Human-useful programs, specified by the user and presented to the system as a set of IO mappings only.
• Absolute: Returns an array of length one less than the input array. Values are shifted, such that output_i = input_(i+1); the first input value is therefore not replicated in the output.
• Returns an array of length equal to the input array. Values are shifted, such that output_i = input_(i+1), with the last value of the output array set to zero.
• Returns an array of length one greater than that of the input array. Values are the values in the input array preceded by a zero.
• Returns an array of length equal to that of the input array. Values are such that output_i = input_(i−1), with the first output value being zero.
• Returns an array of length equal to that of the input array. Values are the same elements as in the input array, except that all zero values are moved to the end of the sequence.
• Returns an array of length equal to that of the input array. Output values are −1, 0 or 1: if input_i > 0 then output_i = 1, if input_i < 0 then output_i = −1, else output_i = 0.
• Returns a copy of the input array, sorted in ascending order.
• Returns an array of length equal to that of the input array. Values are such that output_i = (input_i)
• To Iterator: Returns an array of length equal to that of the input array. Values are such that output_i = i.
• Add: Returns an array of length equal to that of the input array. Values are such that output_i = input_i + inputInteger.
• Returns an array of length one greater than that of the input array. Values are those of the input array, followed by the input integer.
• Returns an array of length equal to that of the input array. Values are such that if input_i > inputInteger then output_i = input_i, else output_i = inputInteger.
• Returns an array of length equal to that of the input array. Values are such that if input_i < inputInteger then output_i = input_i, else output_i = inputInteger.
• Returns an array of length equal to that of the input array. Values are such that output_i = input_i + (i * inputInteger).
• Returns an array of length equal to that of the input array. Values are such that output_i = inputInteger.
• Returns an array of length equal to that of the input array. Values are such that if input_i > inputInteger then output_i = 1, else output_i = −1.
• Iterate from Start: Returns an array of length equal to that of the input array. Values are such that output_i = i + inputInteger.
• Returns an array of length equal to that of the input array. Values are such that if input_i < inputInteger then output_i = 1, else output_i = −1.
• Returns an array of length equal to that of the input array. Values are such that output_i = i * inputInteger.
• Returns an array of length equal to that of the input array. Values are such that output_i = input_i − inputInteger.
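For illustration, reference implementations of three of the behaviours listed in Table 5 are given below in Python. The system itself only ever sees such functions as sets of I/O example pairs, never as code.

def add(arr, k):                 # output_i = input_i + inputInteger
    return [x + k for x in arr]

def iterate_from_start(arr, k):  # output_i = i + inputInteger
    return [i + k for i in range(len(arr))]

def threshold_sign(arr, k):      # 1 if input_i > inputInteger, else -1
    return [1 if x > k else -1 for x in arr]

assert add([3, -1, 4], 2) == [5, 1, 6]
assert iterate_from_start([0, 0, 0], 2) == [2, 3, 4]
assert threshold_sign([5, 1, -2], 2) == [1, -1, -1]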
A way to generate training corpora for neural code synthesis using a discriminator trained on unlabelled data
Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input. Most prior work has studied semantically invariant text perturbations which cause a model's prediction to change when it should not. In this work we focus on the complementary problem: excessive prediction undersensitivity, where input text is meaningfully changed and the model's prediction does not change when it should. We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as the original question, and with an even higher probability. We show that, despite comprising unanswerable questions, SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions. This indicates that current models, even where they can correctly predict the answer, rely on spurious surface patterns and are not necessarily aware of all information provided in a given comprehension question. Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model's vulnerability to undersensitivity attacks on held-out evaluation data. Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1. Neural networks can be vulnerable to adversarial input perturbations. In Natural Language Processing (NLP), which operates on discrete symbol sequences, adversarial attacks can take a variety of forms including character perturbations, semantically invariant reformulations or, specifically in Reading Comprehension (RC), adversarial text insertions. A model's inability to handle adversarially chosen input text puts into perspective otherwise impressive generalisation results for in-distribution test sets and constitutes an important caveat to conclusions drawn regarding a model's language understanding abilities. While semantically invariant text transformations can remarkably alter a model's predictions, the converse problem of model undersensitivity is equally troublesome: a model's text input can often be drastically changed in meaning while retaining the original prediction. In particular, previous works show that even after deletion of all but a small fraction of input words, models often produce the same output. However, such reduced inputs are usually unnatural to a human reader, and it is both unclear what behaviour we should expect from natural language models evaluated on unnatural text, and how to use such unnatural inputs to improve models. In this work, we show that in RC undersensitivity can be probed with automatically generated natural language questions. In turn, we use these to both make RC models more sensitive when they should be, and more robust in the presence of biased training data. Fig. 1 shows an example for a BERT LARGE model trained on SQuAD2.0 that is given a text and a comprehension question, i.e. "What was Fort Caroline renamed to after the Spanish attack?", which it correctly answers as "San Mateo" with 98% confidence.
Altering this question, however, can increase model confidence for this same prediction to 99%, even though the new question is unanswerable given the same context. That is, we observe an increase in model probability despite removing relevant question information and replacing it with irrelevant content. We formalise the process of finding such questions as an adversarial search in a discrete input space arising from perturbations of the original question. There are two types of discrete perturbations that we consider, based on part-of-speech tags and named entities, with the aim of obtaining grammatical and semantically consistent alternative questions that do not accidentally have the same correct answer. We find that SQuAD2.0 and NewsQA models can be attacked on a substantial proportion of samples, even with a limited computational adversarial search budget. The observed undersensitivity correlates negatively with standard performance metrics (EM/F1), suggesting that this phenomenon, where present, reflects a model's lack of question comprehension. When training models to defend against undersensitivity attacks with data augmentation and adversarial training, we observe that they can generalise their robustness to held-out evaluation data without sacrificing standard performance. Furthermore, we notice they are also more robust in a learning scenario with dataset bias and a train/evaluation distribution mismatch, increasing their performance by up to 11% F1. In summary, our contributions are as follows: • We propose a new type of adversarial attack targeting the undersensitivity of neural RC models, and show that current models are vulnerable to it. • We compare two defence strategies, data augmentation and adversarial training, and show their effectiveness at reducing undersensitivity errors on held-out data, without sacrificing standard performance. • We demonstrate that robust models generalise better in a biased data scenario, improving their ability to answer questions with many possible answers when trained on questions with only one. Adversarial Attacks in NLP. Adversarial examples have been studied extensively in NLP; we refer to recent surveys for an overview. However, automatically generating adversarial examples in NLP is non-trivial, as the search space is discrete and altering a single word can easily change the semantics of an instance or render it incoherent. Recent work overcomes this issue by focusing on simple semantically invariant transformations, showing that neural models can be oversensitive to such modifications of the inputs. For instance, Ribeiro et al. (2018b) use a set of simple perturbations, such as replacing Who is with Who's. Other semantics-preserving perturbations include typos, the addition of distracting sentences, character-level adversarial perturbations, and paraphrasing. In this work, we instead focus on undersensitivity of neural RC models to semantic perturbations of the input. This is related to previous works leveraging domain knowledge for the generation of adversarial examples: our method is based on the idea that modifying, for instance, the named entities involved in a question can completely change its meaning and, as a consequence, the answer to the question should also differ. Our approach does not assume white-box access to the model, as some prior approaches do. Undersensitivity.
Prior work demonstrated classifier undersensitivity in computer vision, where altered input images can still produce the same prediction scores, achieved using (approximately) invertible networks. Related work investigated over- and undersensitivity in dialogue models and addressed the problem with a max-margin training approach. Ribeiro et al. (2018a) describe a general model diagnosis tool to identify minimal feature sets that are sufficient for a model to form high-confidence predictions. It has also been shown that it is possible to reduce inputs to minimal input word sequences without changing a model's predictions. We see our work as a continuation of this line of inquiry, but with a particular focus on undersensitivity in RC. In contrast to earlier work, we consider concrete alternative questions rather than arbitrarily reduced input word sequences. We furthermore address the observed undersensitivity using dedicated training objectives, in contrast to Ribeiro et al. (2018a) and others who simply highlight it. Finally, one of the baseline methods we later test for defence against undersensitivity attacks is a form of data augmentation that has similarly been used for de-biasing NLP models. Unanswerable Questions in Reading Comprehension. Following the publication of adversarial attacks on the SQuAD1.1 dataset, the SQuAD2.0 dataset was proposed, which includes over 43,000 human-curated unanswerable questions. A second dataset with unanswerable questions is NewsQA, comprising questions about news texts. Training on these datasets should result in models with an ability to tell whether questions are answerable or not; we will see, however, that this does not extend to adversarially chosen unanswerable questions in our undersensitivity attacks. Other work addresses the unanswerability of questions from a given text using additional verification steps, and further approaches have shown the benefit of synthetic data for improving performance on SQuAD2.0. We operate on the same underlying research premise: the ability to handle unanswerable questions is an important part of improving text comprehension models. In contrast to prior work, we demonstrate that, despite improving performance on test sets that include unanswerable questions, the problem persists when adversarially choosing from a larger space of questions. Problem Overview. Consider a discriminative model f_θ, parameterised by a collection of dense vectors θ, which transforms an input x into a prediction ŷ = f_θ(x). In our task, x = (t, q) is a given text t paired with a question q about this text. The label y is the answer to q where it exists, or a NoAnswer label where q cannot be answered. In a text comprehension setting with a very large set of possible answers, predictions ŷ should be specific to x, i.e. not the model prediction for arbitrary inputs. And indeed, randomly choosing a different input (t, q) is usually associated with a change of the model prediction ŷ. However, there exist many examples where the prediction erroneously remains stable; the goal of the attack formulated here is to find such cases. Concretely, given a computational search budget, the goal is to discover inputs x′ for which the model still erroneously predicts f_θ(x′) = f_θ(x), even though x′ is not answerable from the text. Input Perturbation Spaces. Identifying suitable candidates for x′ can be achieved in many ways. A simple option is to search among a large question collection, but we find this approach to only rarely be successful; an example is shown in Table 8, Appendix C.
Generating x′ from scratch, on the other hand, is prone to resulting in ungrammatical or otherwise infeasible text. Instead, we consider a perturbation space X_T(x) spanned by perturbing original inputs x using a perturbation function family T: X_T(x) = { t(x) | t ∈ T }. This space X_T(x) contains alternative model inputs derived from x. Ideally, the transformation function family T is chosen such that the correct label of these new inputs is changed: for x′ ∈ X_T(x), y(x′) ≠ y(x). We will later search in X_T(x) to find inputs x′ which erroneously retain the same prediction as x: ŷ(x′) = ŷ(x). Part-of-Speech (PoS) Perturbations. We first consider the perturbation space X_{T_P}(x) generated by PoS perturbations T_P of the original question: we swap individual tokens with other, PoS-consistent alternative tokens, drawn from large collections of tokens of the same PoS types. For example, we might alter the question Who patronized the monks in Italy? to Who betrayed the monks in Italy? by replacing the past tense verb patronized with betrayed. There is no guarantee that the altered question will require a different answer (e.g. due to synonyms). Even more so: there might be type clashes or other semantic inconsistencies (e.g. Who built the monks in Italy?). We perform a qualitative analysis to investigate the extent of this problem and find that, while a valid concern, for the majority of attackable samples there exist attacks based on correct, well-formed questions (see Section 5). Named Entity Perturbations. The space X_{T_E}(x) generated by the transformation family T_E is created by substituting mentions of named entities in the question with different type-consistent named entities, derived from a large collection E. For example, a comprehension question Who patronized the monks in Italy? could be altered to Who patronized the monks in Las Vegas?, replacing the geopolitical entity Italy with Las Vegas, chosen from E. Altering named entities often changes the specifics of the question and poses different requirements on the answer, which are unlikely to be satisfied by what is stated in the given text t, given the broad nature of possible entities in E. While it is not guaranteed that perturbed questions are in fact unanswerable or require a different answer, we will find in a following qualitative analysis that in the large majority of cases they do. Undersensitivity Attacks. Thus far we have described different methods of perturbing questions. We will search in the resulting perturbation spaces X_{T_P}(x) and X_{T_E}(x) for inputs x′ for which the model prediction remains constant. However, we pose a slightly stronger requirement: f_θ should assign an even higher probability to the same prediction ŷ(x′) = ŷ(x) than it assigns for the original input, i.e. p(ŷ(x) | x′) > p(ŷ(x) | x). Note that this is a conservative choice, guaranteed to preserve the prediction. To summarise, we are searching in a perturbation space for altered questions which result in a higher model probability for the same answer as the original input question. If we have found such an altered question satisfying the inequality, we have identified a successful adversarial attack, which we will refer to as an undersensitivity attack. Adversarial Search in Perturbation Space. In its simplest form, a search for an adversarial attack in the previously defined attack spaces amounts to a search over a list of single lexical alterations for the maximum (or any) higher prediction probability.
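A minimal sketch of this single-step variant with named-entity perturbations follows. It assumes spacy with the en_core_web_sm model installed for NER, a dictionary entity_sets mapping NE labels to the collection E of alternative strings, and a hypothetical qa_model(context, question) callable returning the model's answer and its probability; none of these names are from the paper's code.

import random
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def ne_perturbations(question, entity_sets, n_samples=32):
    # Swap each entity mention for type-consistent alternatives from E
    # (entity_sets: dict mapping NE label -> list of candidate strings).
    doc = nlp(question)
    for ent in doc.ents:
        pool = entity_sets.get(ent.label_, [])
        for alt in random.sample(pool, min(n_samples, len(pool))):
            if alt != ent.text:
                yield question.replace(ent.text, alt)

def single_step_attack(context, question, entity_sets, qa_model):
    answer, p_orig = qa_model(context, question)
    for q_alt in ne_perturbations(question, entity_sets):
        a_alt, p_alt = qa_model(context, q_alt)
        if a_alt == answer and p_alt > p_orig:   # p(y_hat | x') > p(y_hat | x)
            return q_alt                          # successful undersensitivity attack
    return None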
We can, however, recur the replacement procedure multiple times, arriving at texts with potentially larger lexical distance to the original question. For example, in two iterations of PoS-consistent lexical replacement, we can alter Who was the duke in the battle of Hastings? to inputs like Who was the duke in the expedition of Roger? The space of possibilities grows combinatorially with increasing distance, and with increasing perturbation radius it becomes computationally infeasible to comprehensively cover the full perturbation spaces arising from iterated substitutions. To address this, we apply beam search to narrow the search space, and seek to maximise the difference ∆(x′) = p(ŷ(x) | x′) − p(ŷ(x) | x). Beam search is conducted up to a pre-specified maximum perturbation radius ρ, but once an x′ with ∆ > 0 has been found, we stop the search. Relation to Attacks in Prior Work. Note that this type of attack stands in contrast to other attacks based on small, semantically invariant input perturbations, which investigate oversensitivity problems. Such semantic invariance comes with stronger requirements and relies on synonym dictionaries or paraphrases harvested from back-translation, which are both incomplete and noisy. Our attack is instead focused on undersensitivity, i.e. cases where the model is stable in its prediction even though it should not be. Consequently, the requirements are not as difficult to fulfil when defining perturbation spaces that alter the question meaning, and one can rely on sets of entities and PoS examples automatically extracted from a large text collection. In contrast to prior attacks, we evaluate each perturbed input with a standard forward pass rather than using a first-order Taylor approximation to estimate the output change induced by a change in input. This is less efficient but exact, and furthermore does not require white-box access to the model and its parameters. Training and Dataset Details. We next conduct experiments using the attacks laid out above to investigate model undersensitivity. We attack the BERT model fine-tuned on SQuAD2.0 and measure to what extent the model exhibits undersensitivity when adversarially choosing input perturbations. Note that SQuAD2.0 by design contains unanswerable questions in both training and evaluation sets; models are thus trained to predict a NoAnswer option where a comprehension question cannot be answered. In a preliminary pilot experiment, we first train a BERT LARGE model on the full training set for 2 epochs, where it reaches 78.32% EM and 81.44% F1, in close range to previously reported results. We then, however, choose a different training setup, as we would like to conduct adversarial attacks on data entirely inaccessible during training: we split off 5% from the original training set for development purposes and retain the remaining 95% for training, stratified by articles. We use this development data to tune hyperparameters and perform early stopping, evaluated every 5000 steps with batch size 16 and patience 5, and will later tune hyperparameters for defence on it. The original SQuAD2.0 development set is then used as evaluation data, where the model reaches 73.0% EM and 76.5% F1; we will compute the undersensitivity attacks on this entirely held-out part of the dataset.
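A sketch of the iterated, beam-search variant is given below. It reuses the hypothetical qa_model and a perturb(question, n_samples) generator in the spirit of the previous sketch; beam_width, radius and eta correspond to the quantities b, ρ and η introduced in the attack details that follow, and the computation is bounded by b · ρ · η model evaluations per sample.

def beam_attack(context, question, qa_model, perturb,
                beam_width=5, radius=6, eta=256):
    answer, p_orig = qa_model(context, question)
    beam = [(0.0, question)]                   # (delta, candidate question)
    for _ in range(radius):
        scored = []
        for _, q in beam:
            for q_alt in perturb(q, n_samples=eta):
                a_alt, p_alt = qa_model(context, q_alt)
                if a_alt != answer:
                    continue                   # prediction changed: not an attack
                delta = p_alt - p_orig         # the difference being maximised
                if delta > 0:
                    return q_alt               # stop at the first success
                scored.append((delta, q_alt))
        if not scored:
            return None
        beam = sorted(scored, reverse=True)[:beam_width]
    return None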
Attack Details. To compute the perturbation spaces, we collect large sets of string expressions across named entity and PoS types to define the perturbation families T_E and T_P, which we gather from the Wikipedia paragraphs used in the SQuAD2.0 training set, with the pretrained taggers in spacy, and the Penn Treebank tag set for PoS. This results on average in 5126 different entities per entity type and 2337 different tokens per PoS tag. When computing PoS perturbations, we found it useful to disregard perturbations of particular PoS types that often led only to minor changes or incorrectly formed expressions, such as punctuation or determiners; more details on the left-out tags can be found in Appendix A. As the number of possible perturbations to consider is potentially very large, we limit beam search at each step to a maximum of η randomly chosen type-consistent entities from E, or tokens from P, and re-sample these throughout the search. We use a beam width of b = 5, resulting in a bound on the total computation spent on adversarial search of b · ρ · η model evaluations per sample, where ρ is the perturbation 'radius' (the maximum search depth). Metric: Adversarial Error Rate. We quantify adversarial vulnerability to the described attacks by measuring the proportion of evaluation samples for which at least one undersensitivity attack is found given a computational search budget, disregarding cases where a model predicts NoAnswer. For PoS perturbations, even small budgets (η = 32, ρ = 1) reach more than 60% attack success rates, and this number can be raised to 95% with a larger computational budget. For perturbations based on named entity substitution, we find overall lower attack success rates, but still find that more than half of the samples can successfully be attacked under the budgets tested. Note that where attacks were found, we observed that there often exist multiple alternatives with higher probability. These findings demonstrate that BERT is not necessarily specific about the entire contents of a comprehension question given to it, and that, even though trained to tell when questions are unanswerable, the model often fails when facing adversarially selected unanswerable questions. In a side experiment we investigated undersensitivity attacks using named entity perturbations on SQuAD1.1, which proves even more vulnerable, with an adversarial error rate of 70% already at a budget of η = 32, ρ = 1 (compared to 34% on SQuAD2.0). While this demonstrates that undersensitivity is also an issue for SQuAD1.1, the unanswerable-question behaviour is not really well-defined there, making results hard to interpret. On the other hand, the notable drop between the datasets demonstrates the effectiveness of the unanswerable questions added during training in SQuAD2.0. Qualitative Analysis of the Attacks. As pointed out before, the attacks are noisy, since the introduced substitutions are by no means guaranteed to result in meaningful and semantically consistent expressions, or to require a different answer than the original. To gauge the extent of this, we inspect 100 successful attacks conducted at ρ = 6 and η = 256 on SQuAD2.0, both for PoS perturbations and named entity perturbations. We label them as either: 1. Having a syntax error (e.g. What would platform lower if there were fewer people?); these are mostly due to cascading errors stemming from wrong named entity/PoS tag predictions. 2. Semantically incoherent (e.g. Who built the monks?). 3. Questions that require the same correct answer as the original, e.g. due to a paraphrase.
4. Valid attacks: questions that would either demand a different answer or are unanswerable given the text (e.g. When did the United States withdraw from the Bretton Woods Accord? and its perturbed version When did Tuvalu withdraw from the Bretton Woods Accord?). Table 1 shows several example attacks along with their annotations (see also Table 2). Even when faced with incoherent questions, such as the perturbed version What scorer did the case go before the supreme court?, it is remarkable that the model assigns higher probabilities to the original answer, casting doubt on the extent of question information being used to determine the answer. Since the named entity-based attacks have a substantially larger fraction of valid, well-posed alternative questions, we will focus our study on these attacks for the remainder of this paper. We found that models are vulnerable to undersensitivity adversaries, yet not all samples were successfully attacked. This raises the question of what distinguishes samples that can and cannot be attacked. We investigate various characteristics, aiming to understand the causes of model vulnerability. Questions that can be attacked produce lower original prediction probabilities, with an average of 72.9% compared to 83.8% for unattackable questions. That is, there exists a direct inverse link between a model's original prediction probability and a sample's vulnerability to an undersensitivity attack. The adversarially chosen questions had an average probability of 78.2%, i.e. a notable gap to the original questions; it is worth noting that the search halted once a single question with higher probability was found. Vulnerable samples are also less likely to be given the correct prediction overall. Concretely, evaluation metrics for vulnerable examples are 56.4%/69.6% EM/F1, compared to 73.0%/76.5% on the whole dataset. Attackable questions have on average 12.3 tokens, whereas unattackable ones are slightly shorter, with on average 11.1 tokens. We considered the distribution of different question types (What, Who, When, ...) for both attackable and unattackable samples and did not observe notable differences, apart from the single most frequent question type, What: it is a lot more prevalent among the unattacked questions (56.4%) than among successfully attacked questions (42.1%). This is by far the most common question type, and furthermore one that is comparatively open-ended and does not prescribe particular type expectations for its answer, as e.g. a where question would require a location. A possible explanation for the prevalence of What questions among defended samples is thus that the model cannot rely on type constraints alone to arrive at its predictions, and is consequently less prone to such exploitation. Section 6.2 will address this in more detail. Fig. 3 shows a histogram of the 10 most common named entity tags appearing in unattackable samples versus the corresponding fraction of replaced entities in successfully attacked samples. With one exception, the distributions are remarkably similar. Undersensitivity can be induced for a variety of entity types used in the perturbation, but questions with geopolitical entities (GPE) are particularly error-prone. A possible explanation lies in observations of (non-contextualised) word embeddings clustering geopolitical entities (e.g. countries) close to one another, making them potentially hard to distinguish for a model operating on these embeddings. We will now investigate methods for mitigating excessive model undersensitivity.
Prior work has considered both data augmentation and adversarial training for building more robust models, and we will conduct experiments with both. Adding a robustness objective can negatively impact standard test metrics, and it should be noted that there exists a natural trade-off between performance on one particular test set and performance on a dataset of adversarial inputs. We perform data augmentation and adversarial training by adding a corresponding loss term to the standard log-likelihood training objective: L(θ) = − Σ_{(x,y) ∈ Ω} log p_θ(y | x) − λ Σ_{x′ ∈ Ω′} log p_θ(NULL | x′), where Ω is the standard training data, fit with a discriminative log-likelihood objective, Ω′ is either a set of augmentation data points or of successful adversarial attacks where they exist, and λ > 0 is a hyperparameter. In data augmentation, we randomly sample perturbed input questions, whereas in adversarial training we perform an adversarial search to identify them (η = 32, ρ = 1). In both cases, alternative data points in Ω′ are fit to a NULL label representing the NoAnswer prediction, which is also fit with a log-likelihood objective. Note that we continuously update Ω′ throughout training to reflect adversarial samples w.r.t. the current model.
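The following is a PyTorch-style sketch of this augmented objective. The model, batch and perturbed_batch interfaces are illustrative assumptions, not the authors' code, and the NULL label is assumed to sit at index 0 of the output positions.

import torch
import torch.nn.functional as F

def training_loss(model, batch, perturbed_batch, lam=0.25):
    # Standard term: discriminative log-likelihood of the gold answers
    # on the original training data (Omega).
    logits = model(batch.inputs)                       # [B, num_answer_positions]
    loss_orig = F.cross_entropy(logits, batch.answer_positions)
    # Defence term: perturbed questions (Omega') are fit to the NULL
    # position, representing the NoAnswer prediction.
    adv_logits = model(perturbed_batch.inputs)
    null_idx = torch.zeros(adv_logits.size(0), dtype=torch.long)  # NULL assumed at index 0
    loss_adv = F.cross_entropy(adv_logits, null_idx)
    return loss_orig + lam * loss_adv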
Experimental Setup: SQuAD2.0. We train the BERT LARGE model on SQuAD2.0, tuning the hyperparameter λ ∈ {0.0, 0.01, 0.1, 0.25, 0.5, 0.75, 1.0, 2.0}, and find λ = 0.25 to work best for either of the two defence strategies. We tune the threshold for predicting NoAnswer based on validation data and report results on the test set (the original SQuAD2.0 dev set). All experiments are run with batch size 16, named entity perturbations for the defence methods, and a relatively cheap adversarial attack with budget η = 32 and ρ = 1. Where no attack is found for a given question, we redraw standard samples from the original training data. We evaluate the model on its validation data every 5000 steps (batch size 16) and perform early stopping with a patience of 5. Experimental Setup: NewsQA. Following the experimental protocol for SQuAD, we further test a BERT BASE model on NewsQA, which, like SQuAD2.0, contains unanswerable questions. As annotators often do not fully agree on their annotation in NewsQA, we opt for a conservative choice and filter the dataset such that only samples with the same majority annotation are retained, following an established preprocessing pipeline. Experimental Outcomes. Results for these experiments can be found in Table 3 and Table 4 for the two datasets, respectively (Table 4: breakdown of undersensitivity error rate overall, lower is better, and standard performance metrics EM and F1, higher is better, on different subsets of NewsQA evaluation data, all in [%]). First, we observe that both data augmentation and adversarial training substantially reduce the number of undersensitivity errors the model commits, consistently across adversarial search budgets and across the two datasets. This demonstrates that both training methods are effective defences and can mitigate, but not eliminate, the model's undersensitivity problem. Notably, the improved robustness, especially for data augmentation, is possible without sacrificing performance in the overall standard metrics EM and F1; even slight improvements are possible. Second, data augmentation is a more effective defence training strategy than adversarial training. This holds true both in terms of standard and adversarial metrics, and potentially hints at some adversarial overfitting on the training set. Finally, a closer inspection of how performance changes on answerable (HasAns) vs. unanswerable (NoAns) samples of the datasets reveals that models with modified training objectives show improved performance on unanswerable samples, while sacrificing some performance on answerable samples. This suggests that the trained models, even though similar in standard metrics, evolve along different paths during training, and the modified objectives prioritise fitting unanswerable questions to a higher degree. Results in Tables 3 and 4 are computed using the same perturbations at training and evaluation time. The perturbation space is relatively large, and questions are about a disjoint set of articles at evaluation time. Nevertheless, there is the potential of overfitting to the particular perturbations used during training. To measure the extent to which the defences generalise to new, held-out sets of perturbations, we assembled a new, disjoint perturbation space of identical size per NE tag as those used during training, and evaluated models on attacks with respect to these perturbations. Named entities are chosen from English Wikipedia using the same method as for the training perturbation spaces, and chosen such that they are disjoint from the training perturbation space. We then ran adversarial attacks using these new attack spaces on the previously trained models, and found that both the vulnerability rates of the standard model and the relative defence success transfer to the new attack spaces. For example, with η = 256 we observed vulnerability ratios of 51.7%, 20.7%, and 23.8% on SQuAD2 for standard training, data augmentation, and adversarial training, respectively. Detailed results for different values of η, as well as for NewsQA, can be found in Appendix B. Datasets for high-level NLP tasks often come with annotation and selection biases; models then learn to exploit shortcut triggers which are dataset- but not task-specific. For example, a model might be confronted with question/paragraph pairs which only ever contain one type-consistent answer span, e.g. mention one number in a text with a 'How many...?' question. It is then sufficient to learn to pick out numbers from text to solve the task, irrespective of other information given in the question. Such a model might then have trouble generalising to articles that mention several numbers, as it never learned that it is necessary to take into account other relevant question information that helps determine the correct answer. We test models trained in such a scenario: a model is trained on SQuAD1.1 questions with paragraphs containing only a single type-consistent answer expression for either a person, date, or numerical answer. At test time, we present it with question/article pairs of the same respective question types, but now there are multiple possible type-consistent answers in the paragraph. We obtain such data from the authors who first described this biased data scenario. We use the same training data, but split the test set with a 40/60% split (approximate, as we stratify by article) into development and test data. We then test both a vanilla fine-tuned BERT BASE transformer model and a model trained to be less vulnerable to undersensitivity attacks using data augmentation. Finally, we perform a control experiment where we join and shuffle all data points from train/dev/test (of each question type, respectively), and split the dataset into new parts of the same size as before, which now follow the same data distribution (the w/o-data-bias setting). Table 5 shows the results.
In this biased data scenario we observe a marked improvement across metrics and answer type categories when a model is trained with unanswerable samples. This demonstrates that the negative training signal stemming from related, but unanswerable, questions counterbalances the signal from answerable questions in such a way that the model learns to better take into account the relevant information in the question, which allows it to correctly distinguish among several type-consistent answer possibilities in the text; the standard BERT BASE model does not learn this well. We next evaluated BERT LARGE and BERT LARGE + Augmentation Training on ADDSENT and ADDONESENT, which contain adversarially composed samples. Our results, summarised in Table 10 in the Appendix, show that BERT LARGE with robust training improves both EM and F1 on both datasets, boosting F1 by 3.7 and 1.6 points on the two datasets, respectively. We also trained a RoBERTa model on SQuAD2 and conducted undersensitivity attacks (ρ = 6, η = 256). Attack rates are lower for RoBERTa (34.5%), and when considering only samples where RoBERTa was found vulnerable, BERT has a vulnerability rate of 90.7%. Concrete adversarial inputs chosen for RoBERTa transfer to BERT for 17.5% of samples. We have investigated a problematic behaviour of RC models: being overly stable in their predictions when given semantically altered questions. This undersensitivity can be drastically reduced with appropriate defences, such as adversarial training, resulting in more robust models without sacrificing standard performance. Future work should study in more detail the causes of model undersensitivity and better defences against it, which we believe provides an alternative viewpoint on evaluating a model's RC capabilities. A APPENDIX: POS PERTURBATION DETAILS. We exclude a number of PoS tags when computing perturbations. B APPENDIX: HELD-OUT PERTURBATION SPACES. Vulnerability results for new, held-out perturbation spaces, disjoint from those used during training, can be found in Table 6 for SQuAD2 and in Table 7 for NewsQA (Table 7: breakdown of undersensitivity error rate on NewsQA with a held-out attack space; lower is better). C APPENDIX: SEARCHING AMONG EXISTING QUESTIONS. Searching in a large collection of (mostly unrelated) natural language questions, e.g. among all questions in the SQuAD training set, yields several cases where the prediction probability of the model increases compared to the original question; see Table 8 for one example. Such cases are however rare, and we found the yield of this type of search to be very low. D APPENDIX: ATTACK EXAMPLES. Table 11 shows more examples of successful adversarial attacks for NER perturbations on SQuAD2.0. E APPENDIX: BIASED DATA SETUP. For completeness and direct comparability, we also include an experiment with the same data setup as the work that introduced this scenario (not holding aside a dedicated validation set). Results can be found in Table 9. We again observe improvements in the biased data setting, and the robust model outperforms GQA in two of the three subtasks. F APPENDIX: VULNERABILITY ANALYSIS ON NEWSQA. Fig. 4 depicts the vulnerability of a BERT LARGE model on NewsQA under attacks using named entity perturbations.
We demonstrate vulnerability to undersensitivity attacks in SQuAD2.0 and NewsQA neural reading comprehension models, where the model predicts the same answer with increased confidence to adversarially chosen questions, and compare defence strategies.
This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent, requires no pretraining, and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors, and it can represent even words unseen in the training dataset. Similarly, as it is compact yet sparse, it is ideal for speeding up training using tensor processing libraries. As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at the character level. We apply two CNN variants coupled with it. Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training on commodity hardware. Document classification is one of the principal tasks addressed in the context of natural language processing BID22. It implies associating a document (or any text fragment, for that matter) with a category or label based on its content. The increasing availability of texts in digital form, especially through the Internet, has called for the development of statistical and artificial intelligence tools for automating this process. Spam detectors, sentiment analysis, news archiving, among many others, demand high-quality text classifiers. There is a broad range of approaches to document classification (see BID22 BID1 BID11 BID15). An important portion of them relies on a representation that handles words as the atomic element of text. Consequently, those methods carry out their analysis through statistics of word occurrence. However, the variability of words and structures belonging to a language hinders the viability of this method. That is why these models have superior performance in specific domains and applications, where the vocabulary is, or can be, restricted to a relatively small number of words, possibly chosen by a specialist. Furthermore, such modeling becomes specific to a language, causing the replication process in another language to be carried out from scratch. In recent years, we have experienced a revolution in machine learning with the advent of deep learning methods BID8. The development of convolutional neural networks (CNNs) BID17, coupled with the popularization of parallel computing libraries (e.g. Theano BID2, Tensorflow BID0, Keras BID7, etc.) that simplify general-purpose computing on graphics processing units (GPGPU) BID21, has been successful in tackling the image classification problem BID16, quickly becoming the state of the art of the field. As could be expected, the success of deep learning and CNNs in the image classification domain has prompted interest in extending the deep learning principles to the document classification domain. Some existing methods have been updated, but the clear majority are still based on the tokenization of words and the inference of their statistics. Bag of Words (BoW) BID13 and Word2vec BID20 are some of the most popular strategies. It can be argued that the replication of the image classification success in the document domain faces as its main challenge the difficulty of representing text as numerical tensors.
To address this issue, a groundbreaking approach was suggested that considers characters as the atomic elements of a text. In particular, it represents the text as a sequence of one-hot encoded characters. This encoding provides a robust, language-independent representation of texts as matrices, which are then used as inputs of different CNNs. The experimental results showed that this approach was able to attain and, in some cases, improve the state of the art in complex text classification problems. More recently, BID25 improved those results by combining CNNs with Long Short-Term Memories (LSTMs) BID10. In spite of that, the impact of this idea is hampered by the large computational demands of the approach, since its training can take days per epoch in relatively complex problems. Character-level representations have the potential of being more robust than word-level ones. On the other hand, they are computationally more expensive, because detecting syntactic and semantic relationships at the character level is more costly. One possible solution could be a word representation that incorporates character-level information. In this paper, we propose an efficient character-level encoding of words to represent texts, derived from the Tagged Huffman BID23 information compression technique. This encoding takes into account the character appearance frequency in the texts in order to assign shorter codes to the most frequently used characters. This novel text encoding makes the character-level idea more computationally accessible by reducing its training requirements in terms of time and memory. The proposed encoding makes it possible to represent larger portions of text in a less sparse form, without any loss of information, while preserving the ability to encode any word, even those not present in the training dataset. In order to study the impact of this encoding, we coupled it with two CNN architectures. The experimental studies performed showed that we managed to achieve a performance similar to, or in some cases better than, the state of the art at a fraction of the training time, even though we employed a simpler hardware setup. Our main contribution is to show that this novel character-level text encoding produces a reduced input matrix, leading to a substantial reduction in training times while producing comparable or better results in terms of accuracy than the original approach. This opens the door to more complex applications, the use of devices with lower computational power, and the exploration of other approaches that can be coupled with this input representation. The rest of the paper is structured as follows. In the next section, we deal with the theoretical foundations and motivation required for the ensuing discussions. There we also analyze the alternatives to character-level text compression that were taken into account for producing our proposal. After that, in Section 3, we describe the encoding procedure and the neural network architectures that take part in the experiments. Subsequently, in Section 4, we replicate the experiments of the original character-level study in order to contrast our proposal with theirs under comparable conditions. Finally, in Section 5, we provide some final remarks and conclusive comments, and outline our future work directions. The success in using Convolutional Neural Networks (CNNs) BID17 in image classification BID16 flourished with the development of many libraries BID6 BID7 BID2, techniques and hardware.
The effort to use CNNs for text classification tasks is justified by the possibility of appropriating these tools for obtaining better and more robust algorithms, facilitating the use of the same approach for several applications. There are two usual approaches to using CNNs for handling textual information: (i) bag of words (BoW) BID13 and (ii) Word2vec BID20. In the case of BoW and some of its variants, to represent each word in a vocabulary of size N, a digit 1 is placed in the corresponding position of that word in a 1 × N vector, all other positions remaining 0. Since natural languages usually have a large vocabulary, a limited subset of the vocabulary must be used in order to make the necessary computations viable in terms of memory requirements. The chosen subset of the vocabulary must be representative of the texts; therefore, in practical problems, a great deal of attention is devoted to this matter. In particular, it is common to involve an application domain specialist or use some kind of word frequency statistic or relevance metric, where the most frequent and rare words are excluded. In the word2vec approach, each word is projected via a metric embedding of fixed size, representing its co-occurrence in a large corpus of text in the same language as the texts of interest. It is possible to use pretrained vectors or readjust the representation with new words. The main problem with both strategies is that they do not allow representing words that are not in the training dataset. Typos, spelling errors, mixed-up letters and text written in languages with a complex structure (declensions, conjugations, etc.) are completely ignored. Establishing the character as the basic unit of text formation provides a better opportunity to be robust to typos, to accept neologisms, and to handle other textual forms such as equations and chemical formulas, abbreviations and idiosyncrasies of written language on the internet, such as emoticons, slang and dialects of the online world. Assuming the word as the base item, much of this ability is lost, especially when models assume that text producers use a formal language. An important innovation in this regard was the representation of text as a sequence of characters, not words. This reduces the vocabulary of symbols to the size of the alphabet (69 in the case of that paper), thus allowing the use of one-hot encoding BID9. In that work, a text is represented as a matrix of size 1014 × 69, where each row corresponds to a position in the text sequence and each column to a given character; a row with a 1 in a given column therefore indicates the presence of the corresponding character at that point of the text. With this representation on hand, a CNN was applied, obtaining results competitive with other techniques, and in some cases improving the state of the art. However, the main drawback is a large computational requirement, which in some cases called for days per training epoch. The results obtained suggest that language, and therefore text, can be treated as a sequence of signals, like any other. However, the training times and the dimension of the matrices to be computed are still obstacles to the most effective use of the method. That is why a better encoding of text could be the right path towards a substantial improvement on this issue.
Searching for a way to reduce these training times while retaining the flexibility and power of character-level convolutional networks for classifying text, we found a way to better encode texts. Our approach, with competitive accuracy, achieves a significant reduction in execution time per epoch: from hours to minutes and from days to hours. To achieve this performance, our approach consists of two elements: • Obtaining a shorter representation: at first, we thought of using some per-character encoding within the same framework, but we realized that using a variable-length encoding for each word could be more efficient. To do this, we needed a way to encode each character that generates a distinct concatenated code for each word, and in which words with the same prefix lie near each other, especially so as to respond well to declensions. • Obtaining a sparse representation: to take full advantage of tensor multiplication libraries like NVidia CuDNN BID6, we needed a representation that was sparse, so our code should be composed of many 0s and a few 1s. Although Huffman encoding BID12 yields the shortest possible codes, it does not generate unique representations once we concatenate encoded characters to form a word. We investigated the encodings surveyed in BID5 and found promising alternatives. Our approach is based on the Tagged Huffman encoding (Silva de Moura BID23), where a pair of '0' digits is the signal and the digit '1' is the tag marking the beginning and end of a code. The only difference is that our approach needs a shorter version to reduce the size of the input matrix, so we chose to use only one '0' digit instead of two for each character, marking the beginning and the end of each character code with a '1' digit in the same way. As in the approach by Silva de Moura et al., the coding we employ has the following advantages BID5: 1. No character code is a prefix of another, that is, the match is one-to-one. 2. It allows direct search: to search for a word in an encoded document, just encode the word and use traditional string comparison methods on the encoded document. 3. This encoding approach is a compression technique, so it also allows saving already encoded text documents permanently using a binary system, requiring less storage space. These advantages become especially attractive if the goal is to extract knowledge from files in a repository in order to perform various classifications along different dimensions with the same files. A possible advantage of this encoding over other strategies that use the word as an atomic representation of text is that it responds better to words unseen in the training data, since the network at least has some prefix with which to guess the meaning of the word, the same way we humans do. This is especially interesting in languages with many declensions, such as Portuguese, Spanish, Italian, French, Russian, German, Arabic and Turkish. The coding procedure is not restricted to any vocabulary size; the only problem is that less frequent characters will generate longer codes and, consequently, a bigger encoding matrix. If your database has a lot of them, you can use a larger word code size. Our main contribution is to demonstrate that such an approach reduces the dimensionality of the encoded matrix, substantially reducing training time and allowing the use of devices with lower computational power, while remaining competitive in accuracy.
In all of our experiments, we used the following procedure to encode words: • Obtaining a character frequency rank: we read the text database and count the frequency of each character, generating a list sorted by frequency of occurrence. Then we create a rank with only the relative positions of the characters. For a given language, this rank is quite stable, since only the order of the rank is used. This means that, if all documents are in the same idiom, this procedure can be replaced by a list with the character frequency rank for that language. • Creating a mapping from character to compressed code: to encode each character, we insert as many 0 digits as the rank position of the character, between a 1 digit signalling the beginning and another 1 digit signalling the end of the code. TAB0 shows some examples of encoded characters. To encode each word, we just concatenate the codes of its characters. As an example, we provide in TAB1 some plain-text words and their corresponding encodings. Given a document, we consider that it is composed of words, these being any sets of characters delimited by the space character. This means a 'word' could be a math equation, a web address, LaTeX code, computer programming commands, etc. In TAB1 we can see that this encoding represents words with the same prefix as vectors that share the same initial coordinates. Slang such as tl;dr (too long, did not read) and u2 (you too), as well as mathematical expressions like e^a, can be handled. In a matrix of number of words × code size representing the document, each row represents a properly encoded word, where the code is embedded with its first symbol in the first column. Columns that are not occupied are padded with 0, and codes longer than the limit are represented only up to the chosen limit. Unoccupied rows are filled with 0, and larger documents are represented only up to the chosen maximum number of words, ignoring the remaining ones. As an example, we represent a document in an 8 × 65 matrix in Figure 1. In the example in Figure 1 we used an 8 × 65 matrix (520 elements) to encode the text with a certain amount of slack; at the very least, we would need 7 × 64 (448 elements). In contrast, the character-level one-hot approach would employ at least 32 × 69 elements to represent the same sentence. In our experiments, 256 coordinates were enough to represent 99.5% of the words in one of the databases studied. In all datasets studied in this paper, we chose 128 as the limit of words to represent a document, encoding each text as a 128 × 256 matrix.
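The following self-contained Python sketch implements this procedure as described: characters are ranked by corpus frequency, the rank-r character is coded as '1' + r zeros + '1', word codes are concatenations of character codes, and a document is packed into a fixed max_words × code_size binary matrix with zero padding and truncation. Function names are ours, not the paper's.

from collections import Counter
import numpy as np

def build_char_codes(corpus):
    # Rank characters by frequency; rank position r (starting at 1)
    # yields the code '1' + '0' * r + '1'.
    freq = Counter(ch for text in corpus for ch in text if ch != " ")
    ranked = [ch for ch, _ in freq.most_common()]
    return {ch: "1" + "0" * (r + 1) + "1" for r, ch in enumerate(ranked)}

def encode_document(text, codes, max_words=128, code_size=256):
    mat = np.zeros((max_words, code_size), dtype=np.uint8)
    for i, word in enumerate(text.lower().split()[:max_words]):
        # Concatenate character codes; truncate codes longer than the limit.
        bits = "".join(codes.get(ch, "") for ch in word)[:code_size]
        mat[i, :len(bits)] = [int(b) for b in bits]
    return mat

corpus = ["some example texts used to rank character frequencies"]
codes = build_char_codes(corpus)
doc_matrix = encode_document("some example", codes)   # shape (128, 256)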
As mentioned before, this work was prompted by the results of the character-level one-hot approach. In that original approach, each character is encoded using one-hot encoding over a vocabulary of 69 elements; the non-space characters are letters, numbers and punctuation. The model is composed of 9 layers, 6 convolutional and 3 fully connected, with the architecture described in TAB2. They used stochastic gradient descent (SGD) with a minibatch of size 128, momentum 0.9 and an initial step size of 0.01, halved every 3 epochs, 10 times. Their results were therefore obtained in at least 30 epochs. To verify the efficiency of the encoding procedure, we carried out three experiments, named CNN1, CNN2 and LSTM. First, we chose a network architecture that we usually use to classify text with an embedding created by word2vec BID20; the only difference is that, instead of 300 features, we reduce the input size to 256. We named this architecture CNN1. It is based on the concatenation of convolutions in a shallow way, inspired by the work of BID14, which achieved state-of-the-art results for some databases. We trained this model for 5 epochs. The neural network architecture CNN1 is summarized in Figure 2a as a diagram. Prompted by the positive outcome of CNN1, we decided to investigate other possible architectures. We created another shallow but wide convolutional architecture following the recommendations of BID29 for choosing parameters, training on the AG's news dataset, it being the smallest of our test datasets. This architecture is composed of: • Convolution filter width: a combination of region sizes near the optimal single best region size outperforms using multiple region sizes far from the optimal single region size BID29. We scanned widths from 1 to 7, comparing accuracy; in these evaluations, a convolution of width 1 was the better option. • Pooling size: max pooling consistently performs better than alternative strategies for the task of sentence classification BID29. The neural network architecture CNN2 is summarized in Figure 2b as a diagram. To illustrate the possibilities of applying the proposed encoding, we also ran an experiment using LSTMs BID10, similar to a word-based LSTM model from the literature; the difference is that, instead of using a word2vec embedding BID20, we used our representation. The architecture is very simple: an input layer of 128 × 256, followed by an LSTM layer of 300 units, a Dropout layer of 0.10 BID24, a fully connected layer of 128 units and a softmax layer. We trained this model for 5 epochs. This architecture is, in general, twice as slow as CNN1. The neural network architecture LSTM is summarized in Figure 2c as a diagram.
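Since CNN1 is described above only at the level of a diagram, the following Keras sketch is an illustrative reconstruction of a shallow architecture with concatenated parallel convolutions over the 128 × 256 encoded document matrix, in the spirit of BID14. The filter widths, filter counts and dense layer sizes are our guesses, not the exact architecture used in the experiments.

from tensorflow.keras import layers, models

def build_cnn1(num_classes=4, max_words=128, code_size=256):
    inp = layers.Input(shape=(max_words, code_size))
    branches = []
    for width in (3, 4, 5):                      # parallel region sizes (assumed)
        x = layers.Conv1D(128, width, activation="relu")(inp)
        x = layers.GlobalMaxPooling1D()(x)       # max pooling per branch
        branches.append(x)
    x = layers.Concatenate()(branches)           # shallow concatenation of branches
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn1(num_classes=4)   # e.g. for the AG's news dataset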
Five classes representing the number of stars a user has given. The dataset contains 3,000k train samples and 650k test samples.

• Amazon polarity: sentiment analysis from the Amazon reviews dataset from the Stanford Network Analysis Project (SNAP) BID19. Two classes: ratings of 1 and 2 stars are represented as Bad, and 4 and 5 as Good. The dataset contains 3,600k train samples and 400k test samples, equally distributed.

The baseline comparison models are the same as in the character-level CNN article, where there is an extensive description of them; we simply reproduce their results. The only difference is that they report error, which we translated to accuracy for better comprehension. In this paper, we just summarize the main information:

• Bag of Words (BoW) and its term-frequency inverse-document-frequency version (BoW TFIDF): For each dataset, they selected the 50,000 most frequent words from the training subset. For the normal bag-of-words, they used the counts of each word as the features, and for the TFIDF they used the counts as the term frequency.

• Bag-of-ngrams (Ngrams) and its TFIDF version (Ngrams TFIDF): The bag-of-ngrams models were constructed by selecting the 500,000 most frequent n-grams (up to 5-grams) from the training subset of each dataset. The feature values were computed the same way as in the bag-of-words model.

• Bag-of-means on word embedding: an experimental model that applies k-means to word2vec embeddings BID20 learned from the training subset of each dataset, and then uses these learned means as representatives of the clustered words. It takes into consideration all the words that appeared more than 5 times in the training subset. The dimension of the embedding is 300. The bag-of-means features are computed the same way as in the bag-of-words model. The number of means is 5000.

• Long Short Term Memory (LSTM): The LSTM BID10 model used is word-based, using a pretrained word2vec BID20 embedding of size 300. The model is formed by taking the mean of the outputs of all LSTM cells to form a feature vector, and then applying multinomial logistic regression to this feature vector. The output dimension is 512.

For all the experiments, we used the environment and parameter settings listed in TAB3. Besides the encoding procedure, we do not use any preprocessing strategy except lowercasing letters. No data enhancement technique was employed. All the results and the comparison with the traditional models and the character-level approaches are shown in TAB4. As tabular data are hard to grasp, we decided to go for a graphical representation of the results. In particular, from the previous results, we selected the large and small architectures of the character-level CNN and our CNN1, CNN2 and LSTM. Those accuracy values were then scaled to the [0, 1] interval for every dataset, with 0 being the worst and 1 the best performance among all models, including the traditional ones. The outcome of this process is represented in Figure 3. In TAB5, we show a running-time comparison based on the reports available in the original article. The main objective of this research was to evaluate the possibility of using a coding approach that contemplates the construction of words using characters as basic tokens. Our main contribution is to demonstrate that such an approach allows reducing the dimensionality of the encoding matrix, thus allowing substantially shorter optimization times and the use of devices with lower computational power. Some text datasets have peculiarities that are not addressed by word-frequency methods (e.g., BoW and word2vec), like declensions and new vocabulary.
The character-level CNN article was a great innovation in this regard. However, the training times are still an obstacle to the most effective use of the technique. We reasoned that a better representation should be the solution. As a comparison, training the CNN1 architecture with an output of 4 classes makes it necessary to optimize 1,837,188 parameters, whereas in the architecture suggested by the character-level CNN work it is necessary to optimize 11,337,988 parameters.

Figure 3: Performance re-scaled to the range between the best and worst model, comparing the baseline and our approaches when applied to the AG's news (AG), Sogou news (SOGOU), DBpedia (DBP), Yelp polarity (YLP-P), Yelp full (YLP), Yahoo! answers (YAH), Amazon full (AMZ) and Amazon polarity (AMZ-P) datasets.

As one of our concerns was to make our proposal as usable as possible on commodity hardware, we focused our studies on that hardware configuration. The major bottleneck for analyzing this large amount of matrix-encoded text is the need for intensive use of RAM. Our approach generates a 128 × 256 matrix, smaller than the 1014 × 69 matrix generated by the baseline. In spite of that, a large set of them quickly occupies the available RAM on a 'regular' personal computer. On the machine we used, there were 16 GB available, which is not uncommon in modern personal computers. Therefore, the use of generators to control the number of matrices generated and sent to the GPU is an important detail in the implementation of this optimization algorithm. If your computer has only 8 GB of RAM or less, it will be necessary to reduce the number of superbatches to fit the memory. The obtained results strongly indicate that the use of this coding is a real possibility. We emphasize that what we used is a fairly simple network, enough to demonstrate the feasibility of the encoding approach with the time and computational resources that we had. The results are very competitive with the character-level approach and traditional techniques. We even found parameters that achieve excellent performance on the AG's news dataset, following the suggestions of BID29. One advantage of a faster algorithm is that, if your dataset is not too big, you can scan the feature width to find a solution that optimizes accuracy. Another advantage is the possibility of performing k-fold validation, to get a better perspective on how well it will perform on your specific dataset in real life. Another interesting point is that, using an LSTM layer with the proposed encoding, we achieved results similar to, and sometimes better than, using a word2vec embedding BID20 as proposed by the baseline. Our approach by its own nature takes into account morphological aspects of the text, while word2vec uses pre-trained vectors representing co-occurrence of words in a big corpus of text. Being able to take into account character information even in recurrent networks, we show that this representation is not limited to the domain of CNNs, or of neural networks for that matter. Although our LSTM architecture is twice as slow as our CNN1 and CNN2 topologies, it consistently outperforms them. This indicates that the temporal dependence among words is important, so other architectures can potentially generate better results by taking this information into account, and this is a direction that should be explored. In addition to that, the dimensionality reduction achieved by our encoding enables several other architectures and methods to be verified in a reasonable timeframe. We are certain that our algorithm implementation could be even faster.
For instance, when using a Geforce 1080ti GPU and the CNN1 architecture, each superbatch of 10,000 arrays has its weights updated in 30 seconds: only 6 seconds are consumed by the GPU, while the other 24 seconds are spent encoding all the matrices and delivering them to the GPU. Using a multithreaded strategy could help in this regard. In this paper, we have proposed an efficient character-level encoding for text derived from the Tagged Huffman BID23 information compression method and applied it as an input preprocessing step for character-level CNN text classification. We have shown that using this compression technique is a convenient possibility for representing text in a convolutional deep learning setup for text classification. This is particularly important as encoding text using characters can be relevant when dealing with less curated text datasets, since it is robust to spelling errors, typos, slang, language variations, and other usual characteristics of Internet texts. This novel text encoding allows representing larger portions of text in a compact form while preserving the ability to encode any word, even those not present in the training dataset. Furthermore, for being compact yet sparse, this approach dramatically reduces the time required for training CNNs and, therefore, makes the application of this novel approach more accessible. This opens the door to more complex applications, the use of devices with lower computational power, and the exploration of other approaches that can be coupled with this input representation. The experimental studies carried out coupled the encoding with two convolutional neural network architectures and a recurrent LSTM architecture. These experiments showed that we managed to achieve performance similar to that of the original character-level CNN in a fraction of the training time, even though we employed a simpler hardware setup. Furthermore, with our results, we endorse the view that language can be treated as a signal, no different from any other. It should be noted that the main objective of the paper was to show the viability of the text encoding, not to produce better results per se. Consequently, we have focused our efforts on the comparative study. Probably, custom neural network architectures should be devised for this new encoding with that purpose. Our results indicate that combining it with LSTMs is a promising direction in order to overcome the fixed matrix size limitation. In the near future, we will focus on devising these new architectures, which may further improve the results. This study also opens a door for other information-theoretic methods of information representation to be used to create numerical representations of texts. It must be highlighted that this compact numeric representation of text is not limited to the domain of CNNs, or of neural networks for that matter. It could be interesting to assess its impact as a preprocessing step, perhaps with a minor modification, for other classification algorithms.
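As a starting point for such explorations, here is a minimal Keras sketch of the LSTM experiment described earlier; the layer sizes follow the text, while the optimizer, loss and class count are illustrative assumptions rather than the authors' exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm_classifier(num_classes, max_words=128, code_size=256):
    # One encoded document matrix per sample: (words x code bits).
    inputs = keras.Input(shape=(max_words, code_size))
    x = layers.LSTM(300)(inputs)                  # recurrent layer over word codes
    x = layers.Dropout(0.10)(x)                   # dropout rate from the text
    x = layers.Dense(128, activation='relu')(x)   # fully connected layer of 128 units
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_lstm_classifier(num_classes=4)      # e.g., AG's news has four classes
```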
Using compression techniques to encode words is a possibility for faster training of CNNs and dimensionality reduction of the representation
1,485
scitldr
Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. Reinforcement learning, such as policy gradient, addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters. The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design. The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. Experiments on machine translation, text summarization, and game imitation learning demonstrate the superiority of the proposed approach. The sequence prediction problem is ubiquitous in many applications, such as generating a sequence of words for machine translation, text summarization, and image captioning, or taking a sequence of actions to complete a task. In these problems, we are often given a set of sequence examples, from which we want to learn a model that sequentially makes the next prediction (e.g., generating the next token) given the current state (e.g., the previous tokens). A standard training algorithm is based on supervised learning, which seeks to maximize the log-likelihood of example sequences (i.e., maximum likelihood estimation, MLE). Despite its computational simplicity and efficiency, MLE training can suffer from compounding error, in that mistakes at test time accumulate along the way and lead to states far from the training data. Another line of approaches overcomes the training/test discrepancy issue by resorting to reinforcement learning (RL) techniques. For example, prior work used policy gradient to train a text generation model with the task metric (e.g., BLEU) as reward. However, RL-based approaches can face challenges of prohibitively poor sample efficiency and high variance. To this end, a diverse set of methods has been developed that lies in a middle ground between the two paradigms of MLE and RL. For example, RAML adds reward-aware perturbation to the MLE data examples; SPG leverages the reward distribution for effective sampling in policy gradient. Other approaches such as data noising also show improved results. In this paper, we establish a unifying perspective of the above distinct learning algorithms. Specifically, we present a generalized entropy regularized policy optimization framework, and show that the diverse algorithms, such as MLE, RAML, data noising, and SPG, can all be re-formulated as special cases of the framework, with the only difference being the choice of reward and the values of two weight hyperparameters (Figure 1). In particular, we show MLE is equivalent to using a Delta-function reward which returns 1 to model samples that match training examples exactly, and −∞ to any other samples. Such an extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding brittle prediction behaviors.
Other algorithms essentially use various locally-relaxed rewards, jointly with the model distribution, for broader (and more costly) exploration during training. Besides the new views of the existing algorithms, the unifying perspective also leads to new algorithms for improved learning. We develop an interpolation algorithm, which, as training proceeds, gradually expands the exploration space by annealing both the reward function and the weight hyperparameters. The annealing in effect dynamically interpolates among the existing algorithms from left to right in Figure 1. We conduct experiments on the tasks of text generation, including machine translation and text summarization, and game imitation learning. The interpolation algorithm shows superior performance over various previous methods. Given a set of data examples, sequence prediction models are usually trained to maximize the log-likelihood of the next label (token, action) conditioning on the current state observed in the data. Reinforcement learning (RL) addresses the discrepancy between training and test by also using models' own predictions at training time. Various RL approaches have been applied for sequence generation, such as policy gradient and actor-critic. Reward augmented maximum likelihood (RAML) is an algorithm in between MLE and policy gradient. Mathematically, RAML shows that MLE and maximum-entropy policy gradient are respectively minimizing KL divergences in opposite directions; later work thus proposes to use the more general α-divergence as a combination of the two paradigms. Our framework is developed from a different perspective, reformulates a different and more comprehensive set of algorithms, and leads to new insights in terms of exploration and learning efficiency of the various algorithms. Besides the algorithms discussed in the paper, there are other learning methods for sequence models. For example, the line of work by Hal Daumé et al. uses a learning-to-search paradigm for sequence generation or structured prediction. Scheduled Sampling and variants adapt MLE by randomly replacing ground-truth tokens with model predictions as the input for decoding the next-step token. Policy optimization for reinforcement learning is studied extensively in robotic and game environments. For example, a relative entropy regularization has been introduced to reduce information loss during learning, and trust-region approaches have been developed for monotonic improvement. Other work studies policy optimization algorithms from a probabilistic inference perspective, or combines imitation learning with RL; the latter is orthogonal to ours and can be plugged into our framework to incorporate the imitation reward. The entropy-regularized policy optimization formulation presented here can be seen as a generalization of many of the previous policy optimization methods. Besides, we formulate the framework primarily in the sequence generation context. We first present a generalized formalism of entropy regularized policy optimization. The formulation contains a reward function and two weight hyperparameters that define the learning procedure. Therefore, varying the values of the reward and weights results in a large space of algorithms. We show that several existing popular algorithms, which were originally proposed from distinct perspectives, can all be seen as members of this space. In particular, we reformulate the MLE algorithm in the same policy optimization form, which enables side-by-side comparison across the broad spectrum of algorithms.
The resulting unifying view provides new insights into exploration and computation efficiency, and leads to improved learning approaches for sequence prediction. For clarity, we present the framework in the sequence generation context. The formulations can straightforwardly be extended to other settings such as imitation learning in robotic and game environments, as discussed briefly at the end of this section and also shown in the experiments. We first establish the basic notations. Let y = (y_1, ..., y_T) be a sequence of T tokens. Let y^* be a training example drawn from the empirical data distribution. From the sequence examples, we aim to learn a sequence generation model p_θ(y) = ∏_t p_θ(y_t | y_{1:t−1}) with parameters θ. Note that generation of y can condition on other factors. For example, in machine translation, y is the sentence in the target language and depends on an input sentence in the source language. For simplicity of notation, we omit the conditioning factors. Policy optimization is a family of reinforcement learning (RL) algorithms. Assume a reward function R(y|y^*) ∈ ℝ that evaluates the quality of a generation y against the true y^*. For example, the BLEU score can be a reward in machine translation. The general goal of policy optimization is to learn the model p_θ(y) (a.k.a. policy) to maximize the expected reward. Previous work develops entropy regularized approaches, which augment the objective with information-theoretic regularizers for stabilized training. We present a generalized variational formulation of ERPO, which, as we show shortly, has the power of subsuming an array of other popular algorithms. Specifically, we introduce a non-parametric variational distribution q(y) w.r.t. the model p_θ(y). The objective to maximize is L(q, θ) = E_q[R(y|y^*)] − α KL(q(y) ‖ p_θ(y)) + β H(q), where KL(·‖·) is the Kullback-Leibler divergence forcing q to stay close to p_θ; H(·) is the Shannon entropy imposing a maximum entropy assumption on q; and α and β are balancing weights of the respective terms. Intuitively, the objective is to maximize the expected reward under the variational distribution q while minimizing the distance between q and the model p_θ, with maximum entropy regularization on q. The above formulation is relevant to, and can be seen as a variant of, previous policy optimization approaches in the RL literature, such as relative entropy policy search and maximum entropy policy gradient, as well as other work where the variational distribution q is formulated either as a non-parametric distribution, as ours, or as a parametric one. The objective can be maximized with a standard EM procedure that iterates two coordinate ascent steps optimizing q and θ, respectively, at iteration n. In the E-step, q has a closed-form solution, which is an energy-based distribution: q^{n+1}(y) ∝ exp{(α log p_θ^n(y) + R(y|y^*)) / (α + β)}. We can have an intuitive interpretation of its form. First, it is clear to see that if α → ∞, we have q^{n+1} = p_θ^n. This is also reflected in the objective, where a larger weight α encourages q to be close to p_θ. Second, the weight β serves as the temperature of the q softmax distribution. In particular, a large temperature β → ∞ makes q a uniform distribution, which is consistent with the outcome of an infinitely large maximum entropy regularization. In the M-step, θ^{n+1} = argmax_θ E_{q^{n+1}}[log p_θ(y)], and the update rule can be interpreted as maximizing the log-likelihood of samples from the distribution q.
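As an illustration only, a single EM iteration could be sketched as below; `model.sample`, `model.log_prob` and `model.train_step` are hypothetical interfaces, and the self-normalized importance weighting is one simple (and crude) way to approximate sampling from the energy-based q.

```python
import numpy as np

def erpo_em_step(model, reward_fn, y_star, alpha, beta, n_candidates=16):
    """One EM iteration of the generalized ERPO objective (sketch).

    E-step: q(y) is proportional to exp{(alpha * log p_theta(y) + R(y|y*)) / (alpha + beta)}.
    M-step: maximize the log-likelihood of samples weighted by q.
    """
    # E-step: score a pool of candidates under the (unnormalized) energy of q.
    candidates = [model.sample() for _ in range(n_candidates)]
    energy = np.array([
        (alpha * model.log_prob(y) + reward_fn(y, y_star)) / (alpha + beta)
        for y in candidates
    ])
    weights = np.exp(energy - energy.max())
    weights /= weights.sum()  # self-normalized approximation of q over the pool

    # M-step: weighted maximum-likelihood update on the candidate pool.
    for y, w in zip(candidates, weights):
        model.train_step(y, weight=w)
```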
In the context of sequence generation, it is sometimes more convenient to express the equations at the token level (instead of the sequence level), as shown when we devise a new algorithm in the next section. To this end, we decompose R(y|y^*) along the time steps as R(y|y^*) = Σ_t ΔR(y_t | y^*, y_{1:t−1}), where ΔR(y_t | y^*, y_{1:t−1}) measures the reward contributed by token y_t. The solution of q above can then be re-written factor-wise as q(y_t | y_{1:t−1}) ∝ exp{(α log p_θ(y_t | y_{1:t−1}) + ΔR(y_t | y^*, y_{1:t−1})) / (α + β)}. The Algorithm Space. The above ERPO formalism includes three key components, namely the reward R and the weight hyperparameters α and β > 0. Variation in these components can result in different procedures for updating the model. In other words, different algorithms in the ERPO family correspond to a point (or a region) in the space spanned by the three components. The following sections visit a set of existing approaches and connect them to the unifying picture by reformulating their seemingly distinct objectives. Figure 1 illustrates the particular algorithms in the space, clustered by their exploration behavior in learning, which we discuss further below. Softmax Policy Gradient (SPG). We first briefly discuss the previous RL algorithms for sequence prediction that fit in the ERPO formalism. SPG was originally developed from the perspective of combining the reward R and the policy p_θ to improve sampling quality. The algorithm is equivalent to setting β = 0 and treating α > 0 as the temperature of the energy-based distribution q(y). That is, q(y) in the E-step now takes the form q(y) ∝ p_θ(y) exp{R(y|y^*)/α}. The reward R is set to any normal task-specific reward. Note that sampling from q(y) (e.g., in the M-step) is typically difficult due to its energy-based form and the fact that the task reward R often does not have particular structures amenable to sampling. We will see in the next section that the MLE algorithm in contrast uses a special reward to avoid the computational difficulty in sampling, at the cost of restricted exploration during training. We also note the previous work of Sequence Tutor, which was motivated by the idea of using an MLE-trained policy as a prior to guide the learning of the target policy in an RL framework. The formalism closely resembles SPG, namely (α > 0, β = 0), with the exception that the variational distribution q(y) in Sequence Tutor is a parameterized model instead of a non-parametric one as in SPG and our more general ERPO formulation. In this section, we connect the maximum likelihood estimation (MLE) algorithm to the unifying ERPO formalism. Based on the connections, we are able to analyze the learning behavior of MLE from the reinforcement learning perspective in terms of exploration efficiency. We also discuss some well-known variants of the vanilla MLE algorithm, such as RAML and data augmentation. Due to its simplicity and efficiency, MLE is among the most widely-used approaches for learning sequence generation. It finds the optimal parameter value that maximizes the data log-likelihood, θ^* = argmax_θ log p_θ(y^*). We show that the MLE objective can be recovered from the ERPO objective with specialized reward and weight values. More concretely, consider a δ-function reward defined as R_δ(y|y^*) = 1 if y = y^*, and −∞ otherwise. That is, a sample y receives a valid unit reward only when it matches exactly with the true data, and receives a negative infinite reward in all other cases. We show that the MLE algorithm is a member of the ERPO family. In particular, the conventional MLE objective is equivalent to setting the ERPO components to (R = R_δ, α → 0, β = 1).
(For the token level, define R_δ(y_{1:t}|y^*) = t/T^* if y_{1:t} = y^*_{1:t}, and −∞ otherwise, where T^* is the length of y^*. Note that the R_δ value of y = y^* can also be set to any constant larger than −∞.) This can be straightforwardly seen by noting that, with this configuration, the q(y) in the E-step (Eq. 2) reduces to q(y) = 1 if y = y^* and 0 otherwise. The M-step is thus in effect maximizing the log-likelihood of the real data examples. (Note that the very small α is still > 0, making the M-step for maximizing the objective valid and necessary.) With the δ-reward R_δ, any sample y that fails to match the given data y^* exactly will get a negative infinite reward and thus never contribute to model learning. Exploration Efficiency. Reformulating MLE in the unifying ERPO form enables us to directly compare the approach with other RL algorithms. Specifically, the δ-reward permits only samples that match training examples, and makes invalid any exploration beyond the small set of training data (Figure 2(a)). The extremely restricted exploration at training time results in a brittle model that can easily encounter unseen states and make mistakes in prediction. On the other hand, however, a major advantage of the δ-reward is that it defines a distribution over the sequence space such that sampling from the distribution reduces to simply picking an instance from the training set. The resulting samples are ensured to have high quality. This makes the MLE implementation very simple and the computation efficient in practice. On the contrary, task-specific rewards (such as BLEU) used in standard policy optimization are more diffused than the δ-reward, and thus allow exploration in a broader space with valid reward signals. However, the diffused rewards often do not lead to a distribution that is amenable to sampling as above. The model distribution is thus instead used to propose samples, which in turn can yield low-quality (i.e., low-reward) samples, especially due to the huge sequence space. This makes the exploration inefficient or even impractical. Given the opposite behaviors of the algorithms in terms of exploration and computation efficiency, it is a natural idea to seek a middle ground between the two extremes in order to combine the advantages of both. Previous work has proposed variants of the vanilla MLE from different perspectives. We re-visit some of the popular approaches, and show that they can also be canonicalized in the ERPO framework and enrich our understanding of the learning behaviors. Data Noising. Adding noise to training data is a widely adopted model regularization technique. Previous work has proposed several data noising strategies in the sequence generation context, such as replacing subsets of tokens with other random words. The resulting noisy data is then used in MLE training. Though previous literature has commonly seen such techniques as a data pre-processing step, we show that the approach can be expressed in the generalized ERPO formulation. Specifically, data noising can be seen as using a locally relaxed variant of the δ-reward, one that assigns the unit reward to any noised version of the data, i.e., R_δ'(y|y^*) = 1 if y = g(y^*) and −∞ otherwise, where g denotes any transformation operation that returns a new sample as a noisy version of the input raw data y^*. With the relaxed reward, data noising locally expands the exploration surrounding the observed training examples (Figure 2(b)). The added exploration at training time can yield a model that is more robust to errors at test time.
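As one simple instance of the transformation g mentioned above (random token replacement), consider the sketch below; the function is our illustration, and any sequence it produces receives the unit reward under the relaxed δ-reward.

```python
import random

def noise_transform(y_star, vocab, replace_prob=0.1):
    """A token-replacement instance of g: each token is independently
    swapped for a random vocabulary word with probability replace_prob."""
    return [random.choice(vocab) if random.random() < replace_prob else tok
            for tok in y_star]

# Noised MLE then maximizes the likelihood of g(y*) instead of y* itself:
#   for y_star in data:
#       model.train_step(noise_transform(y_star, vocab))
```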
Reward-Augmented Maximum Likelihood (RAML). RAML was originally proposed to incorporate task-specific metrics into MLE training. Formally, it introduces an exponentiated reward distribution e(y|y^*) ∝ exp{R(y|y^*)/τ}, where R is a task reward and τ > 0 is the temperature. The conventional RAML objective is written as L_RAML(θ) = E_{y ∼ e(y|y^*)}[log p_θ(y)]. That is, unlike MLE, which directly maximizes the data log-likelihood, RAML first perturbs the data proportionally to the reward distribution, and maximizes the log-likelihood of the resulting samples. Similar to how we map MLE to the ERPO formalism, we can align RAML with the unifying form by setting α → 0, β to the temperature τ, and R to the task reward. Compared to the vanilla MLE, the key feature of RAML is the use of the task reward instead of the δ-reward, which permits a larger exploration space surrounding the training examples. On the other hand, same as in SPG (section 3.1), sampling from the energy-based distribution with a diffused reward tends to be difficult, and often requires specialized approximations for computational efficiency. Other Algorithms & Discussions. The classic policy gradient algorithm has also been used for sequence prediction. We show in the appendix that the approach can also be connected to the unifying ERPO with moderate approximations. The MIXER approach proposed a mixed training strategy that anneals from MLE training to policy optimization. We show in the next section that the particular annealing scheme is a special case of the new, more general interpolation algorithm below. We have presented the framework in the context of sequence generation. The formulation can also be extended to other settings. For example, in game environments, y is a sequence of actions and states. The popular imitation learning method GAIL uses an adversarially induced reward R from data, and applies standard RL updates to train the policy. The policy update part can be formulated with our framework as standard policy optimization (with α > 0, β = 0). The new interpolation algorithm described in the next section can also be applied to improve the vanilla GAIL, as shown in the experiments. Previous work has also studied connections among relevant algorithms. For example, MLE and policy gradient have been formulated as minimizing opposite KL divergences between the model and the data/reward distributions, and an update equation generalizing maximum marginal likelihood and policy gradient has been studied. Our framework differs in that we reformulate a different and more comprehensive set of algorithms for sequence prediction, and provide new insights in terms of exploration and its efficiency, which could not be derived from the previous work. Section 2 discusses more related work on sequence prediction learning. The unifying perspective also leads to new algorithms for improved learning. Here, we present an example algorithm that is naturally inspired by the framework. As in Figure 1, each of the learning algorithms can be seen as a point in the (R, α, β) space. Generally, from left to right, the reward gets more diffused and α gets larger, which results in a larger sequence space being exposed to model training (Figure 2). More exploration in turn also makes the training less efficient due to lower sample quality. We propose an interpolation algorithm with the natural idea of starting learning from the most restricted yet efficient algorithm configuration, and gradually expanding the exploration to decrease the training/test discrepancy.
The easy-to-hard learning paradigm resembles curriculum learning. As we have mapped the algorithms to points in the hyperparameter space, the interpolation becomes straightforward and reduces to simple annealing of the hyperparameter values. Specifically, during training, we would like to anneal from using the restricted δ-reward R_δ to using the task reward, and anneal from sampling (exploring) by only the reward R to sampling by both R and p_θ. Since R_δ is a δ-function, which would make a direct linear combination of the functions problematic, we implement the interpolation strategy in the update rule (Eq. 2) and use log-sum-exp for mixing. Formally, let R_task denote a task reward. The negative energy of q(y) (i.e., the exponent inside exp{·}) is now replaced with the interpolated term log(λ_1 p_θ + λ_2 exp{R_task} + λ_3 exp{R_δ}). Note that we have re-organized the weight hyperparameters and used the distribution (λ_1, λ_2, λ_3) to carry out the calibration role of (α, β). In particular, as training proceeds, we gradually increase λ_1 and λ_2 and decrease λ_3. The formulation of the interpolation in effect converts the energy-based model q(y) to a mixture of experts, which makes sampling from q(y) easier and resembles the bang-bang rewarded SPG method described in prior work. Besides, we adopt the token-level formulation (Eq. 4), so that tokens in a sequence can be sampled from different components (i.e., p_θ, R_task, and R_δ) in a mixed way. We provide the pseudo-code of the interpolation algorithm in the appendix. As discussed above, we can also apply the interpolation algorithm in game imitation learning, by plugging it into the GAIL framework to replace the standard RL routine for the policy update. The annealing schedule in this setting is constrained due to the agent's interaction with the environment. Specifically, to generate a trajectory (a sequence of actions and states), we sample the beginning part from data (demonstrations), followed by sampling from either the model or the reward. Note that data sampling can happen only before model/reward sampling, because the latter will interact with the environment and result in states that do not necessarily match the data. Similar to sequence generation, we gradually anneal from data sampling to model/reward sampling, and hence increase the exploration until converging to standard RL. Our experiments validate that the easy-to-hard training is superior to the vanilla GAIL, which directly applies the hard RL update from the beginning. It is notable that MIXER also developed an annealing strategy that mixes MLE and policy gradient training. The strategy is essentially the same as the one we apply in the GAIL learning setting. That is, the MIXER annealing approach is a specialized case of the above more general annealing, using restricted values of (λ_1, λ_2, λ_3) and discrete changes. We provide more discussion in the appendix. The experiments in section 5 show that our generalized annealing performs better than the restricted approach. We evaluate the interpolation algorithm in the context of both text generation and game imitation learning. Experiments are run with 4 GTX 2080Ti GPUs and 32GB RAM. The link to the code is provided in the submission. We will release the code upon acceptance. We use the state-of-the-art neural architecture Transformer as the base model. The model has 6 blocks, trained with an Adam optimizer with an initial learning rate of 0.001 and the same schedule as in the original Transformer work.
Batch size is 1,792 tokens. At test time, we use beam search decoding with a beam width of 5 and length penalty 0.6. We use the popular IWSLT2014 German-English dataset. After proper pre-processing, as described in the appendix, we obtain the final dataset with train/dev/test sizes of around 146K/7K/7K, respectively. The shared de-en vocabulary is of size 73,197 without BPE encoding. Table 1 shows the test-set BLEU scores of various methods. Besides MLE, RAML, and MIXER as discussed above, we also compare with other existing approaches such as Scheduled Sampling (SS) and Self-critic. (We did not compare with SPG as no public code is available.) From the table, we can see the various approaches provide improved performance over the vanilla MLE, as more sufficient exploration is made at training time. Our interpolation algorithm performs best, with a significant improvement over MLE training by 1.36 BLEU points. The results validate that our approach, which interpolates among the existing algorithms, offers beneficial scheduled training. To further study the effect of our generalized annealing versus the MIXER strategy, we compare with "MIXER-alike Anneal", which uses the same configuration as our interpolation algorithm, except that the annealing is restricted like MIXER. That is, the first portion of tokens in a sequence are all sampled from the data, while the subsequent tokens are sampled from only the model or the task reward. We see that the proposed more generalized annealing is superior to the restricted version. We note that there is other work exploring various network architectures for machine translation, which is orthogonal and complementary to the learning algorithms. It would be interesting to explore the effect of combining the approaches. We use an attentional sequence-to-sequence model where both the encoder and decoder are single-layer LSTM RNNs. The dimensions of the word embedding, RNN hidden state, and attention are all set to 256. We use Adam optimization with an initial learning rate of 0.001 and a batch size of 64. Test time uses beam search decoding with a beam width of 5. Please see the appendix for more configuration details. We use the popular English Gigaword corpus for text summarization, and pre-processed the data following previous work. The resulting dataset consists of 200K/8K/2K source-target pairs in the train/dev/test sets, respectively. Following previous work, we use the summation of the three ROUGE(-1, -2, -L) metrics as the reward in learning. Table 2 shows the results (5-run average ± std dev) on the test set. The proposed interpolation algorithm achieves the best performance on all three metrics. The RAML algorithm, which performed well in machine translation, falls behind other algorithms in text summarization. In contrast, our method consistently provides the best results. We apply the interpolation algorithm in GAIL as described in section 4. Following the GAIL setup, we simulate three environments with MuJoCo. Expert demonstrations are generated by running PPO under the given true reward functions. We then run different imitation learning algorithms with varying numbers of demonstrations. Both the policy and the discriminator are two-layer networks with 128 units each and tanh activations in between. Figure 3 shows the average returns obtained by the agents. We can see that agents trained with the interpolation algorithm generally improve over the vanilla GAIL, especially in the presence of a small number (e.g., 1 or 4) of demonstrations.
This shows that our approach, which anneals from the MLE mode to the RL mode, can make better use of data examples and steadily achieve better performance in the end. We present the learning curves of the algorithms in the appendix. We have presented a unifying perspective of a variety of learning algorithms for sequence prediction problems. The framework is based on a generalized entropy regularized policy optimization formulation, and we show the distinct algorithms are equivalent to specifying the reward and weight hyperparameters. The new consistent treatment provides systematic understanding and comparison across the algorithms, and inspires further improved learning. The proposed interpolation algorithm shows consistent improvement in machine translation, text summarization, and game imitation learning. The MIXER work made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm. Policy gradient aims to maximize the expected reward L_PG(θ) = E_{p_θ}[R_PG(y|y^*)], where R_PG is usually a common reward function (e.g., BLEU). Taking the gradient w.r.t. θ gives ∇_θ L_PG(θ) = E_{p_θ}[R_PG(y|y^*) ∇_θ log p_θ(y)]. We now reveal the relation between the ERPO framework we present and the policy gradient algorithm. Starting from the M-step and setting (α = 1, β = 0) as in SPG (section 3.1), we use p_θ^n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notational simplicity): E_q[∇_θ log p_θ(y)] = E_{p_θ}[(q(y)/p_θ(y)) ∇_θ log p_θ(y)] = (1/Z_θ) · E_{p_θ}[exp{R(y|y^*)} · ∇_θ log p_θ(y)], where Z_θ = Σ_y exp{log p_θ(y) + R(y|y^*)} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent. We can see that this recovers the policy gradient update if we further set R = log R_PG and omit the scaling factor Z_θ. In other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log R_PG, α = 1, β = 0) and with Z_θ omitted. The MIXER algorithm incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically, given a ground-truth example y^*, the first m tokens y^*_{1:m} are used for evaluating the MLE loss, and starting from step m + 1, the policy gradient objective is used. The m value decreases as training proceeds. With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for the token-level hyperparameters (λ_1, λ_2, λ_3). That is, for t < m in Eq. 4 (i.e., the first m steps), (λ_1, λ_2, λ_3) is set to (0, 0, 1) and c = 1, namely MLE training; while for t > m, (λ_1, λ_2, λ_3) is set to (0.5, 0.5, 0) and c = 2. Algorithm 1 summarizes the interpolation algorithm described in section 4. A.3.1 DATA PRE-PROCESSING. For the machine translation dataset, we follow prior work for data pre-processing. In text summarization, we sampled 200K out of the 3.8M pre-processed training examples provided in prior work, for the sake of training efficiency, and used the refined validation and test sets provided there. In the game imitation learning task, we randomly sample 50 state-action pairs in each trajectory as demonstrations. In every training iteration, we collect at least 2,048 state-action pairs, and we train 1,000 iterations for every model in every environment.
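To make the token-level interpolation of Algorithm 1 concrete, here is a minimal sketch of the sampling step with a simple linear annealing schedule; the helper names are hypothetical, and the schedule is our illustration, not the exact one used in the experiments.

```python
import numpy as np

def sample_token(model, y_star, t, lambdas):
    """Draw the next token from the mixture-of-experts view of q:
    lambdas = (l1, l2, l3) are the probabilities of sampling from the
    model p_theta, the exponentiated task reward, or the data (delta reward)."""
    component = np.random.choice(3, p=lambdas)
    if component == 0:
        return model.sample_next_token(t)          # explore with p_theta
    if component == 1:
        return sample_from_task_reward(y_star, t)  # reward-weighted draw
    return y_star[t]                               # delta reward: copy the data

def anneal(step, total_steps):
    """Move mass from pure MLE (0, 0, 1) toward model/reward sampling."""
    l3 = max(0.0, 1.0 - step / total_steps)
    return ((1.0 - l3) / 2, (1.0 - l3) / 2, l3)
```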
An entropy regularized policy optimization formalism subsumes a set of sequence prediction learning algorithms. A new interpolation algorithm with improved results on text generation and game imitation learning.
1,486
scitldr
Origin-Destination (OD) flow data is an important instrument in transportation studies. Precise prediction of customer demands from each origin location to a destination, given a series of previous snapshots, helps ride-sharing platforms to better understand their market mechanism. However, most existing prediction methods ignore the network structure of OD flow data and fail to utilize the topological dependencies among related OD pairs. In this paper, we propose a latent spatial-temporal origin-destination (LSTOD) model, with a novel convolutional neural network (CNN) filter to learn the spatial features of OD pairs from a graph perspective and an attention structure to capture their long-term periodicity. Experiments on a real customer request dataset with available OD information from a ride-sharing platform demonstrate the advantage of LSTOD in achieving at least 6.5% improvement in prediction accuracy over the second best model. Spatial-temporal prediction of large-scale network-based OD flow data plays an important role in traffic flow control, urban route planning, infrastructure construction, and the policy design of ride-sharing platforms, among others. On ride-sharing platforms, customers keep sending requests with origins and destinations at each moment. Knowing the exact origin location and destination of each future trip allows platforms to prepare sufficient supply in advance to optimize resource utilization and improve users' experience. Given the destinations of prospective demands, platforms can predict the number of drivers transferring from busy to idle status. Prediction of dynamic demand flow data helps ride-sharing platforms to design better order dispatch and fleet management policies for achieving the demand-supply equilibrium, as well as decreased passenger waiting times and increased driver serving rates. Many efforts have been devoted to developing traffic flow prediction models in the past few decades. Before the rise of deep learning, traditional statistical and machine learning approaches dominated this field. Most of these models are linear and thus ignore some important non-linear correlations among the OD flows. Some other methods further use additional manually extracted external features, but they fail to automatically extract the spatial representation of OD data. Moreover, they roughly combine the spatial and temporal features when fitting the prediction model instead of dynamically modelling their interactions. The development of deep learning technologies brought a significant improvement in OD flow prediction by extracting non-linear latent structures that cannot be easily covered by feature engineering. Zhang et al. (2016) modeled the whole city area as an entire image and employed residual neural networks to capture temporal closeness. Other work also learned traffic as images but used LSTMs instead to obtain the temporal dependency. Yao et al. (2018b) proposed a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relationships. However, using standard convolution filters suffers from the problem that some OD flows covered by a receptive field of regular CNNs are not spatially important. Graph-based neural networks (GNNs) (Veličković et al., 2017) have proved to be powerful tools in modelling spatial-temporal network structures.
However, none of these frameworks is directly applicable here, since both their historical observations and the responses to predict are vertex-level variables. On the contrary, the OD flows we discuss in this paper are, by definition, generated in the edge space. The aim of this paper is to introduce a hierarchical Latent Spatial-Temporal Origin-Destination (LSTOD) prediction model to jointly extract the complex spatial-temporal features of OD data by using well-designed CNN-based architectures. Instead of modelling the dynamic OD networks as a sequence of images and applying standard convolution filters to capture their spatial information, we introduce a novel Vertex Adjacent Convolution Network (VACN) that uses an irregular convolution filter to cover the most related OD flows that share common vertices with the target one. The OD flows connected by common starting and/or ending vertices, which may fall into different regions of the flow map, can be spatially correlated and topologically connected. Moreover, for most ride-sharing platforms, a passenger is more likely to send a new request from the location where his/her last trip ended. To learn such sequential dependency, we introduce a temporal gated CNN (TGCNN) and integrate it with VACN by using the sandwich-structured ST-Conv block in order to collectively capture the evolutionary mechanism of dynamic OD flow systems. A periodically shifted attention mechanism is also used to capture the shift in the long-term periodicity. Finally, the combined short-term and long-term representations are fed into the final prediction layer to complete the architecture. Our contributions are summarized as follows:

• To the best of our knowledge, it is the first time that purely convolutional structures are proposed to learn both short-term and long-term spatio-temporal features simultaneously from dynamic origin-destination flow data.

• We propose a novel VACN architecture to capture the graph-based semantic connections and functional similarities among correlated OD flows by modeling each OD flow map as an adjacency matrix.

• We design a periodically shifted attention mechanism to model the long-term periodicity when using the convolutional architecture TGCNN to learn temporal features.

• Experimental results on two real customer demand data sets obtained from a ride-sharing platform demonstrate that LSTOD outperforms many state-of-the-art methods in OD flow prediction, with 7.94% to 15.14% improvement in testing RMSE.

For a given urban area, we observe a sequence of adjacency matrices representing the OD flow maps defined on a fixed vertex set V, which indexes the N selected sub-regions of this area. We let V = {v_1, v_2, ..., v_N} denote the vertex set, with v_i being the i-th sub-region. The shape of each grid cell v_i could be a rectangle, a hexagon or an irregular sub-region. We define the dynamic OD flow maps as a sequence of matrices O_{d,t} ∈ R^{N×N}, indexed by day d and time window t, whose (i, j)-th entry is the flow from v_i to v_j. The goal of our prediction problem is to predict the snapshot O_{d,t+j} ∈ R^{N×N} in the future time window (t + j) of day d given previously observed data, including both short-term and long-term historical information. The short-term input data consists of the last p_1 timestamps from t + 1 − p_1 to t, denoted by X_1 = {O_{d,t+1−p_1}, ..., O_{d,t}}. The long-term input data is made up of q time series {O_{d−ϕ,t+j−(p_2−1)/2}, ..., O_{d−ϕ,t+j+(p_2−1)/2}} of length p_2, one for each previous day (d − ϕ), with the predicted time index (t + j) in the middle, for ϕ = 1, ..., q. We let X_2 = {O_{d−ϕ,t+j−(p_2−1)/2}, ..., O_{d−ϕ,t+j+(p_2−1)/2} : ϕ = 1, ..., q} denote the entire long-term data.
Increasing p_1 and p_2 leads to higher prediction accuracy, but more training time. We reformulate the set of short-term OD networks X_1 into a 4D tensor X_1 ∈ R^{N×N×p_1×1}, and concatenate the long-term snapshots X_2 into a 5D tensor by stacking the q day-level 4D tensors, where each X_{2,d−ϕ} ∈ R^{N×N×p_2×1} for day d − ϕ is a 4D tensor, ϕ = 1, ..., q. Therefore, we can finally define our latent prediction problem as Ô_{d,t+j} = F(X_1, X_2), where F(·, ·) represents the LSTOD model, which captures the network structure of OD flow data as well as the temporal dependencies at multiple scales. A notation table is attached in the appendix. In this section, we describe the details of our proposed LSTOD model; see Figure 1 for its architecture. The four major novelties and functionalities of the LSTOD model are:

• an end-to-end framework, LSTOD, constructed entirely from CNN modules to process dynamic OD flow maps and build spatio-temporal prediction models;

• a novel multi-layer architecture, VACN, that extracts the network patterns of OD flow maps by propagating through edge connections, which cannot be covered by traditional CNNs;

• a special module, the ST-Conv block, used to combine VACN and gated temporal convolutions to coherently learn the essential spatio-temporal representations;

• a periodically shifted attention mechanism, designed for the purely convolutional ST-Conv blocks, to efficiently utilize the long-term information by measuring its similarities with the short-term data.

Before introducing the detailed architecture of VACN, we first discuss why directly applying standard CNNs to the OD flow map O_{d,t} may disregard the connections between neighboring OD flows in the graph space. Figure 2 demonstrates, using the real-world example of ride demands, that it fails to capture enough semantic information. For the OD flow starting from v_1 to v_2, as illustrated in the upper sub-figure, the most related OD flows should be those with either origin or destination being v_1 or v_2 in the past few timestamps. A certain part of the travel requests from v_1 to v_2 can be matched with historical finished trips from a third-party location to v_1 by the same group of people, for example a trip from v_3 to v_1. However, as the lower-left sub-figure illustrates, some of the OD flows covered by a single CNN filter (the green square), such as the four corners of the kernel window, may be topologically far away from the target one in the graph. More generally, for a target OD flow o_{i,j} from v_i to v_j, the most relevant flows are those sharing the vertex v_i or v_j, which need not fall inside a square receptive field around the entry (i, j). To better understand the differences between our proposed VACN over graph edges and vertex-based convolutions such as GCN or GAT (Veličković et al., 2017), we introduce the concept of line graphs L(G). Each node in L(G) corresponds to an edge in G, while each individual edge in L(G) can be mapped to a pair of edges in G that connect to a joint vertex. L(G) thus contains one node for each OD pair; we let y_{i,j} denote the feature vector at the edge from v_i to v_j. The learned representation of each target edge is defined as the weighted sum of those from the same row or column in the adjacency matrix, and those from the corresponding row or column in the transposed adjacency matrix. The output of the layer-wise VACN propagation for the OD flow from v_i to v_j can thus be written in the form y'_{i,j} = F( Σ_k W_1 y_{i,k} + Σ_k W_2 y_{k,j} + Σ_k W_3 y_{j,k} + Σ_k W_4 y_{k,i} + b ), where W_1, ..., W_4 and b are the weights to learn, and F(·) represents an elementwise activation function, such as ReLU(x) = max(0, x).
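The following is one plausible reading of the propagation rule above, written in NumPy; the aggregation by sums over rows, columns and their transposed counterparts matches the prose, but the exact weighting scheme in the paper may differ, so treat it as a sketch.

```python
import numpy as np

def vacn_layer(Y, W1, W2, W3, W4, b):
    """One VACN layer over an OD feature tensor Y of shape (N, N, m_in).

    For a target flow (i, j), aggregate flows leaving i (row i), entering j
    (column j), leaving j and entering i (the transposed connections), each
    with its own (m_in, m_out) projection; b has shape (m_out,)."""
    row = Y.sum(axis=1, keepdims=True)     # (N, 1, m_in): flows leaving i
    col = Y.sum(axis=0, keepdims=True)     # (1, N, m_in): flows entering j
    t_row = np.swapaxes(row, 0, 1)         # (1, N, m_in): flows leaving j
    t_col = np.swapaxes(col, 0, 1)         # (N, 1, m_in): flows entering i
    out = row @ W1 + col @ W2 + t_row @ W3 + t_col @ W4 + b
    return np.maximum(out, 0.0)            # ReLU, broadcast to (N, N, m_out)

N, m_in, m_out = 50, 1, 64
Y = np.random.rand(N, N, m_in)
Ws = [np.random.randn(m_in, m_out) * 0.01 for _ in range(4)]
H = vacn_layer(Y, *Ws, b=np.zeros(m_out))  # H.shape == (50, 50, 64)
```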
We use a temporal gated CNN (TGCNN) instead of RNN-based architectures such as LSTMs to capture the temporal representation, which makes our LSTOD a purely convolutional architecture. RNNs suffer from lower training efficiency, gradient instability, and time-consuming convergence. Moreover, the high dimension of the spatial representations captured by VACN and a potentially long temporal sequence would make RNNs notoriously difficult to train. CNNs are more flexible in handling various data structures and allow parallel computation to increase training speed. TGCNN consists of two parts: a 3D convolution kernel applied to the spatial representations of all the N² OD flows along the time axis, and a gated linear unit (GLU) as the gating mechanism. By employing VACN at each of r successive timeslots, we can build a 4D tensor Y = (Y_{d,t}) ∈ R^{N×N×r×m_s}, which is then fed into the TGCNN operator G *_γ Y = Y_1 ⊙ σ(Y_2), where G *_γ represents the TGCNN kernel and γ includes all the related parameters to learn. 2m_t 3D convolutional kernels of size (1 × 1 × K) with zero padding map the input Y into a single output, which is split in half to obtain Y_1 and Y_2 with the same number of feature channels; ⊙ here denotes the element-wise Hadamard product. We use the spatial-temporal convolutional block (ST-Conv block) to jointly capture the spatial-temporal features of OD flow data; it has a 'sandwich'-structured architecture with a multi-layer VACN operator in the middle connecting two TGCNNs. The use of ST-Conv blocks has two major advantages. First, the block can be stacked or extended based on the dimension and characteristics of the spatio-temporal input data. Second, a temporal operation is applied before extracting the spatial information, which greatly reduces the computation complexity and memory consumption. Both the input and output of each individual ST-Conv block are 4D tensors. For the input of the l-th block, Z_{l−1} ∈ R^{N×N×r_{l−1}×c_{l−1}} (Z_0 is the original OD flow data with c_0 = 1), the output is computed as Z_l = G_1 *_{γ_1^l} ( S *_{θ^l} ( G_0 *_{γ_0^l} Z_{l−1} ) ), where G_1 *_{γ_1^l} and G_0 *_{γ_0^l} are two TGCNN kernels and S *_{θ^l} is a multi-layer VACN operator applied at each timestamp. The two TGCNN operators in all the stacked ST-Conv blocks employ the same kernel sizes, (1 × 1 × K_1) and (1 × 1 × K_2), respectively. Thus, we have r_l = r_{l−1} − (K_1 + K_2 − 2). After applying (r_0 − 1)/(K_1 + K_2 − 2) ST-Conv blocks to the input Z_0, the temporal length is reduced from r_0 to 1. When the input is the short-term OD flow data X_1, the final 4D output is squeezed into a 3D tensor by dropping the temporal axis. In addition to capturing the spatial-temporal features from the short-term OD flow data X_1, we also take into account the long-term temporal periodicity, due to the potential day-wise cyclic patterns inside the OD flow data, determined by customers' travelling schedules and the city's traffic conditions. Directly applying ST-Conv blocks to an extremely long OD sequence that covers the previous few days or weeks is computationally expensive. Only a small set of timestamps from each previous day is necessary to capture the long-term periodicity. As mentioned, we pick p_2 time intervals for each day d − ϕ when predicting the time window (d, t + j), considering the non-strict long-term periodicity. This slight time shifting may be caused by unstable traffic peaks, holidays and extreme weather conditions across different days.
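Returning to the TGCNN operator defined above, a minimal TensorFlow sketch could look as follows; we use 'valid' padding so each TGCNN shortens the temporal axis by K − 1, consistent with r_l = r_{l−1} − (K_1 + K_2 − 2), and the names are our own illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tgcnn(inputs, m_t, K):
    """Temporal gated convolution (sketch): 2*m_t kernels of size (1, 1, K)
    slide along the time axis of a (batch, N, N, r, c) tensor; the output
    is split in half and gated with a GLU: Y1 * sigmoid(Y2)."""
    conv = layers.Conv3D(filters=2 * m_t, kernel_size=(1, 1, K),
                         padding='valid')(inputs)
    y1, y2 = tf.split(conv, num_or_size_splits=2, axis=-1)
    return y1 * tf.sigmoid(y2)  # element-wise Hadamard product

# Example: 9 short-term snapshots of a 50 x 50 OD map, 64 gated channels.
x = tf.keras.Input(shape=(50, 50, 9, 1))
h = tgcnn(x, m_t=64, K=3)  # shape (None, 50, 50, 7, 64)
```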
Inspired by the attention mechanisms recently widely used in spatial-temporal prediction problems, we propose a modified periodically shifted attention designed to work with the CNN-based ST-Conv blocks. Different from Yao et al. (2018a), which measures the similarity between hidden units of LSTMs, the attention here is built on the intermediate outputs of TGCNNs, whose concatenations are then fed into a new set of ST-Conv blocks. For each day (d − ϕ), we apply a series of L_1 ST-Conv blocks to the day-level p_2-length sequential OD flow data X_{2,d−ϕ} and reduce the temporal length from p_2 to n_LT^0. Each block contains two TGCNN layers with the same kernel size, and the propagation rule of the l-th ST-Conv block is as before. The attention score between each long-term intermediate representation z and the corresponding short-term representation z' is computed in an additive form, score(z, z') = v_φ^T tanh(W_1 z + W_2 z' + b_s), where W_1, W_2 and v_φ are learned projection matrices and b_s is the added bias term. The attention-weighted day-level outputs are combined to build a new 4D day-wise time series Z_LT^0, and we finally apply another set of ST-Conv blocks to it to obtain the long-term spatial-temporal representation, denoted by Z_LT ∈ R^{N×N×c_LT}, where c_LT is the number of feature channels. We concatenate the short-term and long-term spatial-temporal representations Z_ST and Z_LT along the feature axis as Z = Z_ST ⊕ Z_LT ∈ R^{N×N×C}, where C = c_ST + c_LT. Then Z is reshaped into a 2D tensor Z ∈ R^{N²×C} by flattening the first two dimensions while keeping the third one. We apply a fully connected layer to the C feature channels, together with an element-wise non-linear sigmoid function, to get the final predictions for all the N² OD flows. We normalize the original OD flow data in the training set to [0, 1] by max-min normalization and use the sigmoid activation for the final prediction layer to ensure that all predictions fall into [0, 1]. The upper and lower bounds are saved and used to denormalize the predictions on testing data to get the actual flow volumes. We use the L_2 loss as the training objective. The model is optimized via Backpropagation Through Time (BPTT) and Adam. The whole architecture of our model is implemented using TensorFlow and Keras. All experiments were run on a cluster with one NVIDIA 12G-memory Titan GPU. In this section, we compare the proposed LSTOD model with some state-of-the-art approaches for latent traffic flow prediction. All compared methods are classified into traditional statistical methods and deep-learning based approaches. We use the demand flow data collected by a ride-sharing platform to examine the finite sample performance of OD flow predictions for each method. We employ a large-scale demand dataset obtained from a ride-sharing platform to conduct all the experiments. The dataset contains all customer requests received by the platform from 04/01/2018 to 06/30/2018 in two big cities, A and B. Within each urban area, N = 50 hexagonal regions with the largest customer demands are selected to build, in total, N² = 2500 OD flows. Since a one-layer VACN has a computation complexity of O(N) at each of the N² entries (globally O(N³)), the memory consumption rises quickly as N gets bigger. Considering computation efficiency and storage limitations, we choose N = 50 here, which covers more than 80% of total demands and thus satisfies the operational requirement of the ride-sharing platform. We split the whole dataset into two parts.
The data from 04/01/2018 to 06/16/2018 is used for model training, while the remaining part from 06/17/2018 to 06/30/2018 (14 days) serves as the testing set. The first two and a half months of OD flow data are further divided into training and validation sets with a size ratio of around 4:1. We let 30 minutes be the length of each timestamp, and the value of the OD flow from v_i to v_j is the cumulative number of customer requests. We make predictions for all 50² OD flows in the incoming 1st, 2nd, and 3rd 30 minutes (i.e. t + 1, t + 2, t + 3) with each compared method, given the historical data with varied (p_1, p_2) combinations. For the model settings incorporating long-term information, we trace back q = 3 days to capture the time periodicity. We use the Root Mean Square Error, RMSE = sqrt((1/n) Σ_i (ŷ_i − y_i)²), to evaluate the performance of each method. All state-of-the-art methods to be compared are listed as follows; some are modified to work with the OD flow data: (i) Historical average (HA): HA predicts the demand at each OD flow by the average value of the same day in the previous 4 weeks. (ii) Autoregressive integrated moving average (ARIMA), (iii) Support Vector Machine Regression (SVMR), (iv) Latent Space Model for Road Networks (LSM-RN), (v) Dense + BiLSTM (DLSTM) (Altché & de La Fortelle), and (vi) Spatiotemporal Recurrent Convolutional Networks (SRCN). We only consider latent models in this paper, that is, no external covariates are allowed; only the historical OD flow data is used to extract the hidden spatial-temporal features. We tune the hyperparameters of each compared model to obtain its optimal prediction performance. Specifically, we tune (p*, d*, q*) for ARIMA and obtain k* = 15, γ* = 2^{−5}, λ* = 10 for LSM-RN. The optimal kernel size of the spatial CNN kernel in the SRCN model is 11 × 11. For fair comparison, we set the length of the short-term OD flow sequence to p_1 = 9 (i.e., the previous 4.5 hours), q = 3 for the long-term data covering the three most recent days, and the length of each day-level time series to p_2 = 5 to capture the periodicity shifting (one hour before and after the predicted time index). How different (p_1, p_2) combinations affect the prediction performance of LSTOD is studied later. A two-layer architecture is used by all deep-learning-based methods to extract the spatial patterns inside the OD flow data (L = 2 for both short-term and long-term VACN). We set the filter size of all deep learning layers in both spatial and temporal space to 64, including the VACNs and TGCNNs in our LSTOD model with c_{ST} = c_{LT} = 64. Comparison with state-of-the-art methods. Table 1 summarizes the finite-sample performance of all competing methods and our LSTOD model in terms of prediction RMSE on the testing data of city A. We compute the mean, variance, 25% quantile and 75% quantile of the 14 day-wise RMSEs on the testing set. LSTOD outperforms all other methods on the testing data with the lowest average day-wise RMSE (2.41/2.55/2.67), achieving (8.02%/7.94%/8.24%) improvement over the second-best method, SRCN. In general, deep-learning-based models perform more stably than traditional methods, with smaller variance and narrower confidence intervals. Both ARIMA and LSM-RN perform poorly, even much worse than HA, indicating that they cannot capture enough short-term spatial-temporal features to track the evolution trend of the OD flow data.
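The normalization/denormalization and the day-wise RMSE evaluation described above are simple enough to sketch directly; variable names are illustrative.

```python
import numpy as np

def minmax_normalize(x, lo, hi):
    """Max-Min normalization of raw OD flows onto [0, 1]."""
    return (x - lo) / (hi - lo)

def minmax_denormalize(x, lo, hi):
    """Recover actual flow volumes using the saved training-set bounds."""
    return x * (hi - lo) + lo

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Day-wise evaluation over the 14 testing days, as reported in Table 1:
# day_rmse = [rmse(y_true[d], minmax_denormalize(y_hat[d], lo, hi))
#             for d in range(14)]
```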
Among the deep learning models, LSTOD controls the estimation variance more effectively than all the others. This demonstrates the advantage of using our spatial-temporal architecture and long-term periodicity mechanism in modelling the dynamic evolution of OD flow networks. The improvement becomes more significant as the time scale increases, since the contribution of the long-term periodicity is emphasized as the short-term signals get weaker. LSTOD performs even better on city B relative to the baseline methods, possibly because the long-term periodic pattern in city B is more pronounced than in city A. Detailed results for city B are summarized in Table 3 of the appendix. Comparison with variants of LSTOD. Table 2 shows the finite-sample performance of our proposed LSTOD model and its variants on the demand data from city A. The complete LSTOD model outperforms the short-term-only model and the one without attention mechanisms, with smaller means and variances and narrower confidence intervals. This indicates that our attention design can capture the shift of the day-wise periodicity and extract more seasonal patterns to improve prediction accuracy. The left sub-plot of Figure 3 compares the predictions of each model against the true values at two selected OD flows over the last 3 testing days at the 60-minute scale. Two abnormal change points are marked by black circles. The short-term model fails in these cases because it ignores the long-term information. The complete LSTOD model outperforms the one without attention mechanisms since it better catches the shift of the periodicity. The right sub-plot visualizes the distribution curves of the day-wise RMSEs over the 14 testing days for each of the three compared models. The lighter tail of the red curve demonstrates that the complete LSTOD is more predictive and stable, especially in unusual cases. We conduct further experiments on how different hyperparameter configurations influence model performance; for more details, please refer to Section E of the appendix. VACN vs. standard local CNN. In this experiment, we show that our proposed VACN outperforms standard CNNs in capturing the hidden network structure of the OD flow data. Given the model setting in which N = 50 sub-regions of city A are used to build the dynamic OD flow matrices, the number of pixels covered by VACN at each single snapshot is 50 × 4 = 200. For fair comparison, the largest receptive field of the standard CNN should be no bigger than a 15 × 15 window, which includes 225 elements. We consider five kernel sizes: 5 × 5, 8 × 8, 11 × 11, 14 × 14, and 15 × 15. We replace VACN in our model by a standard CNN in order to fairly compare their performance, keeping all hyper-parameters fixed and changing only the CNN kernel size. Moreover, we only consider the baseline short-term mode of the LSTOD model, ignoring the long-term information. As Figure 4 illustrates, the standard CNN achieves its best performance, with the smallest RMSE = 2.64 on the testing data, using an 11 × 11 filter, which is still higher than the RMSE = 2.54 obtained with VACN. Specifically, RMSE increases when the receptive field grows beyond 11 × 11, since the spatial correlations among the most related OD flows (those sharing a common origin or destination node) are smoothed out as the filter size increases ((8 × 2 − 1)/64 > (14 × 2 − 1)/196).
This experiment shows that treating the dynamic demand matrix as an image and applying standard CNN filters does not capture enough spatial correlation among related OD flows, because their topological connections, from the perspective of graphs, are ignored. Batch normalization is used in the VACN component. The batch size in our experiments was set to 10, corresponding to 10 randomly sampled timestamps and all 50² OD flows in each snapshot. The initial learning rate is set to 10^{−4} with a decay rate of 10^{−6}. We use early stopping for all deep-learning-based methods: training terminates when the RMSE on the validation set has not improved for 10 successive epochs. The maximal number of epochs allowed is 100. In this section, we explore how some important hyperparameters of the input OD flow data, for example p_1 and p_2, affect the performance of our LSTOD model. Figure 6(b) compares the RMSE on the testing data of the LSTOD model under different data settings, with varied combinations of the short-term sequence length p_1 and the long-term day-level sequence length p_2. The best performance, RMSE = 2.41, is achieved at (p_1, p_2) = (9, 5). Specifically, settings with different p_1 values under p_2 = 5 consistently outperform those under p_2 = 7, which may indicate that the shift can usually be captured within a short time range, while a longer time sequence may smooth out its significance. Table 4 provides the detailed prediction results for each data setting.
We propose a purely convolutional model with an attention mechanism to predict spatial-temporal origin-destination flows.
1,487
scitldr
Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples. However, adversarially trained models often lack adversarially robust generalization on unseen testing data. Recent works show that adversarially trained models are more biased towards global structure features. Instead, in this work, we investigate the relationship between the generalization of adversarial training and robust local features, as robust local features generalize well to unseen shape variation. To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features on normal adversarial examples. We then propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training on normal adversarial examples. To demonstrate the generality of our argument, we implement RLFAT in current state-of-the-art adversarial training frameworks. Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training. Additionally, we demonstrate that our models capture more local features of the object in the images, aligning better with human perception. Deep learning has achieved remarkable performance breakthroughs on various challenging benchmarks in machine learning fields, such as image classification and speech recognition. However, recent studies have revealed that deep neural network models are strikingly susceptible to adversarial examples, in which small perturbations around the input are sufficient to mislead the predictions of the target model. Moreover, such perturbations are almost imperceptible to humans and often transfer across diverse models to achieve black-box attacks. Though the emergence of adversarial examples has received significant attention and led to various defense approaches for developing robust models, many proposed defense methods provide few benefits for true robustness and instead mask the gradients on which most attacks rely. Currently, one of the best techniques to defend against adversarial attacks is adversarial training, which improves adversarial robustness by injecting adversarial examples into the training data. Among the substantial body of work on adversarial training, there still remains a big robust generalization gap between the training data and the testing data: the robustness of adversarial training fails to generalize to unseen testing data. Recent works further show that adversarially trained models capture more global structure features, while normally trained models are more biased towards local features. Intuitively, global structure features tend to be robust against adversarial perturbations but hard to generalize to unseen shape variations; local features, in contrast, generalize well to unseen shape variations but are hard to make robust against adversarial perturbation. This naturally raises an intriguing question for adversarial training: is it possible to learn robust local features, which have better adversarially robust generalization and better standard generalization?
To address this question, we investigate the relationship between the generalization of adversarial training and robust local features, and advocate for learning robust local features for adversarial training. Our main contributions are as follows: • To our knowledge, this is the first work that sheds light on the relationship between adversarial training and robust local features. Specifically, we develop a Random Block Shuffle (RBS) transformation to study this relationship by breaking up the global structure features on normal adversarial examples. • We propose a novel method called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features, and then transfers the information of robust local features into the training on normal adversarial examples. • To demonstrate the generality of our argument, we implement RLFAT in two current state-of-the-art adversarial training frameworks, PGD Adversarial Training (PGDAT) and TRADES (Zhang et al., 2019a). Empirical results show consistent and substantial improvements in both adversarial robustness and standard accuracy on several standard datasets. Moreover, the salience maps of our models on images tend to align better with human perception. In this section, we introduce some notation and briefly describe current advanced methods for adversarial attacks and adversarial training. Let F(x) be a probabilistic classifier based on a neural network with logits function f(x) and probability distribution p_F(·|x). Let L(F; x, y) be the cross-entropy loss for image classification. The goal of the adversary is to find an adversarial example x′ ∈ B_p(x) := {x′ : ‖x′ − x‖_p ≤ ε} within the ℓ_p-norm-bounded perturbations, where ε denotes the magnitude of the perturbations. In this paper, we focus on p = ∞ to align with previous works. Projected Gradient Descent. Projected Gradient Descent (PGD) is a stronger iterative variant of the Fast Gradient Sign Method (FGSM), which iteratively solves the optimization problem max_{x′: ‖x′−x‖_∞ < ε} L(F; x′, y) with step size α: x^{t+1} = Π_{B_∞(x)}(x^t + α · sign(∇_{x^t} L(F; x^t, y))), where x^0 is drawn from U(B_∞(x)), U denotes the uniform distribution, and Π_{B_∞(x)} indicates the projection onto the set B_∞(x). Carlini-Wagner attack. The Carlini-Wagner attack (CW) (2017b) is a sophisticated method to directly solve for the adversarial example x_adv by using an auxiliary variable w, with x_adv = ½(tanh(w) + 1). The objective function to optimize w takes the form min_w ‖x_adv − x‖²₂ + c · g(x_adv), where g(x′) = max(Z(x′)_y − max_{i≠y} Z(x′)_i, −k) on the logits Z. The constant k controls the confidence gap between the adversarial class and the true class. N attack. N attack is a derivative-free black-box adversarial attack that breaks many defense methods based on gradient masking. The basic idea is to learn a probability density distribution over a small region centered around the clean input, such that a sample drawn from this distribution is likely to be an adversarial example. Despite a wide range of defense methods, most previous defenses have been broken, revealing that adversarial training remains one of the best defense methods. The basic idea of adversarial training is to solve the min-max optimization problem min_F E_{(x,y)} [max_{x′ ∈ B_∞(x)} L(F; x′, y)]. Here we introduce two current state-of-the-art adversarial training frameworks. PGD adversarial training. PGD Adversarial Training (PGDAT) leverages the PGD attack to generate adversarial examples and trains only on the adversarial examples. The objective function is L_PGDAT(F; x, y) = L(F; x^PGD, y), where x^PGD is obtained via the PGD attack on the cross-entropy L(F; x, y). Zhang et al.
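A minimal sketch of the ℓ∞ PGD attack described above follows, assuming a TensorFlow classifier that returns logits; the step size and ε mirror the values reported later in the appendix, and the [0, 1] pixel clipping is an assumption.

```python
import tensorflow as tf

def pgd_attack(model, x, y, eps=0.03, alpha=0.0075, steps=10):
    """L-infinity PGD: random start in the eps-ball, then iterate
    signed gradient ascent steps and project back onto the ball."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    x_adv = x + tf.random.uniform(tf.shape(x), -eps, eps)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        x_adv = x_adv + alpha * tf.sign(grad)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)  # project onto B_inf(x)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv
```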
(2019a) propose TRADES to explicitly trade adversarial robustness off against standard accuracy by optimizing a regularized surrogate loss of the form L(F; x, y) + λ · KL(p_F(·|x) ‖ p_F(·|x^KL)), where x^KL maximizes the KL term within B_∞(x) and λ is a hyper-parameter controlling the trade-off between adversarial robustness and standard accuracy. Unlike adversarially trained models, normally trained models are more biased towards local features but vulnerable to adversarial examples. This indicates that, in contrast to global structure features, local features seem to be better generalized but less robust against adversarial perturbation. Based on this observation, in this work we focus on learning robust local features during adversarial training, and propose a novel form of adversarial training called RLFAT that learns the robust local features and transfers them into the training on normal adversarial examples. In this way, our adversarially trained models not only yield strong robustness against adversarial examples but also show great generalization on unseen testing data. It is known that adversarial training tends to capture global structure features so as to increase invariance against adversarial perturbations. To advocate for the learning of robust local features during adversarial training, we propose a simple and straightforward image transformation called Random Block Shuffle (RBS) to break up the global structure features of the images while retaining the local features. Specifically, for an input image, we randomly split the image into k blocks horizontally and randomly shuffle the blocks, and then we perform the same split-shuffle operation vertically on the resulting image. As illustrated in Figure 1, the RBS transformation destroys the global structure features of the images to some extent while retaining their local features. We then apply the RBS transformation in adversarial training. Different from normal adversarial training, we use the RBS-transformed adversarial examples rather than the normal adversarial examples as the adversarial information, to encourage the models to learn robust local features. Note that we only use the RBS transformation as a tool to learn robust local features during adversarial training and do not use it in the inference phase. We refer to this form of adversarial training as RBS Adversarial Training (RBSAT). To demonstrate the generality of our argument, we consider two current state-of-the-art adversarial training frameworks, PGD Adversarial Training (PGDAT) and TRADES (Zhang et al., 2019a), to demonstrate the effectiveness of the robust local features. As the alternative to the objective function of PGDAT, we use the loss L(F; RBS(x^PGD), y), where RBS(·) denotes the RBS transformation and x^PGD is obtained via the PGD attack on the cross-entropy L(F; x, y). Similarly, as the alternative to the objective function of TRADES, we replace the inputs of its loss terms with their RBS-transformed counterparts. Since the type of input images differs between the training phase and the inference phase (RBS-transformed images for training, versus original images for inference), we consider transferring the knowledge of the robust local features learned by RBSAT to the normal adversarial examples. Specifically, we present a knowledge transfer scheme called Robust Local Feature Transfer (RLFT). The goal of RLFT is to learn a representation that minimizes the feature shift between the normal adversarial examples and the RBS-transformed adversarial examples.
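The RBS transformation itself is easy to sketch in NumPy; block boundaries under np.array_split for non-divisible sizes are an implementation assumption.

```python
import numpy as np

def rbs_transform(image, k=2, rng=np.random):
    """Random Block Shuffle: split into k blocks horizontally and
    shuffle them, then do the same split-shuffle vertically on the
    resulting image.

    image: array of shape (H, W, C).
    """
    rows = np.array_split(image, k, axis=0)          # k horizontal blocks
    image = np.concatenate([rows[i] for i in rng.permutation(k)], axis=0)
    cols = np.array_split(image, k, axis=1)          # k vertical blocks
    image = np.concatenate([cols[i] for i in rng.permutation(k)], axis=1)
    return image
```

With k = 2 (the value used in the experiments below), each image is cut into a 2 × 2 grid whose rows and columns are independently permuted, which destroys global layout while leaving each block's local texture intact.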
In particular, we apply RLFT to the logit layer for high-level feature alignment. Formally, the robust local feature transfer terms for PGDAT and TRADES take the form ℓ_RLFT(F; x, y) = ‖f(x_adv) − f(RBS(x_adv))‖²₂, with x_adv being x^PGD and x^KL respectively, where f(·) denotes the mapping of the logit layer and ‖·‖²₂ denotes the squared Euclidean norm. Since the quality of the robust local feature transfer depends on the quality of the robust local features learned by RBSAT, we integrate RBSAT and RLFT into an end-to-end training framework, which we refer to as RLFAT (Robust Local Features for Adversarial Training). The general training process of RLFAT is summarized in Algorithm 1. Note that the computational cost of the RBS transformation (line 7) is negligible relative to the total computational cost.

Algorithm 1 Robust Local Features for Adversarial Training (RLFAT).
1: Randomly initialize network F(x);
2: Number of iterations t ← 0;
3: repeat
4: t ← t + 1;
5: Read a minibatch of data {x_1, ..., x_m} from the training set;
6: Generate the normal adversarial examples {x_1^adv, ..., x_m^adv};
7: Apply the RBS transformation to obtain {RBS(x_1^adv), ..., RBS(x_m^adv)};
8: Calculate the overall RLFAT loss;
9: Update the parameters of network F through back propagation;
10: until the training converges.

We implement RLFAT in two current state-of-the-art adversarial training frameworks, PGDAT and TRADES, and obtain new objective functions to learn robust and well-generalized feature representations, which we call RLFAT_P and RLFAT_T: the respective RBSAT loss plus η times the corresponding RLFT term, where η is a hyper-parameter balancing the two terms. In this section, to validate the effectiveness of RLFAT, we empirically evaluate our two implementations, denoted RLFAT_P and RLFAT_T, and show that our models achieve significant improvements in both robust accuracy and standard accuracy on standard benchmark datasets, which provides strong support for our main hypothesis. Codes are available online. Baselines. Since most previous defense methods provide little benefit in terms of true adversarial robustness, we compare the proposed methods with the state-of-the-art adversarial training defenses PGD Adversarial Training (PGDAT) and TRADES (Zhang et al., 2019a). Adversarial setting. We consider two attack settings with a bounded ℓ_∞ norm: the white-box setting and the black-box setting. For the white-box setting, we consider the strongest existing white-box attacks: Projected Gradient Descent (PGD) and the Carlini-Wagner attack (CW) (2017b). For the black-box setting, we perform the powerful black-box N attack on a sample of 1,500 test inputs, as it is time-consuming. Datasets. We compare the proposed methods with the baselines on the widely used benchmark datasets STL-10, CIFAR-10 and CIFAR-100. Hyper-parameters. To avoid concentrating too much on optimizing the hyper-parameters, for all datasets we set the hyper-parameter λ in TRADES to 6, the hyper-parameter η in RLFAT_P to 0.5, and the hyper-parameter η in RLFAT_T to 1. For all our trained models, we set the hyper-parameter k of the RBS transformation to 2. More details about the hyper-parameters are provided in Appendix A. We first validate our main hypothesis: for adversarial training, is it possible to learn robust local features that have better adversarially robust generalization and better standard generalization? In Table 1, we compare the accuracy of RLFAT_P and RLFAT_T with the competing baselines on three standard datasets. The proposed methods lead to consistent and significant improvements in adversarial robustness as well as standard accuracy over the baseline models on all datasets.
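Putting the pieces together, here is a hedged sketch of the RLFAT_P objective, reusing the `pgd_attack` and `rbs_transform` sketches above; `rbs_transform_batch` is an assumed batched helper, and the exact batch reduction of the RLFT term is an assumption.

```python
import tensorflow as tf

def rlfat_p_loss(model, x, y, eta=0.5):
    """RBSAT cross-entropy on RBS-transformed adversarial examples,
    plus eta times the RLFT term: the squared L2 distance between the
    logits of the normal and the RBS-transformed adversarial examples."""
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    x_adv = pgd_attack(model, x, y)
    x_rbs = rbs_transform_batch(x_adv)  # hypothetical batched RBS helper
    logits_adv = model(x_adv)
    logits_rbs = model(x_rbs)
    rbsat = ce(y, logits_rbs)
    rlft = tf.reduce_mean(
        tf.reduce_sum(tf.square(logits_adv - logits_rbs), axis=-1))
    return rbsat + eta * rlft
```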
With the robust local features, RLFAT_T achieves better adversarially robust generalization and better standard generalization than TRADES. RLFAT_P behaves similarly, showing a significant improvement over PGDAT in robustness against all attacks and in standard accuracy. The results demonstrate that robust local features can significantly improve both the adversarially robust generalization and the standard generalization over the state-of-the-art adversarial training frameworks, strongly supporting our hypothesis: for adversarial training, it is possible to learn robust local features that have better robust and standard generalization. Motivation. Prior work found that the effectiveness of adversarial training is highly sensitive to the "semantic-loss" shift of the test data distribution, such as gamma mapping. To further investigate the performance of the proposed methods, we quantify the smoothness of the models under the distribution shifts of brightness perturbation and gamma mapping. Loss sensitivity on brightness perturbation. To quantify the smoothness of models under shifts in brightness, we propose to estimate the Lipschitz continuity of F using the gradients of the loss function with respect to the brightness-perturbed testing data. We adjust the brightness factor of images in the HSV (hue, saturation, value) color space, denoted x_b = V(x, α), where α is the magnitude of the brightness adjustment. The sensitivity is estimated as ℓ_F(α) = (1/m) Σ_i ‖∇_{x_b} L(F; V(x_i, α), y_i)‖₂; the lower the value of ℓ_F(α), the smoother the loss function of the model. Loss sensitivity on gamma mapping. Gamma mapping is a nonlinear elementwise operation used to adjust the exposure of images by applying x̂(γ) = x^γ to the original image x. Similarly, we approximate the loss sensitivity under gamma mapping using the gradients of the loss function with respect to the gamma-mapped testing data; a smaller value indicates a smoother loss function. Sensitivity analysis. The results for the loss sensitivity of the adversarially trained models under brightness perturbation are reported in Table 2a, where we apply various magnitudes of brightness adjustment to each test input. In Table 2b, we report the loss sensitivity of the adversarially trained models under various gamma mappings. We observe that RLFAT_T yields the smoothest model under the distribution shifts on all three datasets. The results suggest that, compared to PGDAT and TRADES, both RLFAT_P and RLFAT_T exhibit lower loss gradients under the shifted data distributions, which we attribute directly to the robust local features. To further understand the performance gains from the robust local features, we perform ablation studies to dissect the impact of the two components (robust local feature learning and robust local feature transfer). As shown in Figure 2, we conduct additional experiments for the ablation studies of RLFAT_P and RLFAT_T on STL-10, CIFAR-10 and CIFAR-100, where we report the standard accuracy on clean data and the average robust accuracy over all attacks for each model. We first analyze whether, compared to adversarial training on normal adversarial examples, adversarial training on RBS-transformed adversarial examples produces better generalization and more robust features.
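A minimal sketch of the gamma-mapping loss sensitivity follows; the text only states that gradients with respect to the mapped test data are used, so the averaging over a batch L2 norm is an assumption mirroring the brightness case above.

```python
import tensorflow as tf

def gamma_loss_sensitivity(model, x, y, gamma=1.2):
    """Average gradient norm of the loss w.r.t. the gamma-mapped
    inputs x^gamma; smaller values indicate a smoother model."""
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    x_g = tf.pow(x, gamma)                # elementwise gamma mapping
    with tf.GradientTape() as tape:
        tape.watch(x_g)
        loss = ce(y, model(x_g))
    grad = tape.gradient(loss, x_g)
    flat = tf.reshape(grad, [tf.shape(x)[0], -1])
    return tf.reduce_mean(tf.norm(flat, axis=1))  # mean L2 norm per image
```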
As shown in Figure 2, we observe that Robust Local Feature Learning (RLFL) yields stable improvements in both standard accuracy and robust accuracy for RLFAT_P and RLFAT_T, providing strong support for our hypothesis. Does robust local feature transfer help? We further add Robust Local Feature Transfer (RLFT), the second term of the RLFAT objective, to obtain the overall loss of RLFAT. The robust accuracy further increases on all datasets for RLFAT_P and RLFAT_T. The standard accuracy also further increases, except for RLFAT_P on CIFAR-100, where it is nevertheless clearly higher than for the baseline model PGDAT. This indicates that transferring the robust local features into the training on normal adversarial examples does help promote both standard accuracy and robust accuracy in most cases. We also investigate which features of the input images the models mostly focus on. Following the work of Smilkov et al., we generate salience maps using SmoothGrad on the STL-10 dataset. The key idea of SmoothGrad is to average the gradients of the class activation with respect to noisy copies of an input image. As illustrated in Figure 3, all the adversarially trained models basically capture the global structure features of the object in the images. Compared to PGDAT and TRADES, both RLFAT_P and RLFAT_T capture more local feature information of the object, aligning better with human perception. Note that the images are correctly classified by all these models. For more visualization results, see Appendix B. In contrast to existing adversarially trained models, which are more biased towards the global structure features of the images, in this work we hypothesize that robust local features can improve the generalization of adversarial training. To validate this hypothesis, we propose a new adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) and implement it in the current state-of-the-art adversarial training frameworks PGDAT and TRADES. We provide strong empirical support for our hypothesis and show that the proposed methods based on RLFAT not only yield better standard generalization but also promote adversarially robust generalization. Furthermore, we show that the salience maps of our models on images tend to align better with human perception, uncovering an unexpected benefit of robust local features for adversarial training. Our findings open a new avenue for improving adversarial training, and there is still a lot to explore along it. First, is it possible to explicitly disentangle the robust local features from the perspective of feature disentanglement? What is the best way to leverage the robust local features? Second, from a methodological standpoint, the discovered relationship may also serve as inspiration for new adversarial defenses, in which not only the robust local features but also the global information is taken into account, as the global information is useful for some tasks. These questions are worth investigating in future work, and we hope that our observations on the benefit of robust local features will inspire further development. Here we give the details of the training hyper-parameters and the attack hyper-parameters used in the experiments. Training hyper-parameters. For all training jobs, we use the Adam optimizer with a learning rate of 0.001 and a batch size of 32. For CIFAR-10 and CIFAR-100, we run 79,800 training steps. For STL-10, we run 29,700 training steps.
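The SmoothGrad procedure used for the salience maps is simple enough to sketch; the noise level and sample count below are placeholders, not the paper's settings.

```python
import tensorflow as tf

def smoothgrad(model, x, label, n=50, sigma=0.1):
    """SmoothGrad sketch: average the gradient of the class activation
    over n noisy copies of the input (Smilkov et al.)."""
    grads = tf.zeros_like(x)
    for _ in range(n):
        noisy = x + tf.random.normal(tf.shape(x), stddev=sigma)
        with tf.GradientTape() as tape:
            tape.watch(noisy)
            activation = model(noisy)[:, label]  # logit of the target class
        grads += tape.gradient(activation, noisy)
    return grads / n  # salience map; typically visualized as its magnitude
```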
For STL-10 and CIFAR-100, the adversarial examples are generated with step size 0.0075, 7 iterations, and ε = 0.03. For CIFAR-10, the adversarial examples are generated with step size 0.0075, 10 iterations, and ε = 0.03. Attack hyper-parameters. For the PGD attack, we use the same attack parameters as in the training process. For the CW attack, we use PGD to minimize its loss function with a high confidence parameter (k = 50), following prior work. For the N attack, we set the maximum number of optimization iterations to T = 200, the sample size to b = 300, the variance of the isotropic Gaussian to σ² = 0.01, and the learning rate to η = 0.008. We provide more salience maps of the adversarially trained models on sampled images in Figure 4. Figure 4: More salience maps of the four models. For each group of images, we show the original image followed by the salience maps of the four models (PGDAT, TRADES, RLFAT_P, RLFAT_T) in sequence.
We propose a new adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) that significantly improves both the adversarially robust generalization and the standard generalization.
1,488
scitldr
The verification of planning domain models is crucial to ensure the safety, integrity and correctness of planning-based automated systems. This task is usually performed using model checking techniques. However, directly applying model checkers to verify planning domain models can result in false positives, i.e. counterexamples that are unreachable by a sound planner when using the domain under verification during a planning task. In this paper, we discuss the downside of unconstrained planning domain model verification. We then propose a fail-safe practice for designing planning domain models that can inherently guarantee the safety of the produced plans in case of undetected errors in domain models. In addition, we demonstrate how model checkers, as well as state trajectory constraints planning techniques, should be used to verify planning domain models so that unreachable counterexamples are not returned. Planning and task scheduling techniques are increasingly applied to real-world problems such as activity sequencing, constraint solving and resource management. These processes are implemented in planning-based automated systems which are already used in space missions BID14 BID3 BID0, search and rescue BID12, logistics BID19 and many other domains. Since the failure of such systems could have catastrophic consequences, these applications are regarded as safety-critical. Therefore, verification methods that are robust, trustworthy and systematic are crucial to gain confidence in the safety, integrity and correctness of these systems. The literature is rich with studies on the verification of planning systems. For instance, BID17 carried out scenario-based testing and model-based validation of the remote agent that controlled the Deep Space 1 mission. Another example is the verification of the safety of the autonomous science agent design that was deployed on the Earth Orbiter 1 spacecraft. A typical planning system consists of a planning domain model, a planning problem, a planner, a plan, an executive, and a monitor. Planners take as input a domain model, which describes application-specific states and actions, and a problem, which specifies the goal and the initial state. From these inputs, a sequence of actions that can achieve the goal starting from the initial state is returned as the plan. The plan is then executed by an executive to change the world state to match the desired goal. Our research focuses on the verification of planning domain models wrt. safety properties. Domain models provide the foundations for planning. They describe real-world actions by capturing their pre-conditions and effects. Due to modelling errors, a domain model might be inconsistent, incomplete, or inaccurate. This could cause the planner to fail to find a plan or to generate unrealistic plans that will fail to execute in the real world. Moreover, erroneous domain models could lead planners to produce unsafe plans that, when executed, could cause catastrophic consequences in the real world. This paper addresses the fact that state-of-the-art verification methods for planning domain models are vulnerable to false positive counterexamples. In particular, unconstrained verification tasks might return counterexamples that are unreachable by planners. Such counterexamples can mislead designers to unnecessarily restrict domain models, thereby potentially blocking valid and possibly necessary behaviours.
In addition, false positive counterexamples can lead verification engineers to overlook counterexamples that are reachable by planners. To overcome these deficiencies, we propose to employ planning goals as constraints during verification. Thus, we introduce goal-constrained planning domain model verification, a novel concept that eliminates unreachable counterexamples per se. We formally prove that goal-constrained planning domain model verification of safety properties is guaranteed to return reachable counterexamples if and only if any exist. We also demonstrate two different ways to perform goal-constrained planning domain model verification, one using model checkers and the other using state trajectory constraints planning techniques. To the best of our knowledge, this work is the first to (i) recommend a fail-safe planning domain model design practice; (ii) introduce the concept of goal-constrained planning domain model verification; and (iii) demonstrate how model checkers, as well as state trajectory constraints planning techniques, can be used to perform goal-constrained planning domain model verification. The rest of this paper is organised as follows. First, Section 2 contrasts the concepts presented here with related work. Second, Section 3 discusses the problem of unreachable counterexamples in planning domain model verification. Third, Section 4 proposes a design practice for planning domain models that can inherently guarantee domain model safety even in the case of undetected modelling errors. A verification concept for planning domain models that avoids returning unreachable counterexamples is presented in Section 5. Then, Section 6 discusses the implementation of this concept on the cave diving planning domain using Spin and MIPS-XXL. Finally, Section 7 concludes the paper and suggests future work. Closely related, but different, is the work by BID1. Their main objective is to treat verification as a planning task, whereas our aim is to demonstrate how model checkers and planners can be used for domain model verification. They proposed to perform system model verification using classical planners. To do this, they first translated the model of the system under verification into a planning domain model. Then, the negation of the safety property to be established is used as the goal for the planner, which is consulted to find a plan that acts as a counterexample for the given property. In our study, because our aim is to verify domain models against a given property with respect to a specific goal and initial state, we use state trajectory constraints to restrict counterexamples to plans that can achieve the planning goal while falsifying the safety property. Unlike BID1, where the negation of the safety property is used as the goal, in our verification-as-planning method the negation of the safety property is represented as a state trajectory constraint and the goal is the given planning goal. BID16 also apply verification as planning to verify planning domain models, starting from LTL specifications. This work fundamentally differs from ours. BID16 focused on translating specification properties into trap formulas, which can help in testing the impact of individual atomic propositions on the validity of the overall verified property. However, their method does not consider the interaction between property testing and the original planning goal.
Note that finding a planning constraint to exercise a specific atomic proposition is not enough to ensure that the constraint itself would be exercised during the planning process. For example, a planning goal might be achieved through a state trajectory that does not exercise the hard constraint used to represent the tested property. Our work is mainly based on investigating this interaction. Therefore, we use state trajectory constraints to guarantee the property is tested while achieving the planning goal. Additionally, their work, just like other similar methods, requires a complete planner to give deterministic results, whereas our work, as discussed in Section 5, guarantees definite verification results without this requirement. (Goldman, Kuter, and Schneider 2012) also used classical planners for planning systems verification, but they examined verifying plans rather than domain models. They proposed an approach that uses classical planners to find counterexamples for a given planning problem and plan instance. Their work and ours are related in that both suggest performing planning verification for a specific planning problem rather than attempting ungrounded verification of a planning system. However, their work is limited to the verification of single plan instances, whereas our method verifies all potential plans that can be generated from a domain model for a specific goal and initial state. Among others, BID15 BID13 BID18 BID10 BID2 used model checkers to verify planning domain models. They translated the respective domain models into the input language of the selected model checker, which is then applied to verify the domain model wrt. a given specification property. Similarly, we also propose a method to verify domain models using model checkers. However, our method differs from the others in two aspects: first, in the way we define the planning domain model verification problem, and, second, in the way we use model checkers to perform verification. As explained in Section 5, we consider the verification of planning domain models to be constrained by a specific goal and initial state pair. In contrast, previous studies perform ungrounded verification of domain models, i.e. leaving the goal and initial state open. As discussed in Section 3, the ungrounded goal and initial state may cause the model checker to return counterexamples that are unreachable when a planner uses the domain under verification (DUV). These unreachable counterexamples can mislead the designers to over-restrict the DUV during the debugging process. On the other hand, when the goal and initial state are constrained during verification, we show that the returned counterexamples, if any, are guaranteed to be reachable by any sound planner. The second difference is that, after the planning domain model is translated into the model checker's input language, we augment the model transitions, introducing the negation of the goal as a new constraint that forces the model checker to terminate once the goal is reached. This modification prevents the model checker from returning counterexamples that falsify the given property after satisfying the goal; these are unreachable by planners. Planning domain model verification aims to demonstrate that any produced plan satisfies a set of properties. To achieve this, formal planning domain model verification methods leave the planning goal open. This we define as unconstrained verification of planning domain models, i.e. the verification is expected to hold for any potential goal.
Unconstrained verification searches the domain model for a sequence of actions that can falsify the given property, regardless of any other conditions. In particular, whether or not a planner would consider this sequence to be a plan is not taken into account. This is a critical oversight, because, when the domain model is used to solve a specific planning problem, the sequence of actions that constitutes such a counterexample might, in fact, be "pruned away" by the planner if it does not satisfy the planning goal. Therefore, for a specific planning problem, counterexamples that do not achieve the planning goal are deemed unreachable counterexamples from the planner's perspective. To illustrate this, we use a modified version of the microwave oven example, introduced in BID5, as presented in FIG0. A safety requirement would be that the domain model does not allow the generation of erroneous plans, in LTL p_0 = G(¬Error), where G is the LTL globally operator. Unconstrained verification will return StartOven as a counterexample, which, when applied to s_0, produces s_2, an error state. However, when this model is used to find a plan that achieves the goal (g = Heat), this sequence will not be considered by the planner, as it does not lead to a state that achieves the goal. Moreover, we observe that the valid plan CloseDoor, StartOven does satisfy the property p_0, i.e. it is error-free. Thus, the sequence StartOven from s_0 to s_2 is an unreachable counterexample for the planner: it does not achieve the goal, nor is it part of a valid plan towards the goal. Counterexamples that are unreachable by planners exist in the literature. For example, BID18 used the Spin model checker to verify whether a planning domain model would permit an automated planning system to select plans that would waste resources and therefore not meet the mission's science goals. To express this requirement, they used "five data-producing activities must be scheduled by any returned plan" as the property for model checking. The automated system has two data-producing and two data-consuming activities, and a buffer that can hold four data blocks. The goal of the planner is to schedule five data-producing activity instances. The counterexample returned by the model checker represented a plan with the two data-consuming activities scheduled before four data-producing activities. This plan did not contain a fifth data-producing task, because the data buffer was full after four data-producing activities, and the only two data-consuming tasks that would have cleared the buffer were scheduled at the beginning of the plan, when no data was in the buffer. Though the model checker found a counterexample to falsify the property, we argue that any sound planner would not generate such a plan, because it does not achieve the planning goal. As such, this counterexample would have been pruned during the planner's goal search and would never have been returned as a plan, i.e. it is unreachable for the planner, yet reachable by a goal-ignorant model checker. The problem with unreachable counterexamples is that they mislead the designer into unnecessarily restricting the domain model in the process of removing them. Consequently, debugging is made harder, and genuine counterexamples could potentially be introduced in the process. To overcome this, we observe that planning is performed for a specific goal and initial state.
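The microwave example can be made concrete with a tiny Python sketch. The state names beyond s_0 and s_2, and the transition from the closed-door state, are assumptions consistent with the text; the point is that goal-directed search never visits the StartOven-only sequence.

```python
# Hypothetical encoding of the microwave transition system from the text.
transitions = {
    's0': {'StartOven': 's2',   # door open: starting the oven is an error
           'CloseDoor': 's1'},
    's1': {'StartOven': 's3'},  # s3 is assumed to satisfy the goal Heat
    's2': {}, 's3': {},
}
goal = 's3'  # Heat

def plans_to_goal(state, path=()):
    """Enumerate action sequences reaching the goal -- what a planner keeps."""
    if state == goal:
        yield path
        return
    for action, nxt in transitions[state].items():
        yield from plans_to_goal(nxt, path + (action,))

print(list(plans_to_goal('s0')))  # [('CloseDoor', 'StartOven')]
# The sequence ('StartOven',) reaches the error state s2 but never the goal,
# so no sound planner returns it: it is an unreachable counterexample.
```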
To exploit this observation for domain model verification, we propose to use the goal and initial state given to the planner as constraints, to ensure that the counterexamples returned by a model checker, or by other tools used in this context, falsify the given property while also achieving the planning goal. Thus, instead of performing unconstrained domain model verification, we propose goal-constrained verification of planning domain models. The details of this method are further explained in Section 5. Next, we describe an inherently safe domain model design practice which can help to make domain models safer. The ultimate objective of planning domain model verification is to ensure that the plans produced by the verified domains satisfy a given specification. An alternative and guaranteed way of achieving this goal is to extract plan constraints from the specification and then include them in the domain model. A sound planner using this constrained domain model cannot produce any plan that violates these constraints. This idea was first noticed in 2005 BID18, but was dismissed because it was not possible to describe overall plan constraints using PDDL 2.2. However, BID7 proposed an extension to the PDDL 2.2 language that allows the expression of plan state trajectory constraints. The extended language, called PDDL 3.0, was proposed for the fifth international planning competition (IPC-5). BID18 provided an example of a system consisting of a camera, a solid-state recorder and a radio, with the requirement that, for all plans, if an image is taken and stored, then it is eventually uplinked. With hard state trajectory constraints, this property can be expressed as sometime-after((image is taken ∧ image is stored), image is uplinked). With this constraint, any sequence of actions that does not respect this property would not be returned as a plan. Though including specification properties in the domain model as strong constraints is enough to guarantee that sound planners using the constrained domain models will produce plans that meet the specification, this method is not able to find any errors in the domain model. Instead, it merely ensures that these errors, if any, are masked and prevented from affecting any plans that could be generated using the modified domain model. As such, this method can be seen as a safety defence layer, a firewall, that prevents any potential property violation. Nevertheless, note that undetected bugs in a domain model could cause what would have been valid plans to be masked, thus unnecessarily restricting the planner. Therefore, further verification efforts are needed to reveal and rectify any underlying errors. We consider including plan constraints in the domain model to be good practice for designing inherently safe domain models. The effort of extracting formal properties from the specification and inserting them as constraints in the domain model is a small investment in return for the huge benefit of guaranteed safe plans, i.e. plans that are safe "by construction". This, together with our new concept of goal-constrained verification, as introduced in the next section, can deliver safe and error-free models.

5 Goal-constrained verification of planning domain models

Planning domain model verification covers different objectives, including the domain's correctness, completeness, robustness, effectiveness and safety.
The intent of safety verification in this context is to verify that any plan produced from the DUV will satisfy a given safety property. In other words, a domain is considered safe if it is guaranteed to only produce plans that satisfy the given safety property when used by a sound planner. This verification task can be performed using advanced search algorithms, such as model checkers or classical planners, to find a valid counterexample for the given safety property. We define a valid counterexample to be a sequence of actions that, firstly, can falsify the given safety property, secondly, can achieve the planning goal from the given initial state, and, thirdly, has no subsequence that can achieve the goal. Formally, this is defined as follows. Let the planning problem P be a tuple (D, s_0, g), where D is the domain model that describes the set of all available actions A, s_0 is the initial state and g is the desired goal. A solution π to P, a plan, is defined as a sequence of actions chosen from A, π = a_0, a_1, ..., a_n, such that π |= g, i.e. when π is applied to the initial state s_0 it yields a sequence of states S = s_0, s_1, ..., s_n whose last state s_n satisfies the planning goal g, s_n |= g. We say a plan π satisfies a property p, π |= p, if the sequence of states S generated by the plan π satisfies the property p, S |= p. Furthermore, as defined in (Ghallab, Nau, and Traverso 2004), we call a plan π a redundant plan if π contains a proper subsequence π′, π′ ⊏ π, that achieves the goal g. Definition 1: A valid counterexample for a safety property p of a planning problem is a non-redundant plan π that falsifies the safety property, π ⊭ p. Plans are required to be non-redundant in the definition of valid counterexamples in order to exclude plans that are padded with action sequences which are unnecessary for achieving the planning goal but required to falsify the given safety property. Such plans represent counterexamples that are unreachable by any sound planner when searching for a plan for the given planning problem. To ensure the returned counterexamples are valid, we constrain the verification problem with a goal and initial state, and we exclude any counterexample that is a redundant plan. More formally, the verification problem associated with planning task P is defined as the tuple V = (D, (s_0, g), p), where p is a formal safety property extracted from a given specification and required to hold over all valid paths that achieve the goal g from the initial state s_0. In this section, we introduced and formally defined the concept of goal-constrained verification of planning domain models. In the following subsections, we demonstrate how this concept can be realized using model checkers and state trajectory constraints planning techniques. Model checkers verify safety properties by searching for counterexamples that falsify those properties. In the case of planning applications, any sequence of actions that does not achieve the given goal will be pruned by any sound planner. Therefore, in the verification of planning problems, any counterexample that does not achieve the goal of the planning problem should be eliminated, on the basis that it is unreachable by the planner.
To force model checkers to only return valid counterexamples, the safety property is first negated and then joined with the planning goal in a conjunction. This conjunction is then negated and supplied to the model checker as the input property. The final property requires the model checker to find a counterexample that both falsifies the safety property and satisfies the planning goal. Note that, unlike Def. 1, this still permits sequences that falsify the property only after satisfying the goal. However, once the goal is achieved, planners terminate the search, rendering such sequences unreachable. To eliminate these sequences, the model transitions are augmented with an additional guard, representing the negation of the goal, which disables all transitions once the goal is achieved. With this modification, the model checker is forced to return counterexamples that falsify the safety property before achieving the goal, because once the goal is satisfied no further transitions are permitted. For a verification problem V = (D, (s_0, g), p), we first translate the domain model D into the model checker's input language, obtaining the model M that incorporates the initial state s_0. Then, a model checker is applied to the verification problem (M, ¬F(g)) to establish that the goal is reachable at all, i.e. ∃π. π |= F(g), (1) where F is the LTL eventually operator. The model M is modified to M′ by augmenting the guards of all transitions with the negation of the goal condition. The model checker is then applied to the verification problem (M′, p′), where p′ is defined as p′ = ¬(¬p ∧ F(g)). (2) There are two possible outcomes of this verification task. If the model checker returns a counterexample π, then π ⊭ p′, i.e. π |= ¬p ∧ F(g). (3) From the definition of the LTL eventually operator F, it follows that there is at least one sequence S that falsifies the property p, and that there is a state s_j in that sequence which satisfies the goal g, according to (2) and (3). In addition, in the sequence S, p is guaranteed to be falsified before g is satisfied, due to the modification we introduced in the model M′. Thus, the plan π is a valid counterexample for the original safety property p as per Def. 1. This proves that the DUV does not satisfy the safety property p with respect to the goal and initial state. The other potential outcome is that the model checker fails to find a counterexample; then for all plans π: π |= ¬(¬p ∧ F(g)), (4) and it follows that π |= p ∨ π ⊭ F(g). Furthermore, from (1) we know that ∃π. π |= F(g), and every plan satisfies F(g) by definition. Therefore, for all plans π: π |= p. (5) That means p is true for all possible plans, which proves that the DUV satisfies the original property with respect to the goal and initial state. Domain models can also be verified to only produce valid plans, in terms of satisfying a given property, for a specific goal and initial state pair using sound planners. This is achieved by consulting the planner over the DUV to produce a plan that satisfies the goal and the negation of the property. If the domain model permits plans that, along with achieving the goal, contradict the safety property, then an unsafe plan can be found. Thus, the returned plan is a counterexample demonstrating that the safety property does not hold. On the other hand, if the domain model does not permit the generation of plans that satisfy the negation of the safety property while achieving the goal, then the planner will fail. Thus, the property holds for any plan produced for the given goal.
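As a small illustration of the property construction, here is a hedged Python sketch that assembles the LTL formula handed to a model checker such as Spin; the `[]`/`<>` syntax is Spin's LTL notation, and the predicate names are placeholders.

```python
def spin_property(safety_p: str, goal_g: str) -> str:
    """Build the negated conjunction of (negated safety property) and
    (eventually goal), so any counterexample both falsifies p and
    achieves g, matching p' = not(not p and F(g)) above."""
    return f"!( (!({safety_p})) && (<> ({goal_g})) )"

# e.g. for the microwave example:
# spin_property("[] !error", "heat")
# -> "!( (!([] !error)) && (<> (heat)) )"
```

Recall that this property alone still admits property violations after the goal; the transition-guard augmentation of M′ is what rules those out.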
A benefit of goal-constrained planning domain verification is that, where a planner is used to perform the verification task, there is no need for this planner to be complete, as long as the planner used for verification is also the planner that will be used during the planning task. This is because any counterexample not found by that planner during verification will then also not be reached by the same planner during the planning task. The following subsection describes how state trajectory constraints can be used to verify planning domain models for a specific goal and initial state.

Goal-constrained planning domain verification using planning techniques with state trajectory constraints

The PDDL 3.0 state trajectory constraints, first mentioned in Section 4, can be used to perform planning domain model verification. First, the negation of the given property is expressed using PDDL 3.0 modal operators and embedded in the original domain model as a state trajectory constraint. The modified model is then used by a planner, as described earlier, to perform the verification. For a verification problem V = (D, (s_0, g), p), we first apply a planner to the planning problem P = (D, s_0, g) to establish that there is a plan that solves P: ∃π. π |= g. (6) Then, the safety property p is negated and expressed in terms of PDDL 3.0 modal operators as shown in BID8. The result is added as a state trajectory constraint to the original domain model. Using the algorithm proposed in (Edelkamp, Jabbar, and Nazih 2006), the new model is transformed into a PDDL 2-compatible version. This is performed by first translating the state trajectory constraint into a non-deterministic finite state automaton (NFA), which can monitor property violations, and then inserting additional predicates and action conditional effects into the model to simulate and observe the behaviour of the automaton that represents the constraint. This yields a new planning problem P′ = (D′, s_0′, g′), where D′, s_0′ and g′ are modified instances of D, s_0 and g, supplemented with the additional predicates and action conditional effects of the automaton that represents the introduced constraint. Then, a planner is applied to P′, with two possible outcomes. If the planner finds a plan, then ∃π. π |= g′. (7) Since the satisfaction of g′ implies both the satisfaction of the original goal g at the last state of the sequence S and the satisfaction of the state trajectory constraint by the sequence S, (7) can be rewritten as π |= g ∧ S |= ¬p. (8) Furthermore, from (8) it follows that π ⊭ p, confirming that there is at least one plan that achieves the goal while not respecting the safety property. Therefore, this plan is a valid counterexample for that property as per Def. 1. Hence, the DUV does not satisfy the property wrt. the planning goal and initial state. Alternatively, if the planner fails to find a plan, then, in contrast to (7), we have ∀π. π ⊭ g′. (9) Given (6), (9) can be simplified to ∀π. (π |= g ⟹ π |= p). (10) Hence, all plans satisfy the safety property. Therefore, the property holds for the planning domain model wrt. the given goal and initial state. In this section, we discuss how goal-constrained planning domain verification can verify safety properties using both the Spin model checker BID11 and the MIPS-XXL planner BID6. We perform constrained and unconstrained verification tasks to show how, unlike the latter, our method does not return unreachable counterexamples.
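The planner-based procedure can be summarized as a short driver sketch. All helpers here (`run_planner`, `embed_constraint`, `compile_pddl3_to_pddl2`) are hypothetical stand-ins for the PDDL 3.0 tooling described above, not real library calls.

```python
def goal_constrained_verify(domain, s0, goal, safety_p, run_planner):
    """Verification-as-planning sketch using state trajectory constraints."""
    # (6) sanity check: the goal must be achievable at all
    if run_planner(domain, s0, goal) is None:
        raise ValueError("goal unreachable; verification is vacuous")
    # Embed NOT(safety_p) as a hard trajectory constraint and compile
    # the PDDL 3.0 model down to a PDDL 2-compatible one.
    domain_c, s0_c, goal_c = compile_pddl3_to_pddl2(
        embed_constraint(domain, f"(not {safety_p})"), s0, goal)
    # A plan for the constrained problem is a valid counterexample (7)-(8);
    # failure means every goal-achieving plan satisfies the property (9)-(10).
    plan = run_planner(domain_c, s0_c, goal_c)
    if plan is None:
        return "SAFE: every plan for (s0, goal) satisfies the property"
    return f"UNSAFE: counterexample plan {plan}"
```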
As an example, we consider the classical cave diving planning domain taken from the Satisficing Track of the IPC-2014. The problem consists of an underwater cave system with a finite number of partially interconnected locations. Divers can enter the cave from a specific location, the entrance, and swim from one location to an adjacent connected one. They can hold up to four oxygen tanks and they consume one for every swim and take-photo action. Only one diver can be in the cave at a time. Finally, divers have to perform a decompression manoeuvre to go to the surface, and this can be done only at the entrance. Additionally, divers can drop tanks in or take tanks from any location if they hold at least one tank or there is one tank available at the location, respectively. The planning goals of this domain, as provided in the problem files of the IPC-2014, consist of two parts. The first part dictates the required underwater location of which the photo is to be taken (we call it the mission target), and the second part mandates that the divers should return to the surface after completing the mission (we call it the safety target). A critical safety property is that divers should not drown, i.e. they should not be in an underwater location, other than the entrance, where neither the divers nor the location has at least one full oxygen tank. To enable the planner and the model checkers to explore the entire state space, we simplified this domain by ignoring the "precludes" condition from the original domain, as it does not affect the verification of the drowning safety property. Consequently, we considered only one diver and we modified some actions to enable the diver to go back into the water after a dive. These modifications are further explained in the commented simplified planning domain model PDDL file, which is provided along with the tasks' problem PDDL and Promela files online [address hidden for blind review]. First, we translated the planning domain model from PDDL to Promela. Thus, the verification results using the translated model only hold provided that the translation is valid. The verification of the translation is outside the scope and focus of this paper and left for future work. In this example, the chosen planning goal is to have a photo of the first location, L1, and to get the diver outside the water. The verification tasks are: 1 - Unconstrained verification with only the safety property: both Spin and MIPS-XXL found the counterexample prepare-tank, enter-water, swim(L0, L1). Indeed, this counterexample leads the diver to a drowning state. At the end of this sequence, the diver will be in underwater location L1, which is not the entrance, so they cannot surface, and with no oxygen tank to swim back to the entrance. However, this is not a plan because it does not achieve any useful goal. Therefore, it will not be produced by any sound planner when it is used in a practical scenario (taking a photo of any location). 2 - Verification with the safety property and an incomplete goal (mission target only): both Spin and MIPS-XXL returned prepare-tank, prepare-tank, enter-water, swim(L0, L1), take-photo. This counterexample achieves the goal and violates the property. However, without the safety part of the goal, it would be possible to generate plans that imply divers should swim to an underwater location and take a photo of it without requiring the divers to return to the surface. These kinds of plans are illegal as they do not respect the safety part of the goal.
Therefore, such sequences are unreachable counterexamples, i.e. they will never be produced by any sound planner while planning for a legal goal. 3 - Verification using Spin with both the safety property and the proper goal, but without the augmented model M′: Spin found the counterexample prepare-tank, prepare-tank, prepare-tank, prepare-tank, enter-water, swim(L0, L1), take-photo, swim(L1, L0), decompress, enter-water, swim(L0, L1). This counterexample achieves the goal and violates the safety property, but only after the goal is achieved. Therefore, this is also an unreachable counterexample, because a sound planner will terminate after achieving the goal, and any counterexample that violates the property after achieving the goal will not be returned. Hence, it is unreachable. 4 - Goal-constrained planning domain verification, as presented in this paper: the result was that no plan is returned by the planner MIPS-XXL with complete exploration, and no counterexample is returned by Spin in exhaustive verification mode. This means the planning domain model has no provision for producing a plan that can violate the safety property before achieving the goal. That is, this model is safe with respect to the given property and goal pair. Though the counterexamples returned by the incomplete verification tasks number one, two and three are obviously unreachable and should not misguide the designers to overcomplicate the model, in a real-world sized application such unreachable counterexamples can be critical and much more difficult to recognise and avoid. We expect that our proposed concept can save practitioners a huge amount of person-hours trying to alter planning domain models for behaviours that their planners will never experience in practice. The verification of planning domain models is essential to guarantee the safety of planning-based automated systems. Unreachable counterexamples returned by unconstrained planning domain model verification techniques undermine the verification results. In this paper, we have discussed the potential deficiencies of this problem and provided an example of an unreachable counterexample from the literature. We then introduced goal-constrained verification, a new concept to address this problem, which restricts the verification task to a specific goal and initial state pair. This limits counterexamples to those practically reachable by a planner that is tasked with achieving the goal given the initial state. Consequently, our method verifies the domain model only wrt. a specific goal and initial state. This is an acceptable limitation, given that planners also operate on this basis. We have demonstrated how model checkers and planning techniques can be used to perform goal-constrained planning domain model verification. In addition, we have recommended an inherently safe practice for domain model design that guarantees the safety of domain models "by construction" in case of undetected modelling errors. Goal-constrained domain model verification ensures accurate verification and complements the inherently safe domain model design practice to generate safe and error-free planning domain models. In conclusion, the main message of this paper is that the direct application of verification algorithms to the planning domain model verification problem can return counterexamples that would never be reached by planners in real planning tasks. These unreachable counterexamples can mislead the designers to perform unnecessary remediations that can be prone to errors.
The proposed solution is simple, which makes it readily usable in practice. It is also effective, as formally proven in the paper. Currently, we are investigating the use of Temporally Extended Goals (TEGs) translators BID20 to perform goal-constrained domain model verification. As future work, we intend to automate the proposed methods, so that they can be applied to real-world sized planning domain models. Finally, we would like to perform an empirical comparison of the proposed methods to assess their scalability and performance.
Why and how to constrain planning domain model verification with planning goals to avoid unreachable counterexamples (false-positive verification outcomes).
1,489
scitldr
We've seen tremendous success of image-generating models in recent years. Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes. To imitate human drawing, interactions between the environment and the agent are required to allow trials. However, the environment is usually non-differentiable, leading to slow convergence and massive computation. In this paper we try to address the discrete nature of the software environment with an intermediate, differentiable simulation. We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment. With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches, in an unsupervised manner. Our primary contribution is the neural simulation of a real-world environment. Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software. To learn drawing or writing, a person first observes (encodes) the target image visually and uses a pen or a brush to scribble (decodes), to reconstruct the original image. An experienced painter foresees the consequences before taking any move, and can choose the optimal action. Stroke-based image generation is fairly different from traditional image generation problems due to the intermediate rendering program. Raster-based deep learning approaches for image generation allow effective optimization using back-propagation. For stroke-based approaches, rather than learning to generate the image, it is more a matter of learning to manipulate the painting program. An intuitive yet potentially effective way to tackle the problem is to first learn this mapping from "stroke data" to the resulting image with a neural network, which is analogous to learning painting experience. An advantage of such a mapping over software is that it provides a continuous transformation. For any painting program, the pixel values of an image are calculated based on the coordinate points along the trajectory of an action. Specific pixels are indexed by the discrete pixel coordinates, which cuts the gradient flow with respect to the action. In our implementation, the indexing is done by an MLP described in Section 3. We further define "drawing" by giving a formal definition of "stroke". In our context, a "stroke" consists of color, brush radius, and a sequence of tuples containing the coordinate and pressure of each point along the trajectory. We will later describe this in detail in Section 3. Based on these ideas, we train a differentiable approximator of our painting software, which we call a "generator". We then tested the generator by training a vanilla CNN as an agent that encodes the image into "stroke" data as an input for the environment. Our proposed architecture, StrokeNet, basically comprises the two components, a generator and an agent. Finally, an agent is trained to write and draw pictures from several popular datasets upon the generator. For the MNIST digits, we evaluated the quality of the agent with a classifier trained solely on the original MNIST dataset, and tested the classifier on generated images. We also compared our method with others to show its efficiency. We explored the latent space of the agent as well. Generative models such as VAEs BID13 BID26 and GANs BID5 BID17 BID21 BID0 have achieved huge success in image generation in recent years.
These models generate images directly at the pixel level and thus can be trained effectively through back-propagation. To mimic human drawing, attempts have been made by both the graphics and machine learning communities. Traditionally, trial-and-error algorithms BID9 are designed to optimize stroke placement by minimizing an energy function, incorporating heuristics, e.g., constraining the number of strokes. Concept learning is another example tackling this problem using Bayesian program learning BID14. Recent deep learning based approaches generally fall into two categories: RNN-based approaches and reinforcement learning. RNN-based approaches such as SketchRNN BID7 and handwriting generation with RNNs by Graves BID6 both rely on sequential datasets. Thus, those models cannot be applied to unpaired data. Another popular solution is to adopt reinforcement learning, such as "artist agent" BID27 and SPIRAL BID4. These methods train an agent that interacts with the painting environment. For reinforcement learning tasks with a large, continuous action space like this, the training process can be computationally costly and it could take the agent tens of epochs to converge. To mitigate this situation, we simulate the environment in a differentiable manner, much like the idea in World Models BID8 BID22 BID23, where an agent learns from a neural-network-simulated environment. A similar approach is also used in character reconstruction for denoising BID11. In our scenario, we train our generator (auto-encoder) by parts for flexible stroke sequence length and image resolution, discussed in Sections 3 and 4. Differentiable rendering is an extensively researched topic in computer graphics. It is used to solve inverse rendering problems. Some differentiable renderers explicitly model the relationship between the parameters and observations BID16; others use a neural network to approximate the results BID18, since neural nets are powerful function approximators. While little has been done on simulating the 2D rendering process adopted in digital painting software, we used a generator neural network to meet our needs. We define a single stroke as follows: s = (c, r, (x_1, y_1, p_1), ..., (x_n, y_n, p_n)), where c ∈ R³ stands for RGB color, the scalar r for brush radius, and the tuple (x_i, y_i, p_i) for an anchor point on the stroke, consisting of the x, y coordinate and pressure p; n is the maximum number of points in a single stroke, in this case, 16. These values are normalized such that the coordinates correspond to the default OpenGL coordinate system: c_k ∈ [0, 1], r ∈ (0, 1], x_i, y_i ∈ [−1, 1] and p_i ∈ [0, 1], for k = 1, 2, 3 and i = 1, 2, ..., n. We used absolute coordinates for each point. It is notable that, compared to the QuickDraw BID7 dataset, which contains longer lines, our strokes consist of much fewer points. We consider many trajectory points redundant, since the stroke lines can be fitted by spline curves with fewer anchor points. For example, to fit a straight line, only two end-points are needed regardless of the length; in other words, stroke curves are usually scale-invariant. However, if we are to sample the data from a user input, we could have dozens of points along the trajectory. Hence we made the assumption of being able to represent curves with a few anchors. We later show that a single stroke with only 16 anchors is able to fit most MNIST digits and generate twisted lines in Section 5.
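As a minimal sketch of the stroke representation just defined (our own illustration; the released implementation may organize this differently), the data structure could look like:

```python
# A minimal sketch of the stroke tuple s = (c, r, (x_i, y_i, p_i)_{i=1..n});
# names are ours. Coordinates, pressures, color and radius are
# normalized as described in the text.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stroke:
    color: Tuple[float, float, float]          # RGB, each in [0, 1]
    radius: float                              # brush radius in (0, 1]
    points: List[Tuple[float, float, float]]   # n = 16 tuples (x, y, p)

    def is_valid(self) -> bool:
        # x, y live in the OpenGL range [-1, 1]; pressure p in [0, 1].
        return len(self.points) == 16 and all(
            -1 <= x <= 1 and -1 <= y <= 1 and 0 <= p <= 1
            for (x, y, p) in self.points)
```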
We further assumed that longer and more complicated lines can be decomposed into simple segments, and extended our experiments to include recurrent drawing of multiple strokes to generate more complex drawings. The outline of the StrokeNet architecture is shown in FIG0. The generator takes s as input, and projects the stroke data with two MLPs. One is the position encoder, which encodes (x_i, y_i, p_i) into 64 × 64 feature maps; the other, the brush encoder, encodes the color and radius of the brush into a single 64 × 64 feature map. The color c is a single grayscale scalar whose value equals (1/3) Σ_{k=1}^{3} c_k, while color strokes are approximated by the channel mixing described in Section 3.4. The features are then concatenated and passed to the (de)convolution layers. To preserve the sequential and pressure information of each point (x_i, y_i, p_i), the position encoder first maps (x_i, y_i) to the corresponding position on a 64 × 64 matrix by putting a bright dot at that point. This is modeled by a 2D Gaussian function with its peak scaled to 1, which simplifies to: pos(x_i, y_i)(x, y) = exp(−((x − x_i)² + (y − y_i)²)/(2σ²)), for i = 1, 2, ..., n, where the value is calculated for each point (x, y) on the 64 × 64 map. Denote this mapping from (x_i, y_i) to R^{64×64} as pos: R² → R^{64×64}. By multiplying with the corresponding pressure p_i, we now have n position features, in our setup, sixteen. This part of the generator is trained separately with random coordinates until it generates accurate and reliable signals. However, if we directly feed these features into the (de)convolutional layers of the network, the generator fails, partly due to the sparsity of the single brightness feature. Instead, we take every two neighbouring feature maps and add them together (denoted by "reduce" in FIG0): f_i = p_i · pos(x_i, y_i) + p_{i+1} · pos(x_{i+1}, y_{i+1}), for i = 1, ..., n − 1. Now, each feature map f_i represents a segment of the stroke. By learning to connect and curve the n − 1 "segments", we are able to reconstruct the stroke. By appending the encoded color and radius data we now have a feature with shape 64 × 64 × n. We then feed the features into three (de)convolutional layers with batch-normalization BID12, activated by LeakyReLU BID28. The last layer is activated by tanh. The agent is a VGG BID25-like CNN that encodes the target image into its underlying stroke representation s. Three parallel FC-decoders with different activations are used to decode the position (tanh), pressure (sigmoid) and brush data (sigmoid) from the feature. We used average-pooling instead of max-pooling to improve gradient flow. For the recurrent version of StrokeNet, two separate CNNs are trained for the target image and the drawing frame, as shown in FIG1. In practice the target image feature is computed once for all steps. We first built painting software using JavaScript and WebGL. We later tailored this web application for our experiment. The spline used to fit the anchor points is the centripetal Catmull-Rom spline BID2 BID1. A desirable feature of the Catmull-Rom spline is that the curve goes through all control points, unlike the more commonly used Bezier curve BID24. We then interpolate through the sampled points and draw circles around each center point, as shown in Figure 3. For each pixel inside a circle, its color depends on various factors including attributes of the brush, the blending algorithm, etc. Our generator is trained on the naive brush. When it comes to the color blending of two frames, the generator is fed with the mean value of the input RGB color as a grayscale scalar, and its output is treated as an alpha map.
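A small NumPy sketch of the position encoder's target signal and the pairwise "reduce" step described above might look as follows (the grid mapping and σ value are our assumptions; this illustrates the supervision signal, not the learned MLP itself):

```python
# Illustrative NumPy sketch, our reading of the position-encoder
# description above: each anchor (x, y, p) becomes a Gaussian blob with
# peak 1 on a 64x64 map, scaled by pressure p, and neighbouring maps
# are summed pairwise into n-1 "segment" features.
import numpy as np

def pos_map(x, y, p, size=64, sigma=1.5):
    # (x, y) in [-1, 1] mapped to pixel-grid coordinates [0, size-1].
    cx, cy = (x + 1) / 2 * (size - 1), (y + 1) / 2 * (size - 1)
    ii, jj = np.mgrid[0:size, 0:size]        # row (y) and column (x) indices
    blob = np.exp(-((jj - cx) ** 2 + (ii - cy) ** 2) / (2 * sigma ** 2))
    return p * blob                           # peak 1, scaled by pressure

def segment_features(points):
    maps = [pos_map(x, y, p) for (x, y, p) in points]          # n maps
    return [maps[i] + maps[i + 1] for i in range(len(maps) - 1)]  # n-1
```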
Normalization and alpha-blending are then performed on this alpha map to yield the next color frame, to simulate the real blending algorithm underlying the software. Denote the generator output at time-step t by q^(t) ∈ R^{256×256}, the frame image by r^(t) ∈ R^{3×256×256}, and the RGB color of the brush by c ∈ R³. The blending process is approximated as: q̃^(t) = (q^(t) + J)/2 and r^(t+1)_k = (J − q̃^(t)) ⊙ r^(t)_k + c_k · q̃^(t), for k = 1, 2, 3 corresponding to the RGB channels, where J denotes a 256 × 256 all-one matrix and ⊙ an element-wise product. For the generator, we synthesize a large number of samples, each of length n. We would like to capture both the randomness and the smoothness of human writing, thus it is natural to incorporate chaos, most notably, the motion of three bodies BID19. Figure 3: Illustration of how a stroke is rendered. Figure 4: Images from our three-body dataset. There is no closed-form solution to the three-body problem, and error accumulates in simulation using numerical methods, leading to unpredictable and chaotic results. We simulate three-body motion in space (z-component for pressure) with random initial conditions and sample the trajectories as strokes for our dataset. The simulation is done with a set of equations using Newton's universal law of gravitation: F_i = Σ_{j≠i} G · m_i m_j (P_j − P_i) / ||P_j − P_i||³, where P_i (i = 1, 2, 3, P_i ∈ R³) denotes the position of the three objects, and F_i denotes the gravitational force exerted on the i-th object. In our simulation we set the masses m_1 = m_2 = m_3 = 1 and the gravitational constant G = 5 × 10^−5. We also always keep our camera (origin point) at the center of the triangle formed by the three objects to maintain relatively stable "footage". Using this method we collected about 600K images, since there is virtually no cost to generate samples. To prove the effectiveness of our neural environment, we trained an agent to perform drawing tasks on several popular datasets, from characters to drawings, with the generator part frozen. For MNIST and Omniglot, we trained an agent to draw the characters within one stroke. We later trained the recurrent StrokeNet on more complex datasets like QuickDraw and KanjiVG BID20. We resized all the input images to 256 × 256 with anti-aliasing and padding. At first we train the position encoder, guided by the function pos that maps a coordinate to a 64 × 64 matrix, with the l2 distance to measure the loss. Next we freeze the position encoder and train the other parts of the generator, again with the l2 loss to measure the performance on the three-body dataset. It can be found that a smaller batch size results in more accurate images. We trained the generator with a batch size of 64 until the loss no longer improves. We then set the batch size to 32 to sharpen the neural network. To train the agent, we freeze the generator. Denote the agent loss as l_agent, and the generated image and ground-truth image as i_gen and i_gt respectively. The loss is defined as: l_agent = ||i_gen − i_gt||² + λ Σ_{k=2}^{n} ||a_k − a_{k−1}||², where a_k = (x_k, y_k, p_k)^T is the data describing the kth anchor point on the stroke. Here the summation term constrains the average distance between neighbouring points, where λ denotes the penalty strength. If we drop this term, the agent fails to learn the correct order of the points in a stroke, because the generator itself is, after all, not robust to all cases of input, and is very likely to produce wrong results for sequences with large gaps between neighbouring points. All experiments are conducted on a single NVIDIA Tesla P40 GPU.
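The three-body data synthesis can be sketched as below; the integration scheme, step size, and initial-condition ranges are our assumptions, while the unit masses and G = 5 × 10^−5 follow the text:

```python
# A simplified sketch of the three-body stroke synthesis (our own
# re-implementation of the description above, not the released code).
import numpy as np

G = 5e-5  # gravitational constant used in the paper's simulation

def simulate(steps=16, dt=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(3, 3))     # three bodies in 3D space
    vel = rng.uniform(-0.01, 0.01, size=(3, 3))
    traj = []
    for _ in range(steps):
        forces = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:                     # Newtonian pairwise gravity
                    d = pos[j] - pos[i]
                    forces[i] += G * d / (np.linalg.norm(d) ** 3 + 1e-9)
        vel += forces * dt                     # unit masses, so a = F
        pos += vel * dt
        # Keep the "camera" at the centroid for stable footage.
        traj.append((pos - pos.mean(axis=0)).copy())
    return np.stack(traj)                      # shape: (steps, 3 bodies, xyz)
```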
We first experimented with the single-step StrokeNet on MNIST and Omniglot; then we experimented with the recurrent StrokeNet on QuickDraw and Kanji. For the MNIST dataset, we later enforced a Gaussian prior on the latent variable and explored the latent space of the agent by linear interpolation. Finally, for a quantitative evaluation of the model, we trained a classifier on MNIST, and tested the classifier with images generated by the agent. The close accuracies indicate the quality of the generated images. It can be seen that a single stroke provides rich expressive power for different shapes. The generator generalizes well to unseen stroke patterns other than the synthesized three-body dataset. On the Omniglot dataset, since many characters consist of multiple strokes while the agent can only draw one, the agent tries to capture the contour of the character. For more complex datasets, multiple steps of strokes are needed. Again the agent does pretty well at capturing the contour of the given image. However, it seems that the agent has trouble recovering the details of the pictures, and tends to smear inside the boundaries with thick strokes. To convert the agent into a latent-space generative model, we experimented with the VAE version of the agent, where the feature obtained from the last layer of the CNN is projected into two vectors representing the means μ and standard deviations (activated by softplus) σ, both of 1024 dimensions. A noise vector of i.i.d. Gaussians U ∼ N(0, I) is sampled, and the latent variable z is given by z = μ + σ ⊙ U. We did latent space interpolation with the agent trained on MNIST. The simple data led to easily interpretable results. Since the images are generated by strokes, the digits transform smoothly into one another. That is to say, the results looked as if we were directly interpolating the stroke data. Results are shown in Figures 9 and 10. In order to evaluate the agent, we trained a 5-layer CNN classifier solely on the pre-processed MNIST dataset, which is also the input to the MNIST agent. The size of the images is 256 × 256, so there is some performance drop in the classification task compared to standard 28 × 28 images. The classifier is then used to evaluate the paired test-set images generated by the agent. The accuracies reflect the quality of the generated images. We also compared the l2 loss with SPIRAL on MNIST to illustrate that our method has the advantage of faster convergence over reinforcement learning approaches, as shown in FIG0. Classifier accuracies: pre-processed images 90.82%; agent output (3 steps) 88.43%; agent output (TAB0) 79.33%; agent output (1 step, VAE) 67.21%. FIG0: Comparison of stroke orders between human and agent; the agent's stroke order is completely chaotic compared to the natural order. For future work, there are several major improvements we want to make, both to the network structure and to the algorithm. The recurrent structure adopted here is of the simplest form. We use this setup because we consider drawing as a Markov process, where the current action only depends on what the agent sees: the target image and the previous frame. More advanced structures like LSTM BID10 or GRU BID3 may boost the performance. A stop sign can also be introduced to determine when to stop drawing, which can be useful in character reconstruction. For the agent, various attention mechanisms could be incorporated to help the agent focus on undrawn regions, so that smears and blurry scribbles might be prevented.
Secondly, the generator and the agent were trained as two separate parts throughout the experiment. We can somehow train them as a whole: during the training of the agent, store all the intermediate stroke data. After a period of training, sample images from the real environment with the stroke data just collected, and train the generator with the data. By doing so in an iterative manner, the generator could fit better to the current agent and provide more reliable reconstructions, while a changing generator may potentially provide more valuable overall gradients. It is also found useful to add a bit of randomness to the learning rate. Since different decoders of the agent learn at different rates, stochasticity results in more appealing outcomes. For example, the agent usually fails to generalize to color images because it always sticks with one global average color (as shown in FIG0). However, it sometimes generates appealing results with some randomness added during the training. As a result of this immobility, the way the agent writes is dull compared to humans and reinforcement learning agents like SPIRAL. For instance, when writing the digit "8", the agent is simply writing "3" with the endpoints closed. Also, the agent avoids making intersecting strokes over all datasets, although such actions are harmless and should be totally encouraged and explored! Thus, random sampling techniques could be added to the decision making process to encourage bolder moves. Finally, for the evaluation metrics, the naive l2 loss can be combined with adversarial learning. If paired sequential data is available, we believe adding it to training will also improve the results. In this paper we bring a proof-of-concept that an agent is able to learn from its neural simulation of an environment. Especially when the environment is deterministic given the action, or contains a huge action space, the proposed approach could be useful. Our primary contribution is that we devised a model-based method to approximate a non-differentiable environment with a neural network, and the agent trained with our method converges quickly on several datasets. It is able to adapt its skills to the real world. Hopefully such approaches can be useful when dealing with more difficult reinforcement learning problems. Let P = (x, y)^T denote the coordinate of a sampled point. For a curve defined by points P_0, P_1, P_2, P_3, the spline is produced by the standard centripetal Catmull-Rom construction, with knots t_{i+1} = t_i + ||P_{i+1} − P_i||^α, where α = 0.5, t_0 = 0 and i = 0, 1, 2, 3. By interpolating t from t_1 to t_2 linearly, we generate the curve between P_1 and P_2. The pressure values between neighbouring points are interpolated linearly. The agent loss equals the l2 distance between the generator output and the agent input, plus the penalty term constraining the average point distance within a stroke. For (c) and (d) the learning rate is set to 10^−4 and the batch size equals 64. Figure 15: A trained StrokeNet generates images that resemble the output of painting software. The first row depicts results generated by our model (left) and by the software (right) given the same input. The second row shows the model could produce strokes with color and texture using simple arithmetic operations. The third and fourth rows show the model's ability to draw MNIST digits (left) on both its own generative model (middle) and real-world painting software (right).
StrokeNet is a novel architecture where the agent is trained to draw by strokes on a differentiable simulation of the environment, which could effectively exploit the power of back-propagation.
1,490
scitldr
We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models. In the past decade, artificial neural networks have been applied to a plethora of scientific disciplines, commercial applications, and every-day tasks with outstanding performance in, e.g., medical diagnosis, self-driving, and board games. In contrast to standard feedforward neural networks, long short-term memory (LSTM) architectures have recurrent connections, which allow them to process sequential data such as text and speech. Such sequence-processing capabilities can be particularly useful for designing complex quantum experiments, since the final state of quantum particles depends on the sequence of elements, i.e. the experimental setup, these particles pass through. For instance, in quantum optical experiments, photons may traverse a sequence of wave plates, beam splitters, and holographic plates. High-dimensional quantum states are important for multiparticle and multisetting violations of local realist models as well as for applications in emerging quantum technologies such as quantum communication and error correction in quantum computers. Already for three photons and only a few quantum levels, it becomes in general infeasible for humans to determine the required setup for a desired final quantum state, which makes automated design procedures for this inverse problem necessary. One example of such an automated procedure is the algorithm MELVIN, which uses a toolbox of optical elements, randomly generates sequences of these elements, calculates the resulting output state, and then checks whether the state is interesting, i.e. maximally entangled and involving many quantum levels. The setups proposed by MELVIN have been realized in laboratory experiments. Recently, a reinforcement learning approach has also been applied to design new experiments. Inspired by these advances, we investigate how LSTM networks can learn quantum optical setups and predict the characteristics of the resulting quantum states. We train the neural networks using millions of setups generated by MELVIN. The huge amount of data makes deep learning approaches the first choice. We use cluster cross validation to evaluate the models. Let us consider a quantum optical experiment using three photons with orbital angular momentum (OAM).
The OAM of a photon is characterized by an integer whose size and sign represent the shape and handedness of the photon wavefront, respectively. For instance, after a series of optical elements, a three-particle quantum state may have the following form: |Ψ⟩ = 1/2 (|0,0,0⟩ + |1,0,1⟩ + ...). This state represents a physical situation in which there is a 1/4 chance (modulus square of the amplitude value 1/2) that all three photons have OAM value 0 (first term), and a 1/4 chance that photons 1 and 3 have OAM value 1, while photon 2 has OAM value 0 (second term), and so on for the two remaining terms. We are generally interested in two main characteristics of the quantum states: Are they maximally entangled? Are they high-dimensional? The dimensionality of a state is represented by its Schmidt rank vector (SRV). The state |Ψ⟩ is indeed maximally entangled because all terms on the right hand side have the same amplitude value. Its SRV is (4, 2, 2), as the first photon is four-dimensionally entangled with the other two photons, whereas photons two and three are both only two-dimensionally entangled with the rest. A setup is labeled "positive" (y_E = 1) if its output state is maximally entangled and if the setup obeys some further restrictions, e.g., behaves well under multi-pair emission, and otherwise labeled "negative" (y_E = 0). The target label capturing the state dimensionality is the SRV y_SRV = (n, m, k). We train LSTM networks to directly predict these state characteristics (entanglement and SRV) from a given experimental setup without actually predicting the quantum state itself. For classification, we use binary cross entropy (BCE) in combination with logistic sigmoid output activation for learning. For regression, it is always possible to reorder the photon labels such that the SRV has entries in non-increasing order. An SRV label is thus represented by a 3-tuple y_SRV = (n, m, k) which satisfies n ≥ m ≥ k. With slight abuse of notation, we model n ∼ P(λ) as a Poisson-distributed random variable and m ∼ B(n, p), k ∼ B(m, q) as Binomials with ranges m ∈ {1, ..., n} and k ∈ {1, ..., m} and success probabilities p and q, respectively. The resulting log-likelihood objective (omitting all terms not depending on λ, p, q) for a data point x with label (n, m, k) is ℓ(λ̂, p̂, q̂) = n log λ̂ − λ̂ + m log p̂ + (n − m) log(1 − p̂) + k log q̂ + (m − k) log(1 − q̂), where λ̂, p̂, q̂ are the network predictions (i.e. functions of x) for the distribution parameters of n, m, k respectively. The Schmidt rank value predictions are n̂ = λ̂, m̂ = p̂λ̂, k̂ = p̂q̂λ̂. To see this, we need to consider the marginals of the joint probability mass function P(n, m, k) = P(n | λ) · B(m | n, p) · B(k | m, q). To obtain the marginal distribution of m, we can first sum over all possible k, which is easy. To sum out n we first observe that the binomial coefficient C(n, m) equals 0 for n < m, i.e. the first m terms are zero, and we may write P(m) = Σ_{n=m}^{∞} e^{−λ} λ^n / n! · C(n, m) p^m (1 − p)^{n−m}, capturing only non-zero terms. It follows that P(m) = e^{−pλ} (pλ)^m / m!, which is P(pλ)-distributed. Using the same argument for k we get that the marginal of k is P(pqλ)-distributed. The estimates n̂, m̂, k̂ are obtained by taking the means of their respective marginals. Figure 1: Sequence processing model for a many-to-one mapping. The target value ŷ can be either an estimate for y_E (entanglement classification) or y_SRV (SRV regression).
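A hedged PyTorch-style sketch of this objective and the corresponding SRV point predictions (variable names are ours, not from the paper's code) is:

```python
# Sketch of the SRV objective described above: the network outputs
# lam, p, q; the loss is the negative log-likelihood of
# n ~ Poisson(lam), m ~ Binomial(n, p), k ~ Binomial(m, q),
# with label-independent terms dropped as in the text.
import torch

def srv_nll(lam, p, q, n, m, k, eps=1e-8):
    ll = (n * torch.log(lam + eps) - lam                      # Poisson part
          + m * torch.log(p + eps) + (n - m) * torch.log(1 - p + eps)
          + k * torch.log(q + eps) + (m - k) * torch.log(1 - q + eps))
    return -ll.mean()

def srv_prediction(lam, p, q):
    # Means of the marginals: n_hat = lam, m_hat = p*lam, k_hat = p*q*lam.
    return lam, p * lam, p * q * lam
```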
This would cause a multitask network to automatically label such a sample as negative only because of its SRV. By training separate networks we lower the risk of incorporating such correlations. A setup of N elements is being fed into a network by its sequence of individual optical components x = (x 1, x 2, ..., x N), where in our data N ranges from 6 to 15. We use an LSTM with 2048 hidden units and a component embedding space with 64 dimensions. The component embedding technique is similar to word embeddings . The dataset produced by MELVIN consists of 7,853,853 different setups of which 1,638,233 samples are labeled positive. Each setup consists of a sequence x of optical elements, and the two target values y E and y SRV. We are interested in whether the trained model is able to extrapolate to unseen SRVs. Therefore, we cluster the data by leading Schmidt rank n. Figure 2 shows the the number of positive and negative samples in the data set for each n. All samples with n ≥ 9 are moved to a special extrapolation set consisting of only 1,754 setups (gray cell in Table 1). The remainder of the data, i.e. all samples with n < 9, is then divided into a training set and a conventional test set with 20 % of the data drawn at random (iid). This workflow is shown in Figure 3. 0,1 0,1 2 3 4 5 6 7 8 9-12 Table 1: Cluster cross validation folds and extrapolation set characterized by leading Schmidt rank n. Samples with n = 0 and samples with n = 1 are combined and then split into two folds at random. The test set is used to estimate the conventional generalization error, while the extrapolation set is used to shed light on the ability of the learned model to perform on higher Schmidt rank numbers. If the model extrapolates successfully, we can hope to find experimental setups that lead to new interesting quantum states. Cluster cross validation (CCV) is an evaluation method similar to standard cross validation. Instead of grouping the folds iid, CCV groups them according to a clustering method. Thus, CCV removes similarities between training and validation set and simulates situations in which the withheld folds have not been obtained yet, thereby allowing us to investigate the ability of the network to discover these withheld setups. We use CCV with nine folds (white cells in Table 1). Seven of these folds correspond to the leading Schmidt ranks 2,..., 8. The samples with n = 1 (not entangled) and n = 0 (not even a valid three-photon state) are negative by definition. These samples represent special cases of y E = 0 setups and it is not necessary to generalize to these cases without training on them. Therefore, the 4,300,268 samples with n < 2 are divided into two folds at random such that the model will always see some of these special samples while training. Let us examine if the LSTM network has learned something about quantum physics. A good model will identify positive setups correctly while discarding as many negative setups as possible. This behavior is reflected in the metrics true positive rate TPR = TP/(TP + FN) and true negative rate TNR = TN/(TN + FP), with TP, TN, FP, FN the true positives, true negatives, false positives, false negatives, respectively. A metric that quantifies the success rate within the positive predictions is the hit rate (i.e. precision or positive predicted value), defined as HR = TP/(TP + FP) . 
For each withheld CCV fold n, we characterize a setup as "interesting" when it fulfills the following two criteria: (i) It is classified positive (ŷ_E > τ), with τ the classification threshold of the sigmoid output activation. (ii) The SRV prediction ŷ_SRV = (n̂, m̂, k̂) is such that there exists a y_SRV = (n, m, k) with ||y_SRV − ŷ_SRV||_2 < r. We call r the SRV radius. We denote samples which are classified as interesting (uninteresting) and indeed positive (negative) as true positives (negatives), and we denote samples which are classified as interesting (uninteresting) and indeed negative (positive) as false positives (false negatives). Figure 3: We split the entire data by their leading Schmidt rank n. All samples with n ≥ 9 constitute the extrapolation set, which we use to explore the out-of-distribution capabilities of our model. For the remaining samples (i.e. n < 9) we make a random test split at a ratio of 1/4. The test set is used to estimate the conventional generalization error of our model. We use the training set to perform cluster cross validation. We employ stochastic gradient descent for training the LSTM network with momentum 0.5 and batch size 128. We sample mini-batches in such a way that positive and negative samples appear equally often in training. For balanced SRV regression, the leading Schmidt rank vector number n is misused as a class label. The models were trained using early stopping, after 40000 weight update steps for the entanglement classification network and 14000 update steps for the SRV regression network. Hyperparameter search was performed in advance on a data set similar to the training set. Figure 4 shows the TNR, TPR, and rediscovery ratio for sigmoid threshold τ = 0.5 and SRV radius r = 3. The rediscovery ratio is defined as the number of distinct SRVs for which at least 20% of the samples are rediscovered by our method, i.e. identified as interesting, divided by the number of distinct SRVs in the respective cluster. The TNR for fold 0,1 is 0.9996, and the hit rate HR on the extrapolation set 9-12 is 0.659. Error bars in Figure 4 and later in the text are 95% binomial proportion confidence intervals. Model performance depends heavily on the parameters τ and r. Figure 5 shows the "beyond distribution" results for a variety of sigmoid thresholds and SRV radii. Figure 5: True negative rate (scale starts at 0.6), true positive rate, rediscovery ratio, and hit rate for the extrapolation set 9-12 for varying sigmoid threshold τ and SRV radius r. For too restrictive parameter choices (τ → 1 and r → 0.5) the TNR approaches 1, while TPR and rediscovery ratio approach 0, such that no interesting new setups would be identified. For too loose choices (small τ, large r), too few negative samples would be rejected, such that the advantage over random search becomes negligible. For a large variety of τ and r the models perform satisfyingly well, allowing a decent compromise between TNR and TPR. This is reflected in large values of the hit rate, which is 0.736 on average over all depicted thresholds. Finally, we investigate the conventional in-distribution generalization error using the test set (20% of the data). Entanglement classification: the entanglement training BCE loss value is 10.2. TNR and TPR are 0.9271 ± 0.00024 and 0.9469 ± 0.00041, respectively. The corresponding test error is 10.4. TNR and TPR are 0.9261 ± 0.00038 and 0.9427 ± 0.00065, respectively.
SRV regression: the SRV training loss value according to the log-likelihood objective above is 2.247, the accuracy with r = 3 is 93.82%, and the mean distance between label and prediction is 1.3943. The SRV test error is 2.24, the accuracy with r = 3 is 0.938, and the mean distance between label and prediction is 1.40. These figures are consistent with a clean training procedure. Our experiments demonstrate that an LSTM-based neural network can be trained to model certain properties of complex quantum systems. Our approach is not limited to entanglement and Schmidt rank but may be generalized to employ other objective functions such as multiparticle transformations, interference and fidelity qualities, and so on. Another possible next step to expand our approach towards the goal of automated design of multiparticle high-dimensional quantum experiments is the exploitation of generative models. Here, we consider Generative Adversarial Networks (GANs) and beam search as possible approaches. Generating sequences such as text in adversarial settings has been done using 1D CNNs and LSTMs. The LSTM-based approaches employ ideas from reinforcement learning to alleviate the problem of propagating gradients through the softmax outputs of the network. Since our data is similar in structure to text, these approaches are directly applicable to our setting. For beam search, there exist two different ideas, namely a discriminative approach and a generative approach. The discriminative approach incorporates the entire data set (positive and negative samples). The models trained for this work can be used for the discriminative approach in that one constructs new sequences by maximizing the belief of the network that the outcome will be a positive setup. For the generative approach, the idea is to train a model on the positive samples only to learn their distribution via next-element prediction. At inference, beam search can be used to approximate the most probable sequence given some initial condition. Another option to generate new sequences is to sample from the softmax distribution of the network output at each sequence position, as has been used for text generation models. In general, automated design procedures for experiments have much broader applications beyond quantum optical setups and can be of importance for many scientific disciplines other than physics. We have shown that an LSTM-based neural network can be trained to successfully predict certain characteristics of high-dimensional multiparticle quantum states from the experimental setup without any explicit knowledge of quantum mechanics. The network performs well even on unseen data beyond the training distribution, proving its extrapolation capabilities. This paves the way to automated design of complex quantum experiments using generative machine learning models.
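As one possible realization of the softmax-sampling idea mentioned above, a generic sketch (our own illustration of the proposed extension, not an implemented result from this work) is:

```python
# Generic sketch of generating candidate setups by sampling from the
# softmax of a next-element model at each sequence position. We assume
# `model` returns logits of shape (batch, seq_len, vocab); temperature
# and max length are illustrative parameters.
import torch

def sample_setup(model, start_token, max_len=15, temperature=1.0):
    seq = [start_token]
    for _ in range(max_len - 1):
        logits = model(torch.tensor([seq]))[0, -1]      # next-element logits
        probs = torch.softmax(logits / temperature, dim=-1)
        seq.append(torch.multinomial(probs, 1).item())  # stochastic choice
    return seq
```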
We demonstrate how machine learning is able to model experiments in quantum physics.
1,491
scitldr
In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms. Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples. The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD). Driven by CPD, data points can get to a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering. Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning. Cluster analysis has broad applications in various disciplines. Grouping data samples into categories with similar features is an efficient way to summarize the data for further processing. In measuring the similarities among data samples, the Euclidean distance is the most common choice in clustering algorithms. Under the Euclidean distance, feature components are assigned the same weight, which essentially assumes all features are equally important across the entire data space. In practice, such a setup is often not optimal. Learning a customized metric function from the data samples can usually boost the performance of various machine learning algorithms BID1. While metric learning has been extensively researched under supervised BID19 BID18 BID17 BID14 and semi-supervised settings BID15 BID3 BID23 BID13, unsupervised metric learning (UML) remains a challenge, in part due to the absence of ground-truth label information to define a learning optimality. In this paper, we focus on the problem of UML for clustering. As the goal of clustering is to capture the natural separations among data samples, one common practice in the existing UML solutions is to increase the data separability and make the separations more identifiable for the ensuing clustering algorithm. Such a separability gain can be achieved by projecting data samples onto a carefully chosen low-dimensional manifold, where geometric relationships, such as the pairwise distances, are preserved. The projections can be carried out linearly, as through Principal Component Analysis, or nonlinearly, as via manifold learning solutions. Under the dimension-reduced space, clustering algorithms, such as K-means, can then be applied. Recent years have seen the development of UML solutions exploring different setups for the low-dimensional manifolds. FME relies on the learning of an optimum linear regression function to specify the target low-dimensional space. BID0 model local sample densities of the data to estimate a new metric space, and use the learned metric as the basis to construct graphs for manifold learning. Application-specific manifolds, such as Grassmann space BID6 and Wasserstein geometry BID16, have also been studied. When utilized as a separate preprocessing step, dimensionality reduction UML solutions are commonly designed without considering the ensuing clustering algorithm and therefore cannot be fine-tuned accordingly. AML takes a different approach, performing clustering and distance metric learning simultaneously.
The joint learning under AML is formulated as a trace maximization problem, and numerically solved through an EM-like iterative procedure, where each iteration consists of a data projection step, followed by a clustering step via kernel K-means. The projection is parameterized by an orthogonal, dimension-reducing matrix. A kernelized extension of AML was proposed in BID2. As the projection models are built on linear transformations, their capabilities to deal with complex nonlinear structures are limited. UML solutions performing under the original input space have also been proposed. SSO BID7 learns a global similarity metric through a diffusion procedure that propagates smooth metrics through the data space. CPCM BID4 relies on the ratio of within-cluster variance over the total data variance to obtain a linear transformation, aiming to improve data separability. As the original spaces are usually high-dimensional, UML solutions in this category tend to suffer from the local minima problem. In light of the aforementioned limitations and drawbacks, we propose a new nonlinear UML framework in this paper. Our solution integrates nonlinear feature transformation and manifold embedding together to improve the data separability for K-means clustering. Our model can be regarded as a fully nonlinear generalization of AML, in which the transformation model is upgraded to a geometric model called Coherent Point Drifting (CPD) BID8. Data points are driven by CPD to reach a higher level of linear separability, which will be subsequently picked up by the manifold embedding component to generate well-separable sample projections. In the end, K-means is applied on the transformed, dimension-reduced embeddings to produce label predictions. The choice of CPD is motivated by its capability of generating high-order yet smooth transformations. The main contributions of this paper include the following.
• Our proposed fully nonlinear UML solution enhances data separability through the combination of CPD-driven deformation and spectral embeddings.
• To the best of our knowledge, this is the first work that utilizes dense, spatially varying deformations in unsupervised metric learning.
• The CPD optimization has a closed-form solution and can therefore be efficiently computed.
• Our model outperforms state-of-the-art UML methods on six benchmark databases, indicating promising performance in many real-world applications.
The rest of this paper is organized as follows. Section 2 describes our proposed method in detail. It includes the description of the CPD model, the formulation of our CPD-based UML, the optimization strategy, and the approach to kernelize our model. Experimental results are presented in Section 3 to validate our solutions with both synthetic and real-world datasets. Section 4 concludes this paper. Many machine learning algorithms have certain assumptions regarding the distribution of the data to be processed. K-means always produces clustering boundaries of hyperplanes, working best for data sets made of linearly separable groups. For data sets that are not linearly separable, even if they are otherwise well-separable, K-means will fail to deliver. Nonlinearly displacing the data samples to make them linearly separable would provide a remedy, and learning such a transformation is the goal of our design.
The application of such a smooth nonlinear transformation throughout the feature space (either input space or kernel space) would change pairwise distances among samples, which is equivalent to assigning spatially varying metrics in different areas of the data space. In our framework, the CPD model is chosen to perform the transformation. Originally designed for regulating point matching, CPD moves the points U towards the target V by estimating an optimal continuous velocity function v(x): E(v) = Σ_{i=1}^{n} ||v_i − (u_i + v(u_i))||² + λ ||Lv||², where n is the number of samples in the dataset, and d is the data dimension. L represents a linear differentiation operator, and λ is the regularization parameter. The regularization term in CPD is a Gaussian low-pass filter. The optimal solution v(x) to matching U and V can be written in matrix format as BID9: v(x) = Ψ · [g(x, x_1), ..., g(x, x_n)]^T, where Ψ (size d × n) is the weight matrix for the Gaussian kernel functions, g(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)), and σ is the width of the Gaussian filter, which controls the smoothness level of the deformation field. K-means clustering aims to partition the samples into K groups S = {S_1, S_2, ..., S_K}, through the minimization of the following objective function: J = Σ_{c=1}^{K} Σ_{x_i ∈ S_c} ||x_i − μ_c||². S_c is the set of data samples in the c-th cluster, n_c is the number of data instances in cluster S_c, and μ_c is the mean of S_c. Allowing samples to be moved, we intend to learn a spatial transformation to improve the performance of K-means clustering by making groups more linearly separable, as well as by harnessing the updated distance measure under the transformed feature space. Let x_i be the initial location of an instance. Through the motion defined above, x_i will be moved to a new position x′_i = x_i + v(x_i). With this, the K-means objective can be reformulated as: J = Σ_{c=1}^{K} Σ_{x′_i ∈ S′_c} ||x′_i − μ′_c||². Now μ′_c is the mean vector of the instances in cluster S′_c. Our proposed CPD-based unsupervised metric learning (CPD-UML) is designed to learn a spatial transformation Ψ and a clustering S′ at the same time. The objective can be reformulated into a matrix format through the following steps. First, put the input dataset into a d-by-n data matrix X. Second, define a Gaussian kernel function matrix for the CPD deformation as G(X, X), with entries G_ij = g(x_i, x_j); the size of G is n-by-n. Third, let p be a vector of dimension n_c-by-1 with all elements equal to one; then the mean of the data instances within a cluster S′_c can be written as μ′_c = (1/n_c) X′_c p BID21, where X′_c collects the transformed samples of cluster c. With these three formulations, and letting E be a permutation matrix grouping the columns of the transformed data by cluster, the objective can be rewritten as: J = Σ_{c=1}^{K} ||X′_c − μ′_c p^T||²_F, where X′ = X + ΨG(X, X) is the transformed data matrix. Since ||A||²_F = trace(A^T A), this can be written in the form of the trace operation: J = Σ_{c=1}^{K} trace((X′_c − μ′_c p^T)^T (X′_c − μ′_c p^T)). As trace(AB) = trace(BA), and p^T p = n_c, J can be further reformulated as: J = trace(X′^T X′) − Σ_{c=1}^{K} (1/n_c) p^T X′_c^T X′_c p. Similar to BID21, we define an n-by-K orthonormal matrix Y as the cluster indicator matrix: Y_ic = 1/√n_c if x_i ∈ S_c, and 0 otherwise. With X′ = X + ΨG(X, X) and this cluster indicator matrix, J can be written as: J = trace(X′^T X′) − trace(Y^T X′^T X′ Y). To reduce overfitting, we add the squared Frobenius norm λ||Ψ||²_F = λ · trace(Ψ^T Ψ) to penalize any non-smoothness in the estimated transformations, with λ a regularization parameter. Finally, our nonlinear CPD-UML solution is formulated as a trace minimization problem, parameterized by Y and Ψ: min_{Y,Ψ} J = trace(X′^T X′) − trace(Y^T X′^T X′ Y) + λ · trace(Ψ^T Ψ), s.t. Y^T Y = I_K. To search for the optimal solutions of Y and Ψ, an EM-like iterative minimization framework is adopted to update Y and Ψ alternatingly.
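A NumPy sketch of the deformation just formulated, with notation following the equations above, is:

```python
# Sketch of the learned CPD transformation: G(X, X) is the Gaussian
# Gram matrix and the deformed data is X' = X + Psi @ G. X is d x n.
import numpy as np

def gaussian_gram(X, sigma):
    # G_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), an n x n matrix.
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-sq / (2 * sigma ** 2))

def transform(X, Psi, sigma):
    return X + Psi @ gaussian_gram(X, sigma)   # deformed data, d x n
```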
The transformation matrix Ψ is initialized with all 0 elements, and the cluster indicator is initialized with a K-means clustering of the input data samples. Optimization for Y. With Ψ fixed, the objective reduces to a trace maximization problem: max_Y trace(Y^T X′^T X′ Y), s.t. Y^T Y = I_K. Since Y is an orthonormal matrix (Y^T Y = I_K), the spectral relaxation technique BID21 can be adopted to compute the optimal Y. The solution is based on the Ky Fan matrix inequalities. Theorem (Ky Fan). Let A be a symmetric matrix with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_n and corresponding eigenvectors Q = [q_1, ..., q_K]. Then max_{Y^T Y = I_K} trace(Y^T A Y) = λ_1 + ... + λ_K, where the optimal Y* is given by Y* = QR, with R an arbitrary K × K orthogonal matrix. This spectral relaxation solution can be regarded as a manifold learning method that projects data samples from the original d-dimensional space to a new K-dimensional space. In our case, the A matrix in the Ky Fan theorem takes the form of X′^T X′. In implementation, we first compute the K largest eigenvectors of X′^T X′, and then apply the traditional K-means method, under the induced K-dimensional space, to compute the cluster assignments. Optimization for Ψ. With the Y generated above, the objective becomes a trace minimization problem w.r.t. Ψ: min_Ψ J = trace(X′^T X′) − trace(Y^T X′^T X′ Y) + λ · trace(Ψ^T Ψ). Through a careful investigation of the gradient and Hessian matrix, we found that J can be proved to be a smooth convex function, with its Hessian w.r.t. Ψ being positive definite (PD) everywhere. Therefore, the only stationary point of J, where the gradient evaluates to 0, locates the global minimum and provides the optimal Ψ*. The convexity proof is given as follows. Convexity proof of J w.r.t. Ψ: Firstly, we update J, through several straightforward derivation steps (the details are given in Appendix A), to an equivalent form: J = trace((X + ΨG)(I − YY^T)(X + ΨG)^T) + λ · trace(Ψ^T Ψ). The gradient of J w.r.t. Ψ can then be computed as: ∂J/∂Ψ = 2X(I − YY^T)G + 2Ψ(G(I − YY^T)G + λI). To facilitate the convexity proof, we rewrite this gradient equation as: ∂J/∂Ψ = N + 2ΨM, where N = 2X(I − YY^T)G is a matrix of size d × n, and M = G(I − YY^T)G + λI is a symmetric matrix of size n × n, which can be proved positive definite based on the following theorem. Theorem. "Suppose that A ∈ M_{m,n} and B ∈ M_{n,m} with m ≤ n. Then BA has the same eigenvalues as AB, counting multiplicity, together with an additional n − m eigenvalues equal to 0." We know Y^T Y = I_K, whose eigenvalues are all 1s. Then, according to this theorem, the eigenvalues of YY^T are 1 (multiplicity K) and 0 (multiplicity n − K). In the matrix M, (I − YY^T) is a positive semidefinite matrix, as it is symmetric and its eigenvalues are either 0 or 1. G is also positive definite because it is a kernel (Gram) matrix with the Gaussian kernel. With G being symmetric PD and λ set to a positive number in our algorithm, the matrix M is guaranteed to be a PD matrix. Expanding the gradient to individual elements of Ψ, it can be further written as: ∂J/∂Ψ_{kj} = N_{kj} + 2 Σ_{l=1}^{n} Ψ_{kl} M_{lj}. With this, the gradient can be resized into a vector of size d · n, and the Hessian matrix of J w.r.t. Ψ can be calculated as: H = I_d ⊗ 2M. It is clear that H is a symmetric matrix of size (d·n) × (d·n), whose block diagonal is composed of d repeated copies of 2M. Let z be any non-zero column vector of size (d·n) × 1. To prove H is a PD matrix, we want to show that z^T H z is always positive. To this end, we rewrite z as [z_1, z_2, ..., z_d], where z_i is the sub-column of z with size n × 1. Then z^T H z can be computed as: z^T H z = 2 Σ_{i=1}^{d} z_i^T M z_i. As M has been proved to be a PD matrix, each term in this sum is positive.
Therefore, the summation z^T H z is also positive. Since z is an arbitrary non-zero column vector, this shows H is PD. With the Hessian matrix H being PD everywhere, the objective function J is convex w.r.t. Psi. As a result, the stationary point of J gives the unique global minimum solution Psi*. Setting the gradient equal to 0, we get

N + Psi M = 0.

The matrix M is proved PD, thus invertible. The optimal solution of Psi is given as:

Psi* = -N M^{-1} = -X (I - Y Y^T) G (G (I - Y Y^T) G + lambda I)^{-1}.

Based on the description above, our proposed CPD-UML algorithm can be summarized as the pseudo-code below (a Python sketch of the same loop follows at the end of this subsection):

Algorithm: CPD-UML
Input: data matrix X (d x n), number of clusters K, parameters lambda and sigma.
1. Initialize Psi = 0; initialize the cluster indicator Y from a K-means clustering of X; compute G = G(X, X).
2. Repeat until convergence:
   a. Y-step: with Psi fixed, compute the K largest eigenvectors of X'^T X' and run K-means in the induced K-dimensional space to update the cluster assignments.
   b. Psi-step: with Y fixed, set Psi = -X (I - Y Y^T) G (G (I - Y Y^T) G + lambda I)^{-1}.
Output: the clustering S' and the transformation Psi.

So far, we have developed and applied our proposed CPD-UML under input feature spaces. However, it can be further kernelized to improve the clustering performance for more complicated data. A kernel principal component analysis (KPCA) based framework is utilized in our work. After the input data instances are projected into kernel spaces introduced by KPCA, CPD-UML can be applied under the kernel spaces to learn both the deformation field and the clustering results, in the same manner as it is conducted under the original input spaces.

We performed experiments on a synthetic dataset and six benchmark datasets. Comparisons are made with state-of-the-art unsupervised metric learning solutions. The two-moon synthetic dataset was tested in the first set of experiments. It consists of two classes with 100 examples in each class (see FIG1). All the samples were treated as unlabeled samples in the experiments. Both the linear and kernel versions of our CPD-UML were tested.

Linear version CPD-UML. In this experiment, our CPD-UML was applied in deforming the data samples to achieve better separability under the input space. The effectiveness of our approach is demonstrated by comparing with the base algorithm, K-means. The clustering results of K-means and CPD-UML are shown in FIG1 (a) and 1 (b), respectively. The sample labels are distinguished using blue and red colors. The clustering results are shown using the decision boundary. It is obvious that K-means cannot cluster the two-moon data well due to the data's non-separability under the input space. Our CPD-UML, on the contrary, achieves a 99% clustering accuracy by making the data samples linearly separable via space transformations. The deformation field in the input space is shown in FIG1 (c) and (d). It is evident that our nonlinear metric learning model can deform feature spaces in a sophisticated yet smooth way to improve the data separability.

Kernel version CPD-UML. In this set of experiments, various RBF kernels were applied on the two-moon dataset to simulate linearly non-separable cases under kernel spaces. The clustering results of kernel K-means with different RBF kernels (width = 4, 8, 16, 32) are shown in FIG2 (a) - 2 (d). Colors and decision boundaries stand for the same meaning as those in FIG1. Obviously, the performance of kernel K-means got worse with sub-optimal kernels, as in 2 (b), 2 (c) and 2 (d). Searching for an optimal RBF kernel requires cross-validation among many candidates, which could result in a large number of iterations. This procedure can be greatly eased by our kernel CPD-UML. The CPD transformation under kernel spaces provides a supplementary force to the kernelization to further improve the data separability, the same as it performs under the input space. FIG2 (f) - 2 (h) demonstrate the effectiveness of our CPD-UML: the same RBF kernels as in FIG2 (b) - 2 (d) were used, but better clustering results were obtained. The ability to work with sub-optimal kernels should also be regarded as a computational advantage of our model.
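As referenced above, the alternating loop can be sketched in NumPy (our illustration, not the authors' code; gaussian_kernel is the helper defined earlier, and scikit-learn's KMeans provides the assignment step):

import numpy as np
from sklearn.cluster import KMeans

def cpd_uml(X, K, lam=1.0, sigma=4.0, n_iters=20):
    # Alternate the Y-step (spectral relaxation + K-means) and the
    # closed-form Psi-step; X is d x n.
    d, n = X.shape
    G = gaussian_kernel(X, sigma)
    Psi = np.zeros((d, n))                  # transformation initialized to 0
    labels = KMeans(n_clusters=K).fit_predict(X.T)
    for _ in range(n_iters):
        Xp = X + Psi @ G                    # transformed data X'
        # Y-step: top-K eigenvectors of X'^T X', then K-means in that space
        _, vecs = np.linalg.eigh(Xp.T @ Xp)
        labels = KMeans(n_clusters=K).fit_predict(vecs[:, -K:])
        # Orthonormal indicator Y with Y_ic = 1/sqrt(n_c) for members of S'_c
        Y = np.zeros((n, K))
        for c in range(K):
            idx = labels == c
            Y[idx, c] = 1.0 / np.sqrt(max(idx.sum(), 1))
        # Psi-step: Psi = -X A G (G A G + lam I)^{-1}, with A = I - Y Y^T
        A = np.eye(n) - Y @ Y.T
        Psi = -X @ A @ G @ np.linalg.inv(G @ A @ G + lam * np.eye(n))
    return labels, Psi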
Experimental Setup. In this section, we employ six benchmark datasets to evaluate the performance of our CPD-UML. They are five UCI datasets (Breast, Diabetes, Cars, Dermatology, E. Coli) and the USPS_20 handwritten data. Their basic information is summarized in Appendix B.

Both the linear and kernel versions of our proposed approach were tested. For the linear version, the K-means method was used as the baseline for comparison. In addition, three unsupervised metric learning solutions, AML, RPCA-OM (BID12) and FME, were utilized as the competing solutions. For the kernel version, the baseline algorithm is kernel K-means. NAML (BID2), the kernel version of AML, was adopted. Since RPCA-OM and FME do not have kernel versions, the same kernelization strategy as in 2.4 was applied to kernelize these two solutions. RBF kernels were applied for all kernel solutions.

Each dataset was partitioned into seen and unseen data randomly. Optimal cluster centers and parameters are determined by the seen data. Clustering performance is evaluated via the unseen data, which are labeled directly based on their distances away from the cluster centers. Similar setups have been used in (BID11; BID6). In the experiments, we performed 3-fold cross validation, in which two folds were used as seen data and one fold as unseen data. In the competing solutions, the hyper-parameters were searched within the same ranges as in their publications. In our proposed approach, the regularization parameter lambda and smoothness parameter sigma were searched from {10^0 ~ 10^10} and {2^0 ~ 2^10}, respectively. The RBF kernel width for all kernel methods was chosen from {2^-5 ~ 2^10}. Since the performance of the tested methods depends on the initialization clusters, the clustering results of K-means were applied as the initialization clusters for all the competing solutions in each run. The performance of each algorithm was calculated over 20 runs.

Results. We measured the performance using the ground truth provided in all six benchmark datasets. Three standard performance metrics were calculated: accuracy, normalized mutual information and purity. To better compare the tested methods statistically, we conducted a Student's t-test, at a significance level of 0.05, between each pair of solutions for each dataset. The solutions were ranked using a scoring schema from (BID17). Compared with another method, an algorithm scores 1 if it performs statistically significantly better than that opponent; 0.5 if there is no significant difference; and 0 if it is worse. Tables 1, 2 and 3 summarize the clustering performance and ranking scores. The best performance is identified in boldface for each dataset. It is evident that our CPD-UML outperforms the other competing solutions in all three standard measurements with significant margins. The highest ranking scores in the performance tables are all achieved by our kernel version approach. In addition, significant improvements have been obtained by our proposed approach compared with the baseline algorithms K-means and kernel K-means. It is also noteworthy that the linear CPD-UML achieved results comparable with the other competing methods using RBF kernels, which further demonstrates the effectiveness of our nonlinear feature space transformation.
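The pairwise significance-scoring schema just described can be sketched as follows (our illustration; it assumes per-run accuracy arrays for each method, paired across runs because all methods share each run's K-means initialization):

from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel

def ranking_scores(results, alpha=0.05):
    # results: dict mapping method name -> array of per-run accuracies.
    # Score 1 per opponent beaten significantly, 0.5 per tie, 0 per loss.
    scores = {m: 0.0 for m in results}
    for a, b in combinations(results, 2):
        _, p = ttest_rel(results[a], results[b])
        if p >= alpha:                      # no significant difference
            scores[a] += 0.5
            scores[b] += 0.5
        elif np.mean(results[a]) > np.mean(results[b]):
            scores[a] += 1.0                # a statistically better than b
        else:
            scores[b] += 1.0
    return scores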
The proposed CPD-UML model learns a nonlinear metric and the clusters for the given data simultaneously. The nonlinear metric is achieved by a globally smooth nonlinear transformation, which improves the separability of the given data during clustering. CPD is used as the transformation model because of its capability of deforming feature space in a sophisticated yet smooth manner. Evaluations on synthetic and benchmark datasets demonstrate the effectiveness of our approach. Applying the proposed approach to other computer vision and machine learning problems is a direction of our future research.
a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms.
1,492
scitldr
As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can. As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program. At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation. It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side. Escrow systems only complicate this further, adding an additional layer of trust required of both parties. Currently, data owners and model owners do not have a fair pricing system that eliminates the need to trust a third party and to train the model on the data, which 1) takes a long time to complete, and 2) does not guarantee that useful data is paid for valuably and that useless data is not, without trusting the third party with both the model and the data. Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning. As powerful as these methods appear to be, we show them to be impractical in our use case, under real-world assumptions, for preserving privacy for the data owners when facing black-box models. Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data. This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions. We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model. Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that would reveal details about the model or the data to the other party.

Driven by the application of facilitating data exchange with clear consent, we focus on the most needed form of data transaction: a potential improvement to an already powerful model. A tiny model without sensible accuracy is not proven to work and is likely not deployed for important tasks. Models trained on only a small scale of data are furthermore very sensitive to new data of any form, so the question of whether a new dataset brings forth improvement is easy to answer (yes!). A data transaction is the hardest and most meaningful in the setting of a robust model potentially improving with the addition of a relatively small set of data that makes the training set better mimic that of the real world, i.e., deployed cases. This subsection shows that the naive approaches of exchanging data upfront or paying blindly upfront are both undesirable, as summarized in TAB0.

In industry, because only tech giants with sufficient existing centrality get to aggregate the most data, a convenient naive practice involves the data providers giving the data up in advance:
1. An individual user using storage, social, or other valuable features may give up their data within the services for convenience. It can be argued that they are trading their data in for improved services they receive.
2.
A small company may provide data to be evaluated, trusting the auditing within the big companies to ensure that useful data does not go unpaid.
3. Academic researchers in fields such as linguistics, biology, or other fields, upon hearing about the power of machine learning in industry, give their field-study data, collected over decades, to model owners, who tend to be entities of big corporations, for free, hoping for collaboration and interesting insights.

The fairness of the pricing is highly questionable, as the implicit contracts get difficult to verify and enforce. Furthermore, data that is currently evaluated to be useless will likely still sit in the company's cluster, so the company may decline reciprocating gifts such as academic collaboration while using the data for some other service in the future. It is difficult for data providers to argue for a latent payment, since any data given up is given up. Improvement of services may never get delivered, and a user of a centralized service who has given up their data will have trouble telling if their data exchange was fair at all (even if their evaluation was purely psychological).

A comparable approach is the other way around: a data company prices their dataset, such as phone call clips, in advance, and a model owner, such as a speech recognition service, pays for the data in full, before any evaluation is done. The fairness of the pricing is dubious, as the value of the data is not evaluated against the model. While this approach has a high security guarantee for one of the involved parties, it is clearly inflexible. For one thing, in both cases, the financial incentives do not align: the model owners would rather not pay fairly for the data, while the data owner would rather not price their data fairly. This is further complicated if we look into more granular fairness that relates to the data's interaction with the model: can we reason about the pricing of the data and the presumably positive effect it has on the model?

For simplicity, we assume the model owner has a relatively robust large model that can use improvements. A hypothesis is made that a particular data provider may be able to improve the model. For the simplicity of a privacy argument, and for the use case of user data, the data here is assumed to be totally private beforehand as far as the involved parties are concerned. That is, there is unknown information about the data that may benefit the model. Our model-data efficacy (MDE) measure is constrained to preserve rankings: the data change under MDE for a given model is similarly ranked against other data points, as with Spearman's Correlation, when compared to the ranking done directly on the model. Pricing on MDE is used to facilitate a one-time data transaction that is fair; that is, useless data is not paid much money, while useful data gets properly valued. Our technique imposes a pricing function that is applied to model approximation techniques, which are then applied on given data, as summarized in Figure 4.2. A pricing can thus be put forward before a transaction agreement is put forth; the data owners get paid for the data, while the model owners, after paying, will have full flexibility in their usage of the data, which has been estimated to be valuable. An improvement is eventually evaluated as the parameter updates that can lead to better performance on a particular test set, which we reduce to just desirable parameter updates. This choice is a deliberate one.
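The rank-agreement requirement can be checked with Spearman's correlation; a minimal sketch (names are ours), assuming we have proxy (MDE) valuations and, for validation, direct model-based valuations for a pool of candidate data points:

from scipy.stats import spearmanr

def rank_agreement(mde_values, model_values):
    # Spearman correlation between valuations from the cheap proxy (MDE)
    # and valuations measured directly on the model; a value near 1 means
    # the proxy ranks candidate data the same way the model would.
    rho, p_value = spearmanr(mde_values, model_values)
    return rho, p_value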
In the next section, we discuss the usability of training on private data for the data transaction use case under our assumptions of a relatively robust but unknown model. The rest of the paper is structured to address other approaches, how practical, secure, private, and fair they are, and to give details on our solution.

3 IS THERE USABLE TRAINING DATA PRIVACY?

As machine learning models become more and more complicated, their capability can outweigh the privacy guarantees encryption gives us. Our approach of approximating the model, rather than using techniques to protect the data, is a calculated one: when facing black-box models, encrypted training data is not so private. This section addresses the practical implications of private training data in our use case.

Once a transaction of the raw data is made, the data no longer belongs to the data owner. The model owner is then free to use the data however many times and in however many ways, unless further restrictions are applied. It is fair to assume, from a security and privacy perspective, that any data that is given up is given up forever. Under that assumption, a fair pricing scheme is needed to make sure a pricing happens before a transaction, thus giving data owners the opportunity to thoroughly trade the entirety of the data. To maintain statistical validity, a test set is ideally used only once, to prevent model improvements that are just overfitting. Similarly, our data evaluation framework affords a one-time evaluation between every pair of data and model, since repeated testing would leak a lot of contextual information. The proposed method, for instance, assumes the pricing and the transaction to be both one-time activities. See the next sections for details. Note that even though giving the model owners the entirety of the data may lead to overfitting, as Numerai had tried to solve (BID10), such a concern is the model owner's loss, and therefore not a concern of our paper, which aims to align the economic incentives of model owners and data owners.

The motivation for training on encrypted data includes user data privacy and the desire to still improve a learned model. These ideals are tradeoffs. Specifically, for notable improvements on the model the data can demonstrate, some information about the data will need to be leaked, regardless of whether the model input was encrypted. More practically, it is well-known that popular networks on media data can memorize complete noise (though memorization does not by itself reveal the data), as shown by (BID12), making it further difficult to maintain the privacy of data for any meaningful model improvement that the model owner needs to observe, because the parameter updates alone reveal much information about the data. The following is a sketch of the proof from an information theory standpoint with real-world scales.

Suppose we have a high resolution image passed to a visual network. A practical choice for an image would be a high resolution selfie, such as a 1024x1024 pixel black and white image. The network is assumed to be a smaller variant of AlexNet (BID8), which has only 1 million parameters and only outputs binary updates. Note that we restrict the model capability to have a conservative estimate. Suppose every model parameter update is non-trivial, defined as greater than ε for some ε with 0 < ε < 1.
For a given input on the scale of 1MB, once encrypted and trained against an unknown model of 1 million parameters, the training outputs binary parameter updates; that is, r_i = 1 for updating θ_i and r_i = 0 for not updating θ_i. As we know, the model parameters can encapsulate 1 million bits of information, so they have the capacity to model such an input pixel by pixel. This means that there is no good information-theoretic guarantee for encrypted data on a completely unknown model of arbitrary scale, especially in the use cases today. In fact, even for unencrypted data, the model owner should be incentivized not to reveal the details of its model and should instead use a proxy for the model in evaluating the efficacy of data.

In practice, unrestricted raw training data is useful for model improvement. In addition, extremely restricted visibility into data can be dangerous, as in the case of training on encrypted data. A practical solution thus calls for more flexibility in data usage for the model owners. The model owners' incentives are against replacing the entire data with much less information, such as with the results of feature extraction; because encrypted data is undesirable, the model owners will be less likely to employ it. Firstly, the engineering work involved each time such a filter of information is made is expensive. This adds to the cost of training on encrypted data, further disincentivizing model owners from sticking to an encryption scheme that requires regular updates. More broadly, the model owner would like to sometimes change the architecture of the model to improve it; not having the underlying training data stored makes such a process hard. If a feature extractor adds one more entry, all the data would need to be extracted again. Similarly, if an image classifier's model architecture changes, all the data would need to be collected and purchased again. Even with the same model, extremely restricted data is inflexible for the model owners in improving their models.

As deep learning matures as a technology, modern development techniques for a safe and robust model require the model owners to have intimate access to data. The model developers usually require a lot of time, tweaking, and testing, and in general more iterations of runs with the training data, in order to form a better model in the development phase. This development phase is not separable in practice: visibility into why their model works and does not work often relies on examining the data. In the case of media input, the raw data is often more human readable than its representations. In addition, not having the actual data at the disposal of model owners is undesirable and potentially unsafe. For example, adversarial training data causes misclassification (BID4); autonomous driving cars that misclassify a stop sign can potentially cause human deaths. To remedy such risks, practical model testing and debugging methods often rely on visibility into data (BID7); for instance, adversarial images that cause misclassification need to be examined (BID1). Each of these testing techniques requires yet another operation on the data; if we want the data to remain encrypted, each of these testing frameworks would need to be hand-crafted to accommodate it.

Fully homomorphic encryption, verifiable contracts and proofs, and federated learning techniques address the issue of privacy-preserving training. We discuss their privacy, practicality, and fairness tradeoffs in this section, as summarized in TAB2.
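As a back-of-envelope check of the capacity argument above (illustrative numbers only, mirroring the text's assumptions):

image_bits = 1024 * 1024      # 1024x1024 black-and-white image, 1 bit per pixel
update_bits = 1_000_000       # 1M parameters, 1 binary (update / no-update) bit each
print(update_bits / image_bits)  # ~0.95: the observable updates carry roughly
                                 # as many bits as the image itself, so encrypting
                                 # the input yields no meaningful guarantee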
In particular, we discuss the beneficial intuitions in federated learning, as summarized in TAB3.

Homomorphic encryption is appealing in the privacy and security community for its power to complete verifiably correct computation without revealing the original inputs. Naively it can be applied to either the data or the model, yet the computational cost is prohibitively expensive; as (BID11) show, to learn a classifier for recognizing 101 objects using 2048-dimensional deep features, a verifiable summing operation encrypted with a 1024-bit key takes about 3 ms for a single weight using a modern CPU on Mac OS X, and is thus expected to take more than 16 hours to encrypt all classifiers even in a simple case. Encrypting the model is thus not practical. Encrypting the data for machine learning, while still slow, has led to many observations in the differential privacy community. For instance, some models have been shown to have secure and efficient solutions, such as ridge regression (Giacomelli et al.). Here, we focus on those with strong security guarantees that are in practical use.

The advances of bitcoin-related research touch on aligning financial incentives with machine learning algorithms' effectiveness using blockchain. Numerai addresses economic incentives against overfitting, while OpenMined (et al., 2017) encourages a collaboration of open protocols and development that incorporates all of federated learning, homomorphic encryption, and blockchain smart contracts, which are all covered here.

Figure 3: OpenMined architecture, which aims to democratize data by making it easier to exchange user data. It consists of the federated learning server Sonar running on the blockchain, Capsule for handling encrypted key exchange, Mine with user data, and Syft, which contains neural networks that can be trained in an encrypted state (et al., 2017).

Advances in using blockchain smart contracts often have use cases that diverge from our goal, making them suboptimal for security, flexibility, and pricing fairness. Security-wise, the data detail may remain secret, yet it can be inferred from the model improvements. To mitigate that, a smart aggregator is designed to rely on the other users' data to mix up the updates, so that such information regarding data detail is differentially private. However, making strong assumptions about the distribution of other data not in our control is impractical, as we deal with a single transaction between a model owner and a data owner. In the case of a training data transaction, we want the data to be preferably visible to maximize its flexibility. Absolute security of the data is also less relevant in the pricing case, as we hope to guard the data against its use case; that is, we aim to prevent its specific model improvements from being known while still demonstrating that the model is improved by the data.

Federated learning has seen great advances, without sacrificing much accuracy, while preserving privacy (BID9). It potentially can be used to allow model parameter updates to be estimated before the central big model gets that information. That is, a transaction can happen before the gradient updates are passed back, and a pricing can be done based on the gradient, achieving the desired privacy guarantee for data owners. This method is an improvement on all the previous methods mentioned when it comes to privacy for data owners, who can benefit a lot from the exchange.
However, it is relatively unsafe for model owners, as a miniature version of the model is deployed per client, lacking true ownership and making the attack vector for knowledge about the model very large. In addition, it still requires the effect of the exact data on the exact model to be known before the transaction. It further calls for a secure aggregating solution, though existing proposals appear to work in experimental results in the case of visual learning (BID11).

Our approach can be seen as an improvement on the federated learning scheme in that the data owners' privacy is prioritized over the model owner's model details. Federated learning often involves neural networks that can be trained in an encrypted state, in order to retain the information regarding the model, preventing it from being stolen. Thus they can be very slow. To optimize for them, the distributed model is hand-crafted, which is time consuming. This encryption guarantee is necessary for federated learning, since the architecture requires the model to be distributed per user, making the attack vector for model theft very large. Under such a constraint, there is hardly any black-box, plug-and-play solution for speeding up a neural network that is trainable in an absolutely encrypted state. Our method can be seen as an improvement on the federated learning scheme, utilizing the intuition that both the model and the data can be protected to achieve fairness.

To secure the data, a pricing scheme is needed that reveals as little data detail to the model owners as possible before a transaction happens. The eventual system the data is tested on would output a scalar price for given data. It consists of a pricing component, a model-data efficacy (MDE) middleware, and the model itself. The middleware is prepared by the model owners, who have control over the details of their model. A safer mediation is abstracting the middleware into two parts: a model approximation layer and an efficacy calculation layer. To the model owner, the construction of MDE is under their control; for simplicity, we use a binary score. Then it suffices to have the data causing a positive output be paid a uniform price, while the data causing a zero output is not transacted, as it is likely useless.

We are formulating the efficacy of data with respect to the model T. Such efficacy, denoted as ρ, can be seen as a function that loosely maps data to a scalar. Since we care less about the cases when the data does not have value to us, including when the data is a duplication, is of high similarity, leads to no parameter change, no model improvement, or even a regression, we will only consider the non-negative cases. For the weak result we wish to obtain (useful versus not very useful), many existing methods can be utilized. Interpretability solutions aim to alleviate the notorious difficulty of reasoning about neural networks. Because the interpretation steps are generally faster or less resource-draining than running the data through the original model, some of these techniques are suitable for our purposes. Leaning on these, we can set up a general framework f(T, D) by which the model T's effect under data D is approximated.

Influence functions are very flexible: it is shown that the approximation of the effect of the model works well on both seen and unseen data, making them suitable for our purposes (BID0). The model access influence functions need is gradients and Hessian-vector products, effectively treating the model itself as a black box.
Valuable results are obtained in evaluating experimental data efficacy even on non-convex and non-differentiable models. The versatility of influence functions makes them a good direct MDE measure to be used for pricing. A similar approach that aims at interpretability of neural network models (often difficult to reason about) is model extraction (BID0), which effectively approximates the model as a decision tree. Model extraction has also been shown experimentally to give a better data-efficacy approximation than just training a decision tree from the same set of data. This suggests that the extracted model preserves the behavior of the original model well; that is, data that leads to improvements in the model gives the same inference in the extracted model as the original model would, and useless data that does not shift the original model will not shift the extracted model. Leveraging that, we get a fast approach to evaluating data efficacy, thus providing fair pricing approximations.

Besides interpretability techniques through data, a more generic class of techniques that make the model smaller and faster (and less energy-consuming) can be utilized for pricing purposes, because they don't reveal the full model while still retaining its behavior. There are many techniques for model compression (BID5; BID6), and our selection for pricing models requires the underlying model compression algorithms to take relatively black-box approaches to maintain generality and secrecy. Model compression techniques are very powerful in reducing the resource footprint of large models while retaining the overall accuracy. Another extension to approximate data efficacy on a model includes training an ensemble model under a sparsity assumption on models. Because this is closer to just replacing the existing model, rather than keeping it as is while shrinking it, we will leave it to future work to explore the possibilities. Some techniques are model-specific, such as model distillation for ensemble models (BID6).

A pricing model takes data efficacy estimates and outputs a price. The model owners select the desirable parameter updates. Because a much smaller component is involved in evaluating the training data, encrypting it becomes a lot more practical than encrypting the whole model. Because we simplified the communication between data owners and model owners to a one-time scalar, the data security issue is largely simplified. Once the price is agreed upon, a verifiable transaction is trivial to implement to ensure that the same data is transacted after the money is paid.

Encrypting the data or approximating the data is a lost cause for the data owner, whose privacy is not guaranteed. Since the model owner has greater context on similar data distributions, they can infer much information about the data without actually seeing it. Because data cannot be practically secured without losing its value before being handed over, pricing and the transactional form become relevant. In this scheme, no data is given up until money is paid. The suggested methods for Model-Data Efficacy include influence functions, which explore the change in parameters with training data; model extraction, which approximates a trained network with a decision tree; and learned model compression techniques. They all work to approximate the effect of data for the model owners without showing the exact makeup of the model.
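For illustration, an influence-function MDE score could be sketched as follows, using the standard up-weighting influence formula; the gradient and Hessian-vector-product callables match the black-box access pattern described above, while the function names and the conjugate-gradient solve are our assumptions, not the paper's implementation:

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def influence_score(grad_train, grad_test, hvp, dim):
    # Up-weighting influence: -g_test^T H^{-1} g_train, where hvp(v)
    # returns the Hessian-vector product H v at the trained parameters.
    H = LinearOperator((dim, dim), matvec=hvp)
    ihvp, _ = cg(H, grad_train)        # iteratively solve H x = g_train
    score = -float(grad_test @ ihvp)
    return score                       # negative: up-weighting the training
                                       # point lowers test loss (useful data)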
The crux of the usability of the solution lies in whether the approximation technique preserves model details, but combining secure transaction techniques is sufficient to make the approximated pricing model entirely private (beyond its output) without further approximating the effect of these pricing models, thus keeping them as accurate as those in the last section. Despite the potential accuracy loss, usability is much better. For any transaction reached through model approximation, we still maintain a usable privacy guarantee. Securing a pricing function, which is very small, is easy. Enforcing, by way of contract, a guarantee that a money-data transaction happens after agreeing on a price is much easier than enforcing contracts that bind within the large model owner's organization, such as trusting a security audit.
Facing complex, black-box models, encrypting the data is not as usable as approximating the model and using it to price a potential transaction.
1,493
scitldr
We present Line-Storm, an interactive computer system for creative performance. The context we investigated was writing on paper using Line-Storm. We used self-report questionnaires as part of research involving human participants, to evaluate Line-Storm. Line-Storm consisted of a writing stylus and writing pad, augmented with electronics. The writing pad was connected to a contact microphone, and the writing stylus had a small micro-controller board and peripherals attached to it. The signals from these electronic augmentations were fed into the audio-synthesis environment Max/MSP to produce an interactive soundscape. We attempted to discover whether Line-Storm enhanced a self-reported sense of being present and engaged during a writing task, and we compared Line-Storm to a non-interactive control condition. After performing statistical analysis in SPSS, we were unable to support our research hypothesis, that presence and engagement were enhanced by Line-Storm. Participants reported they were, on average, no more present and engaged during the experimental condition than during the control condition. As creativity is subtle and varies with person, time, context, space, and so many other factors, this was somewhat expected by us. A statistically significant result of our study is that some participants responded to Line-Storm more positively than others. These Preservers of Line-Storm were a group, distinct from other participants, who reported greater presence and engagement and who wrote more words with Line-Storm and during the control condition. We discuss the results of our research and place Line-Storm in an artistic-technological context, drawing upon writings by Martin Heidegger when considering the nature of Line-Storm. Future work includes modifying interactive components, improving aesthetics and using more miniaturized electronics, experimenting with a drawing task instead of a writing task, and collaborating with a composer of electronic music to make a more interesting, immersive, and engaging interactive soundscape for writing or drawing performance.

Our philosophy is that people have become frugal regarding "joy"! How we all are becoming increasingly suspicious of all joy! The desire for joy already calls itself a "need to recuperate" and is beginning to be ashamed of itself. -Nietzsche

Tod Machover has emphasized the need to augment existing, traditional musical instruments while ensuring these augmentations act as stimuli to the creative process, not simply as additional features. One focus of this paper is to find a way to enhance human creativity. Another is to observe the emergence of the work when the system is used. A third is our attempt to make something that is fun to use. We have conceived, designed, constructed, and evaluated our system, called Line-Storm, attempting to enhance a sense of both presence and engagement in the user. Only through performance with Line-Storm does Line-Storm come into being. The method of experience sampling (interrupting a person as they go through their daily activities and asking questions about their experience) has been used to find that when people's minds are wandering, they are less happy. "Be Here Now" is a mantra popularized in the United States by, for example, Dr. Richard Alpert, who became Baba Ram Dass. This mantra now occurs in a leading business publication urging middle managers everywhere to "be present" to be a "great leader" and presumably to reap the rewards of "success."
Even the LSD experimentation Dass describes in Be Here Now is now carried out on a small, socially acceptable scale in Silicon Valley, where tech workers "microdose" themselves with LSD to enhance their creativity and improve interpersonal interactions. Some esoteric practices leading to creative work may conjure images of the lone painter or poet, or of a sculptor in her studio. It is not only Silicon Valley technocrats, scrambling for millions and billions of dollars, who might benefit from enhancing human creativity.

Even now one is ashamed of resting (equated to a waste of time in our minds), and prolonged reflection almost gives people a bad conscience. One thinks with a watch in one's hand, while eating meals, and reading the latest news of the stock market; we live today not to miss out on anything. -Nietzsche

Note that Nietzsche was writing well over 100 years before "FOMO," or "fear of missing out," became an expression related to early 21st-century smartphone users. Our point is that we recognize that there are different meanings to the phrase creative work. For example, billionaires and poets are not endorsing the same thing when both use the word "creative" or the word "work," though both may praise "creative work." Some decry the extreme measures taken by LSD trippers in the 1960s, and want to turn the drug into an effective money-making tool. An irony is that creative work translates into fortunes undreamt of by poets such as Robert Frost. There is a story in which Joseph Heller, author of the novel Catch-22, when told of an investment banker who had made more money last year than he might ever be expected to make from the novel, replied that he had something the investment banker would never have: enough. So, we argue that it is possible that what was good for Heller, in the anecdote, would probably not have been good for the investment banker, even when the concept of creative work is broadened to include both their endeavors. Enhancing one type of creative work may not enhance the other. The ecstasy of the composer remarked upon by Csikszentmihalyi, or of the novelist, may not be found in the same way the "A-ha!" of the software developer is found.

Our work involving Line-Storm has been an attempt to provide a ludic system for use by the creative worker. Gaver defines a ludic system as one that is used for its own sake, and not for some other end. By attempting to increase a user's sense of presence and engagement (their being here now), our hope is to provide an immersive environment in which to do creative work with a writing stylus such as the mechanical pencil we chose to use. Taskscape is a complex term from Ingold's "The Temporality of the Landscape", which we will refer to later, when speaking of the new possibilities of a task that Line-Storm exposes, as affordances in Gibson's sense of the term. One of our committee members, a professor of music, suggested that our work involves the taskscape of the creative worker, working with a writing stylus and paper. This taskscape includes the place, people, and objects surrounding the creative worker doing creative work. The taskscape is social. The experience of the user of our system, and of the research participants who gave of their time to be a part of this thesis, is a social experience, and the writing tasks they performed are tasks that fit into "an array of activities", which include the writing of this sentence.
We do not know (as above, because too little work has been done in this area) whether the taskscape of a user of Line-Storm is altered in ways more conducive to writing poetry than to the drafting of microprocessor plans, for example, or vice versa. Rather than devise a completely new tool, we have chosen to augment an otherwise ordinary mechanical pencil. (We could have similarly augmented a paintbrush or a pen, though the paintbrush would have required a different approach: we depend in part on the sounds made by the user's touching of the writing pad, and we cannot expect a paintbrush to make the same level of sound as a pencil lead.) Perhaps by looking away from our goal, creative enhancement (as we must when looking at faint night-sky objects with the naked eye), and making the use of the system the primary activity, and the work done with it a secondary activity, we think we will find ourselves progressing in that direction, whereas a direct approach would not have succeeded. By giving a chance for play, we have hoped our system, Line-Storm, serves as stimulant and facilitator "to the creative process itself," as Machover advises.

Line-Storm is not digital art as such. Its products are physical objects and phenomena. They are (analog) drawings or writings. It produces sounds which, though digitally mediated, are analog sounds. The computer is, in Line-Storm, an intermediary and a facilitator, with a visual arts component and sound, satisfying criteria for Demers' second sub-genre of sound art. Line-Storm amplifies and augments the sonic aspects. The sounds made while writing or drawing are captured using a contact microphone and are played through headphones. Sounds of natural phenomena (the sounds of a thunderstorm) augment the writing or drawing experience. These sounds are recorded analog yet are digitally mediated. Performance with Line-Storm can involve performance in art, in technology, and/or in play. A performer using Line-Storm may be using it for different reasons, including for the fun of using it, to write a letter to a friend, to write down a cooking recipe, to write poetry, to draw, because it is a curious thing one wants to understand, or for other reasons. A performance occurs "as action, interaction, and relation". Line-Storm is an interactive system, where the performer's actions cause sounds to occur, which may influence subsequent actions. The sounds can be controlled, to some degree, by the performer. The drawing or writing produced during performance is one product of the performance. The sounds, which can be recorded and played back, are another product. The audience of the performance may be the performer alone, or a person or persons presented with one or more products of the performance: the written or drawn product or the sound produced. Line-Storm is a way of "honoring the ordinary" in Schechner's words. Schechner refers to "Nietzschean" play, making specific reference to a dice roll. This is contrasted with playing a game with rules that are agreed upon by everyone before play starts. Play is a way of introducing flow into one's life and has an organic quality. Deleuze has discussed Nietzsche's approach to the dice roll in Nietzsche and Philosophy. Sounds which we added to Line-Storm included those of thunderstorms, which have organic qualities similar to the movement of air through a room pushed by a ceiling fan, or sounds from nearby birds. Thunderstorms followed by quiet rain can be meditative to some as well. We wanted to use analog (thunderstorm) sounds for our analog ludic system. Gamification is contrary to play and playfulness and renders "personal" relationships impersonal.
Bohme has commented upon the superfluity of personal relationships to the normal functioning of society. The digital medium is one of permanence and impermanence. Privacy becomes a concern in the digital realm in ways not found in the analog realm. Talk of a "right to be forgotten" has occurred surrounding the at-times oppressive permanence of the digital medium. Line-Storm as a medium provides the privacy familiar from the analog world. Letters can be (and have been) intercepted, shared unexpectedly, and so on, but personal letters written on paper are not automatically added to databases compiled on users of internet services. One motivation of Line-Storm is that it could preserve the practice of the handwritten letter.

No human tools are more beautifully designed for their purpose than traditional musical instruments, which no collection of buttons, wires, and sensors can replace. That's why it is important that technology be used to augment, not replace, existing instruments. -Tod Machover

Previous work that investigated augmenting a writing stylus with electronic or computer systems includes MusicGrip, a pressure-sensor-controlled system in which a writing stylus was used to control analog synthesizers. MusicGrip used a one-to-one correspondence between sensor input and synthesizer output. Shichinohe et al. used a camera system to implement an augmented-reality system to aid in the instruction of calligraphic writing. Their system monitored brush position and body posture, providing both ambient (color) feedback and verbal feedback. Part of a performance (the Brain Opera), the Digital Baton was a baton, augmented with sensors, used as a New Interface for Musical Expression (NIME). The baton carried an infrared LED at its tip, pressure-sensitive resistors that were controlled by the performer's fingers gripping the baton, and three +/-5g accelerometers. These inputs were mapped to musical parameters. The Digital Baton was a wired NIME, but the authors did discuss what could be done to make it wireless. Much work relevant to ours has been done by Tod Machover, whose research group at MIT's Media Lab developed the technology behind Guitar Hero. His work with hyperinstruments (electronic augmentations of traditional musical instruments) is very interesting. Machover's Hyperstring Trilogy was composed and performed using hyperinstruments (hypercello, hyperviolin, and hyperviola), which were traditional classical instruments augmented with sensors. The entire performance was itself augmented using computer programs that generated accompanying musical sounds. Machover's philosophy of augmenting, and not replacing, traditional tools is one we have followed in our work. LiveScribe (http://www.livescribe.com), which once produced a wireless pen with handwriting recognition, no longer develops the electronic writing pen, so we did not involve the company's work in our work. Work involving the augmentation of objects other than writing utensils or musical instruments includes the Sonic City system, in which the urban environment served as the interface. Sonic City equipped an urban pedestrian with a music-generating computer that was "aware" of the way the user traversed the streets and sidewalks of the city.
The Bluetooth Radio Ball Interface (BRBI) augmented a sport ball with sensors, providing sound and music capabilities (mediated by a computer and a Bluetooth radio connection). The Urban Musical Game was another augmentation project involving a sport ball and sound/music generation based on the ball's motions; videos of the use of the Urban Musical Game have been made available on Vimeo (https://vimeo.com/26413625) and (https://vimeo.com/22120867). Measurement of writing motions helps diagnose people suffering from obsessive-compulsive disorder. Handwriting and cell-phone texting have been compared as therapies for Broca's aphasia, with handwriting emerging as the more effective treatment. Embodied cognition models have been used to investigate neural relationships with character writing, copying, and recognition. Preschool children have taken part in fMRI experiments, which demonstrate the importance of "learning-by-doing" approaches to literacy learning, with kinesthetic activity working in tandem with cognition. Existential phenomenology has informed thought regarding the teaching of personal writing, without any technological involvement. Recent work by Kiefer has found neuropsychological evidence for benefits from writing by hand, as opposed to writing using a computer keyboard, including improved learning of reading and writing skills in young children. Morphy and Graham argue that students, more generally, appear to write better when using word processors than when composing by hand, considering the composition tools (spell check, grammar check, etc.) available in modern word processing software. Al-Ghabra focused on the importance of handwriting for the development of composition skills in college students. Earlier work by Collier and Werier found no difference between high-level characteristics of textual production in proficient adult writers who composed either by hand or while using a word processor. An early impetus for our work was a desire to encourage handwritten letter writing, to provide a system that might encourage a person to increase or maintain their stamp-and-envelope transmission of handwritten letters. Thoreau decried some forms of letter writing, writing that "The penny-post is, commonly, an institution through which you seriously offer a man that penny for his thoughts which is so often safely offered in jest"; from a different viewpoint, writing letters has been a way for families to stay connected through the generations and has functioned alongside newer media. Twentieth-century German philosopher Martin Heidegger commented upon handwriting in his Parmenides, declaring its superiority over use of a typewriter. Philosopher of technology Don Ihde faulted Heidegger for Heidegger's comparison. Philosopher Jacques Derrida also faulted Heidegger, for implying, while emphasizing the importance of "the hand" for humanity, that human beings only have one hand. A typewriter does not offer the affordance of being easily carried up a mountain, although Nietzsche owned a portable typewriter. Likewise, the poem title of a friend, "Notebooks," would read differently if it had to do, not with notebooks, but with some digital note-taking contrivance such as Google's Keep app (http://keep.google.com). Ihde reminds us of the non-transparency of electronic and digital communications media such as the telephone, and here, with Nietzsche's typewriter and Gregory Lawless's poem, we see some effects of medium, in practice (typewriter) and in discourse (poem title).
Ihde does not want to go with Heidegger, to declare there is not just a difference between media but a hierarchy of values. These dismissals, by Ihde and Derrida, of Heidegger's criticism of the typewriter and his preference for handwriting, seem to pretend to have the benefit of hindsight. Ihde and Derrida do not convince us, because they, like the king Thamus, do not look enough at what might support Heidegger's position, but only criticize it, their seeming invincibility coming from their objections to Heidegger being voiced decades after his death in 1976. Heidegger decries what he sees as hastiness in the face of a technologically facilitated information glut: "[N]owadays we take in everything in the quickest and cheapest way, only to forget it just as quickly, instantly." Both Heidegger, in his "Memorial Address," and Jacques Ellul, in The Technological Society, declare technology to have become "autonomous" (in Ellul's phrasing), saying its progression could not be stopped, even if human beings wanted to stop it. "These [technological] forces have moved long since beyond his will and have outgrown his capacity for decision," Heidegger says, regarding our relationship to technology and "calculative thinking". Our thinking here is that technology creates more options, including the option to not use it; non-users of a technology have been considered by Satchell and Dourish. These are core questions we consider as we discuss Line-Storm. Should we augment human capability, or should we replace it with a technological contrivance? As discussed above, we have followed Machover in choosing to augment human creative capability, using Line-Storm.

For work done by Csikszentmihalyi, in Creativity: Flow and the Psychology of Discovery and Invention, ninety-one persons were interviewed who were deemed to have made significant contributions to their fields. Many others, who excluded themselves from his study, were skeptical of studying creativity or of participation in the study as being worthy of their time, and some insisted they were too busy being creative to stop and talk about it. A direct approach to enhancing creativity, Csikszentmihalyi writes, is less effective than are attempts to place the creative worker in a favorable environment; but beautiful surroundings are not what he means. The creative worker creates an environment conducive to creative thought and work, despite otherwise unfavorable surroundings; "they manage to give their surroundings a personal pattern". On the other hand, he denies there is proof that a person needs "delightful" surroundings to engage in creative work. There are emotional aspects as well (creativity is not only cognition, taken as reasoning), and attention is the finite resource that allows the creative person to find problems to solve that have gone unrecognized until they find them.
Sternberg, writing of what is known about creativity, iterates two points: creativity is mostly "domain-specific," and it is partly independent of measured intelligence quotient (IQ). Much ink, including that of Thoreau, has been spilled comparing creativity to play. Play does not need the context of a game, to be play. Play may be contrasted with the world of production and work. In attempting to provide an immersive experience conducive to the presence and engagement of the creative worker, we recognize the worker-even a solitary worker writing alone in a micro-environment they have fashioned to their needs-as engaging in a performance. The act of sitting down to write a poem might be viewed by some poets as requiring a "smooth" execution; but others will likely view it as Schechner has described the "actual" and the roughness of the performance of writing the poem as "the genuine meeting between performer and problem"; having a sense of presence and engagement is a desirable state. Line-Storm is interactive. It augments an ordinary pencil and an ordinary pad of paper, adding new interaction possibilities, new affordances. It responds to the person engaged in using it-and it is immersive. The headphones may make it "easy to forget the outside world," allowing the user to "concentrate completely" on the writing task . Line-Storm is an attempt at providing creative workers with a new tool. Citing Edward Tenner, Runco cautions that tools do not have to be poorly made or poorly designed or have "an undesirable feature, to cause problems" involving either the creative worker or others. Combining technology with art and art works does not, in itself, enhance creativity. Heidegger inquired into whether the common conception of technology, in which technology is neutral and independent of its uses, was supportable, and found the essence of technology to be a view of the world and all that is in it, as resource to be put to use, or standing-reserve. Han, in In the Swarm and Psychopolitics, and Bohme, in Invasive Technification, question the place of technologies in our lives, and the role of the associated, technological, perspective in dominating other forms of life. Lucas observed of the world depicted in his 1971 film, THX 1138, that "nobody was having any fun, but no one was unhappy". We have made a new piece of technology that is based on fun. We tread softly when we attempt to bring new technologies into the practice of writing or drawing by hand with pencil and paper. Dan Ariely posed a thought experiment, in his 2010 book The Upside of Irrationality, in which the reader was asked to imagine how a large cash reward might motivate them to greater creativity. After he considered the experimental evidence, he wrote that money is a poor motivator to creative production. According to Ariely, it is not clear how much of our "mental activity" is under our "direct control," especially when we are working under pressure. Our system might prove more difficult to use than ordinary pencil and paper, and this is not in itself a problem for us because creativity is different than usability. In addition creativity may not be fully mechanizable. Creativity is not only randomness or sheer novelty; it requires filtering by an intelligence. Counterintuitive incentivisation may be called for when attempting to stimulate creativity. Making a task more difficult through the use of unusual tools, may stimulate creative production. 
Changing the affordances of a once-familiar taskscape, may be key to inducing creative thought, making one see a thing or activity in a new way. We present the details of our implementation of Line-Storm. We implemented the software interface and sound-synthesis engine of Line-Storm using Max/MSP, Version 7.2.3, 64-bit edition. Max/MSP is the mature, commercial successor to Miller Puckettes Pd (aka Pure Data) (https://puredata.info/downloads/pure-data), a free and opensource project. Like its predecessor, Max/MSP is a graphical programming environment. Objects in the Max/MSP GUI windows can be interconnected and otherwise manipulated inside patchers (graphical representations of program files in Max/MSP). A Max/MSP program or patcher appears generally as one or more objects connected by patch cords joining inputs and outputs. Inputs and outputs can be: symbolic, that is, textual; numeric; or signals running at an audio rate, typically 44.1 kHz. Max/MSP allows us to interface with a wider range of electronic devices, and we have elected to use the serial-port communication capabilities of Max/MSP to bring our sensor data into the Max/MSP application, as a control signal, through a serial port on our laptop computer. Max/MSP has further advantages over some other music-synthesis DAWs such as FM8; Max/MSP is programmable, and it is welldocumented. The sensor-fob, shown above, in Figure 17, comprises multiple PCB circuit-boards, powered by a lithium-polymer battery, and five solidcore, insulated copper wires soldered between two of the PCB circuitboards. The primary board is an Arduino Fio v3 microcontroller board. This type of Arduino board includes a socket into which an XBee radio transceiver module can be inserted (see Figure 21, below). The Fio v3 can control the inserted XBee radio transceiver module. We have a Digi International XBee radio transceiver module (type S1) inserted into the socket of our Fio v3. The Fio v3 has multiple GPIO/ADC (general purpose input/output or analog-to-digital converter) pins, three of which we have soldered wires to. These three wires are soldered at their ends, to an Adafruit ADXL335 3-axis accelerometer, to its three, analog output-signal pins. Two more pins, and two more wires, connect Vcc and GND on the Fio v3 and ADXL335. The wires are rigid; they both connect the boards and hold them in constant positions relative to each other, in a fixed orientation. See Figure 22, below, which shows the solder connections between the Fio v3 and the ADXL335. We chose to orient the boards parallel to each other, the top or front face of the Fio v3 facing the bottom or back face of the ADXL335. The axes of the ADXL335 3-axis accelerometer are oriented along the lengths, widths, and surface normals of both parallel boards. The x-axis of the sensor is oriented along the length of the Fio v3; the y-axis along the width, and the z-axis along the surface normal. The adhesive-tape connection of the sensor-fob to the stylus allows quick changes in the relative orientations of the stylus and sensor. When carrying out experiments involving human subjects, we maintained a constant relative orientation of sensor-fob to stylus, with the XBeeradio end of the sensor-fob pointing in the direction of the eraser on the pencil-stylus, as shown in Figure 16, above. We used an Adafruit ADXL335 3-axis accelerometer mounted to a breakout board (see Figure 20, which also shows solder points). 
The ADXL335 has an acceleration-measurement range of +/-3g, where g is the average acceleration due to gravity at Earths surface. While there was a similar model, available from Adafruit, which had a +/-5g range, we found that the ADXL335 +/-3g measurement range was sufficient for our system. Moving the sensor-fob, with its attached ADXL335 unit, in as violent a manner as we were able to do while holding it with a hand, we sometimes reached minimum and maximum sensor-output values, but not always; reaching these values was difficult. Lesser motions were well within the +/-3g range, giving sensor output values below the approximately 1000 maximum and above the approximately 0 (zero) minimum. Sensor output values, raw from the GPIO/ADC pins, range from 0 to approximately 1000, with a center value of approximately 500. This range is compatible with an 8-bit ADC, which the Fio v3 uses. Values below about 500 indicate negative accelerations relative to the corresponding sensor axis, while values above 500 indicate positive accelerations relative to the corresponding sensor axis. More programming and other details can be found in [FirstSecondAuthor]. We present the details of our experimental evaluation of Line-Storm, which involved research involving human participants. Our study involved participation by thirteen persons, but data for one of these participants was discarded, leaving twelve participants with valid data. The participant whose data we discarded had filled out the questionnaires by circling the value 3, in all cases but one, on both Line-Storm and control-condition questionnaires, strongly suggesting they did not follow directions or complete the forms in good faith. We had roughly half female and half male, including one who chose not to self-identify. Participant ages are ranged from 18 years to 34 years. We ran our experiments at two sites. We began advertising near the end of the Spring semester, and it seems likely many students were interested in obtaining extra credit at that time. In three days, we were able to run ten participants. We present and analyze the of our research study. We were unable to support our research hypothesis, that participants' sense of presence and engagement would be greater during the experimental, interactive condition than during the control, non-interactive condition. Because creativity and engagement is somewhat fleeing and vary from one person to another, this was not so unexpected . Using our abbreviation for control and experimental questionnaires' first item, measuring self-reported level of presence and engagement-PANDE for "present and engaged," a suffix of 0 (zero) referring to the control condition questionnaire item and a suffix of a 1 referring to the experimental condition questionnaire item-we found we could not reject the null hypothesis, PANDE0 = PANDE1. We compared mean self-reported level, of being "present and engaged" during the experimental and control conditions, using two statistical tests. We performed a paired-samples test using the Student's t distribution, suitable for small sample sizes from normal populations, and we performed a paired-samples test using the nonparametric, distribution-free Wilcoxon Signed Rank Test, suitable for symmetrical distributions of small samples. We performed these two statistical tests for all paired variables, that is, for every matched item on the control and experimental questionnaires. 
We did find a statistically significant difference between experimental and control conditions, for two questionnaire items, abbreviated NAT (for naturalness of interaction with the system) and ADJEXP (for adjustment to the system experience). Both these items were rated lower for the experimental condition than for the control condition, indicating participants found the experimental system interactions unnatural and had difficulty adjusting to using it, compared to the control condition. We performed Pearson correlations, and found several statistically significant correlations, discussed below. For example, those participants who reported they lost track of time during the experimental condition also tended to write more during the experimental condition. There was a non-significant correlation between losing track of time and word count during the control condition. One participant circled 3 for all questionnaire items on both the control and experimental questionnaires, except for one item for which this participant circled 3. We excluded this participant from our data analysis for this reason. They did not appear to follow the instructions for the experiment. When examining word counts (WC0 and WC1), we excluded two participants who drew pictures instead of writing, leaving a sample size of ten. We performed K-means clustering classification, discussed below. A group of participants responded differently to the experimental condition than did the rest of the participants. There was also a group who responded differently to the control condition than did the rest of the participants and we have termed these Preservers of Line-Storm. To perform our statistical analyses, used IBM's SPSS Statistics, Version 25 (https://www.ibm.com/products/spss-statistics), because it is an industry standard statistics processing application. Many tutorials are available online, in textual/graphical and video formats. SPSS also has links to user forums and documentation integrated into the application. • There were strong, significant (p<0.01) correlations between the initial, baseline level of a sense of presence and engagement and response items 4 (NAT) and 7 (ADJEXP), for both control and experimental conditions. • A sense of presence and engagement correlated strongly and significantly (p<0.01) with adjustment to the "control devices" (augmented stylus, augmented writing pad) (ADJCTL) for both control and experimental conditions. • There were strong, significant correlations between a sense of the naturalness of interactions with the system and baseline sense of presence and engagement, ease of adjustment to the system experience, and ease of adjustment to the control devices (ADJCTL), for both control and experimental conditions. • There was a group of participants who responded more favorably to the experimental condition than the rest of the participants (analysis performed using K-means clustering tests). This is significant for our experiments. • Those who wrote more in the control condition wrote more in the experimental condition. This is also significant for our experiments. • The more participants lost track of time, in the experimental condition, the more they wrote-or vice versa. This is significant for our experiments. • We found correlations between a sense of presence and engagement during the experimental condition (PANDE1), and the degree to which a participant lost track of time while using the system during the experimental condition. 
This is significant for our experiment, and we call these participants as the Preservers of Line-Storm. We discuss our experimental . We found that participants found their interactions with the system more natural during the control condition, and less natural during the experimental condition. Participants adjusted to the system experience more quickly during the control condition than they did during the experimental condition. Our findings indicate that there appears to have been a significant group of participants, roughly half the participants, the Preservers of LineStorm (see below, What Line-Storm Is: Equipment, Art Work, and Preservation), who became immersed during the experimental condition. These participants tended to write more during the control and experimental conditions, they tended to experience the sound components of the system (control and experimental) in a way that led to their reporting less prominence of the visual aspects of the system, and they tended to lose track of time during the experimental condition. It seems likely that attention fluctuates over time, and the mind naturally wanders and returns. Future work would include investigation of the ways such natural fluctuations in attention would be relevant to our work. Considering the ordering of questionnaire completion was nearly always the same (Demographic, Experimental, Control), natural fluctuations in attention (and presence and engagement) may help to explain our . Considering how we might have been wearing out our participants, by making demands upon their attentional resources, future work might be done that minimized attentional fatigue. When interpreting our , we are interpreting the interpretations of our study participants. Their interpretations played a role in the coming-into-being of Line-Storm. Martin Heidegger wrote that, "Just as a work cannot be without being created, but is essentially in need of creators, so what is created cannot itself come into being without those who preserve it". With reference to Heideggers essay, "The Origin of the Work of Art," we see two ways Line-Storm can be approached. First, as equipment, Line-Storm is a tool we have made for a purpose. As equipment, Line-Storm has a thingly character and an equipmental character. As a thing, Line-Storm exists as an object that can be encountered in the world, like a rock. As equipment, LineStorm is experienced as part of a matrix of all equipment. It exists in a context of equipment, purposes, and activities. As ready-to-hand, it withdraws, becomes transparent, and the person using it is engrossed in the work. As present-at-hand, Line-Storm sticks out as a thing, instead of becoming transparent and allowing the user to become engrossed in the work. As art work, Line-Storm is created, not only made. It has a thingly character and a workly character. Preservation "does not degrade it to the role of a stimulator of [mere lived experience]". It is not a tool in this case, where it would be "released beyond itself, to be used up in usefulness". As art work, Line-Storm is bringing forth of the work that there lies this offering that it be. These two modes of being that Line-Storm permits, refer to two ways of being with LineStorm: as technician or as performer. For a technician, Line-Storm is equipment. Equipment serves a purpose and is not an end but a means. 
As Schopenhauer declared, equipment and "all other human works," that are not art works, "exist only for the maintenance and relief of our existence"; art works "exist for their own sake". When we evaluated Line-Storm in terms of its capacity for leading to a possible increase in self-reported presence and engagement, we treated it as equipment. Yet some participants, while using Line-Storm, treated it not as equipment but as art work. Hence, we will refer to the group of participants who gave higher ratings to Line-Storm, and who wrote more while using it, as the Preservers of Line-Storm. The Preservers let Line-Storm be what it is. Without the Preservers, Line-Storm "cannot itself come into being". Preservers here simply means to us are those participants who seem to like Line-Storm and used it enough so that to be creative in their own mind. Line-Storm permits itself to be used in performance. A performance with Line-Storm could be understood to point out the overlapping of sensory or perceptual modes commonly thought of as separate. Seeing, hearing, moving, and proprioception involve cross-modal transfer. The sound and visual aspects overlap more strongly in Line-Storm than in ordinary writing or drawing, because of the amplification of what had been quiet sounds, i.e. the sound made by stylus on the paper which was amplified and merged with other sounds, such as thunderstorm. We perceive an object or art work as affordances, the tripartite interrelation of environment, organism, and activity. Because of cross-modal transfer, sound, visual, and tactile affordances play together in Line-Storm. When we look, we see what we can do-although, as with a wooden sculpture-puzzle, we may not immediately see all affordances. To solve the puzzle is to discover hidden affordances. Line-Storm makes affordances prominent, in the writing stylus and writing pad, that may not have been apparent: their sound-producing capabilities, which can be used in a performance. Preservers of LineStorm find its affordances. This is a kind of knowing; the work of preserving a work is know-how. A performance with Line-Storm, as preservation, is a knowing and a realization of the affordances of Line-Storm, without which it cannot be Line-Storm. Preservation is the fulfillment of the work that is Line-Storm. Both preservers and we, as creators of Line-Storm, belong essentially to its creation. In 2006, a living Japanese maple tree was augmented with nitinol wires and optical and audio sensors. The tree moved its branches, using the nitinol "muscle" wires, in response to the presence of people detected by its sensors. Coffin discussed this art work in the context of Heideggerian phenomenology, and discussed two different implementations of the tree totem, Breeze. In one, Breeze was constructed using a live Japanese maple. The flexible branches of the maple, and the bushy shape of the tree, hid the mechanical components from view, and attendees of the festival, where Breeze was exhibited, tended to interact freely with the tree and differently than during the second exhibition. At the second exhibition, a mountain laurel was used, whose stiffer limbs and more open shape put the mechanical components of the installation on display. When the mechanical components were hidden from view, during the first exhibit, interactions took place with people treating the tree as ready-to-hand. 
The mechanical components of Breeze withdrew, became transparent, and the people at the first exhibit interacted with Breeze as an interactive art work. When a tool is present-at-hand, the tool exists differently for the person. A broken hammer is present-at-hand as an object, not ready-to-hand as a useful tool. A broken hammer does not allow the person to engage in the work but is open for inspection. During the second exhibit, with the mechanical components poorly hidden by the tall, open shape of the mountain laurel, attendees at the exhibit tended to comment on the engineering of Breeze instead of interacting with it freely as had the attendees during the first exhibit. This is quite interesting for our implementation as well. Line-Storm appears to have existed differently for different research study participants. We propose that those who seemed to enjoy using Line-Storm engaged with it as ready-to-hand, while those who did not appear to enjoy using Line-Storm engaged with it as present-at-hand. For the former, Line-Storm became transparent and withdrew itself, allowing the participants to write or draw as well as they would have with an ordinary pencil and paper. For the latter, Line-Storm obtruded and interfered with the writing task. Those participants for whom Line-Storm was present-at-hand were not able to engage with the work but were distracted by the strange contraption, the mechanical and interactive aural properties of the device. We propose that Csikszentmihalyi's concept of flow implies engagement with tools and tasks in a ready-to-hand mode. A tool being present-at-hand is indicative of the absence of the flow state. The peasant woman wears her shoes in the field. Only here are they what they are. They are all the more genuinely so, the less the peasant woman thinks about the shoes while she is at work, or looks at them at all, or is even aware of them. She stands and walks in them. That is how shoes actually serve. It is in this process of the use of equipment that we must actually encounter the character of equipment. Line-Storm is equipment, when a participant is using it as equipment. Its equipmental character manifests when the user is engaged with the work and not distracted by the system's appearance or feel. Its "thingly" character is manifest for a participant who is not having fun using it, is not in a flow state, and is not present and engaged while using it. We suggest that these latter participants did not experience Line-Storm. One component of Line-Storm is its interactivity. A certain type of actions and intentions are required on the part of the participant for Line-Storm to be what it is. Coffin wrote of Breeze, and of interactive systems more generally, that interactions with them may be "effortless, unscripted, emergent, and engaged" if the mapping of responses is well done with respect to our "meaning-making sensibilities". Our goal that Line-Storm would provide for increased presence and engagement was not met for all participants. Still, some participants appeared to have had fun and play while using Line-Storm. Some of these participants likely experienced Line-Storm as art work, and so we would have found preservers for our work, who brought out its workly character, and who would belong to it just as we belong to it as its creators. This justifies our efforts. The participants' prior knowledge is relevant when considering their responses to Line-Storm. 
Line-Storm, as a tool, exists not by itself but among a constellation of related tools; those related tools, some of which a given participant may be familiar with, and some of which they may not be familiar with, allow Line-Storm "to be this equipment that it is". A participant's degree of familiarity with related tools, such as an envelope, stamp, mail-box, and pencil and paper, help to determine what Line-Storm is for that participant. We see a nearly significant (r = 0.620, p = 0.056) correlation of current writing or drawing (by hand) practice to number of words written while using Line-Storm. We think that, this correlation indicates participants who regularly wrote or drew by hand were better able to experience Line-Storm as it was intended, and see its' authenticity. We conceived our work, initially, as an entertainment system, to be used for one's own pleasure while writing in a journal. We followed that by hoping to jolt users out of complacent acquaintance with paper and pencil and present the writing tools and writing situation as if for the first time, to encourage the practice of writing and sending handwritten letters. We finished the work by attempting to enhance human creativity when working with a writing stylus and paper writing pad, by increasing participants' sense of presence and engagement. We found correlations and K-means clustering that did suggest there was a group of participants who responded favorably to Line-Storm. We expected that a direct approach to enhancing creativity may/would fail; we attempted to construct a system the use of which would be an end and not only a means, and hoped this might lead, indirectly, to enhancing creativity by encouraging play and playfulness. We provided a ludic environment for creative work, in which some users would focus on using the system, not expecting an outcome and will create their own play/outcome and accept what emerges or not-no quest, no winners, no points or gold to deliver outcome-based satisfaction. In a ludic system, therefore, the creative work (outcome is what it is) and the would be a secondary consideration and may emerge by itself, an indirect of the use of the system. We hoped participants in our experiments would find themselves "losing themselves," and a group of participants did tend to lose track of time while they used or performed with Line-Storm. We believe these participants became more absorbed while using the experimental system, exactly our intention. Losing oneself while using the system might open one up to creative energies, thoughts, feelings, and actions that would ordinarily not occur, as Nietzsche wrote. Future work would include the following items, listed as follows: (a) Clean up Max/MSP code and put functionality back in place, that would allow the triggering of multiple thunderstorm samples in quick succession. We would reinstate the capability of triggering multiple thunderstorm samples in rapid sequence. A committee member responded positively to a version of our experimental system that did trigger multiple thunderstorm samples in rapid sequence, and their gratifyingly positive response to our system was what we had hoped to bring to our research participants. (b) Make a cover for the electronic components on the sensor-fob. This would provide better aesthetic appeal and would minimize distractions from things like blinking lights and hanging wires. 
(c) Extension to a mobile platform -a mobile platform would use a smartphone or tablet and would not require a laptop computer, which limits the places our system could be used. (d) Investigate the use of more miniaturized RF components. We do not need the relatively large antennae of the XBee radios, which can operate over a larger distance than we envision for the use of our system. Bluetooth would provide the necessary range. (e) Investigate using more miniaturized micro-controller boards. The Arduino Fio v3 was the smallest board we found, when we began our work, with all the functionality we needed. A smaller board would make a less intrusive sensor-fob. (f) Experiment with different styli, including a paintbrush, a child's crayon, a marker, a piece of chalk, a paint roller, and so on. Attaching a contact microphone to the surfaces used with many of these would probably produce a suitable-strength vibration for use with our system. (g) Experiment with a baton-type stylus like the one used by Paradiso and Machover in the Brain Opera. (h) Investigate a wrist-worn appliance to augment or replace the motion-tracking capability of the stylus sensor-fob. (i) Gather more data involving a larger sample size. (j) Vary the type of music listened to during the control condition. (k) Consider ways to run experiments without wearing out participants by making excessive demands on their attention. (l) Experiment with a multi-user system. Users could be situated in the same place or could communicate via a computer network such as the internet. (m) Collaborate with a composer of music or a composer of electroacoustic music. We discarded our attempts at constructing an interactive generator of electroacoustic music. Collaborating with a person skilled in the creation of electronic music would be of great benefit in future as well. Specifically, such collaboration would improve the system by mapping accelerometer data to sonic parameters in an effective way, something we were unable to do to our satisfaction. More data points will also help our analysis; we would have liked to have run more experiments. We used statistical techniques that were designed for small samples, the Student's t Test and the Wilcoxon Signed-Rank Test. Our K-means clustering analysis would have been improved, we believe, by the addition of more data points. Finally, it has occurred to us that augmentation itself is innovative. Augmenting means a possibility that is completely different than the original. The Preservers of Line-Storm, in our experiments, showed that there is promise for our augmented interface-it may not enhance the sense of presence and engagement and lead to more creative writing, which was the hypothesis we had hoped for, but, as we discussed, creativity is difficult to capture anyway. Still, our work provided a completely different experience through augmented interaction to creative writing which enhanced the user experience, which simple creative writing, using ordinary pen and paper, or word-processing software cannot provide.
Interactive stylus based sound incorporating writing system
1,494
scitldr
Language style transfer is the problem of migrating the content of a source sentence to a target style. In many applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. In this paper, we present an encoder-decoder framework under this problem setting. Each sentence is encoded into its content and style latent representations. By recombining the content with the target style, we can decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our proposed model on two tasks: sentiment modification of restaurant reviews, and dialog response revision with a romantic style. Style transfer is a long-standing research problem that aims at migrating the content of a sample from a source style to a target style. Recently, great progress has been achieved by applying deep neural networks to redraw an image in a particular style BID7 BID10 BID2 BID20 BID12. However, until now very few approaches have been proposed for style transfer of natural language sentences, i.e., changing the style or genre of a sentence while preserving its semantic content. For example, we would like a system that can convert a given text piece in the language of Shakespeare BID14; or rewrite product reviews with a favored sentiment BID17.One important issue on language style transfer is that parallel data are unavailable. For instance, considering the task of rewriting a negative review of a product to its counterpart with a positive sentiment, we can hardly find paired data that describe the same content. Yet, many text generation frameworks require parallel data, such as the popular sequence-to-sequence model in machine translation and document summarization BID16, and thus are not applicable under this scenario. A few recent approaches have been proposed for style transfer with non-parallel data BID4 BID17. Their key idea is to learn a latent representation of the content disentangled from the source style, and then recombine it with the target style to generate the corresponding sentence. All the above approaches assume that data have only two styles, and their task is to transfer sentences from one style to the other. However, in many practical settings, we may deal with sentences in more than two styles. Taking the review sentiment modification as an example again, some reviews may be neither positive nor negative, but in a neutral style. Moreover, even reviews considered negative can be categorized into more fine-grained sentiments, such as anger, sadness, boredom and other negative styles. It may be beneficial if such styles are treated differently. As another example, consider a chatbot with a coherent persona, which has a consistent language behavior and interaction style BID9. A simple framework for this task is to first use human dialog data to train a chatbot system, such as a retrieval-based dialog model BID11, and then transfer the output responses with a language style transfer model so that multi-round responses always have a consistent style. 
Note that the human dialog sentences are collected from different users, and users' expressions of the content and tones may be in different personalized characteristics. Thus the output responses retrieved from the dialog model may have the language style of any user. Simply treating the responses with a single style and employing the existing style transfer models would lead to unsatisfactory . Hence, in this paper, we study the setting of language style transfer in which the source data to be transferred can have various (and possibly unknown) styles. Another challenging problem in language style transfer is that the transferred sentence should preserve the content of the original sentence disentangled from its style. To tackle this problem, BID17 assumed the source domain and the target domain share the same latent content space, and trained their model by aligning these two latent spaces. BID4 constrained that the latent content representation of the original sentence could be inferred from the transferred sentence. However, these attempts considered content modification in the latent content space but not the sentence space. In this work, we develop an encoder-decoder framework that can transfer a sentence from a source domain to its counterpart in a target domain. The training data in the two domains are non-parallel, and sentences in the source domain can have arbitrary language styles but those in the target domain are with a consensus style. We encode each sentence into two latent representations, one for the content disentangled from the style, and the other for the style. Intuitively, if a source sentence is considered having the target style with a high probability, its style representation should be close to the target style representation. To make use of this idea, we enforce that the discrepancy between an arbitrary style representation and the target style representation should be consistent with the closeness of its sentence style to the target style. A cycle consistency loss is further introduced to avoid content change by directly considering the transferred sentence. Its idea is that the generated sentence, when put back into the encoder and recombined with its original style representation, can recover the original sentence. We evaluate the performance of our proposed model on two tasks. The first is the sentiment modification task with its source domain containing more than one sentiments, and the second is to transfer general dialog responses to a romantic style. Most style transfer approaches in the literatures focus on vision data, and some of them are also designed for the non-parallel data setting. BID7 proposed to disentangle the content representations from image attributes, and control the image generation by manipulating the graphics code that encodes the attribute information. BID2 used the Convolutional Neural Networks (CNNs) to learn separated representations of the image content and style, and then created the new image from their combination. Some approaches have been proposed to align the two data domains with the idea of the generative adversarial networks (GAN) BID3. BID10 proposed the coupled GAN framework to learn a joint distribution of multidomain data by the weight-sharing constraint. BID20 introduced a cycle consistency loss, which minimizes the gap between the transferred images and the original ones. However, due to the discreteness of the natural language, this loss function cannot be directly applied on text data. 
In our work, we show how the idea of cycle consistency can be used on text data. Only a small number of approaches have been proposed for language style transfer. To handle the non-parallel data problem, BID14 revised the latent representation of a sentence in a certain direction guided by a classifier, so that the decoded sentence imitates those favored by the classifier. BID1 encoded textual property values with embedding vectors, and adopted a conditioned language model to generate sentences satisfying the specified content and style properties. BID4 used the variational auto-encoder (VAE) to encode the sentence into a latent content representation disentangled from the source style, and then recombine it with the target style to generate its counterpart, An additional distribution is added to enforce that the generated sentence and the original sentence share the same latent content representation. BID17 considered transferring between two styles simultaneously. Specifically, they utilized adversarial training in the Professor-Forcing framework BID8, to align the generated sentences from one style to the data domain of the other style. We also adopt similar adversarial training in our model. However, since we assume the source domain contains data with various and possibly unknown styles, we cannot align data from the target domain to the source domain as in BID17. We now formally present our problem formulation. Suppose there are two data domains, one source domain X 1 in which each sentence may have its own language style, and one target domain X 2 consisting of data with the same language style. During training, we observe n samples from X 1 and m samples from X 2, denoted as X 1 = {x DISPLAYFORM0 Note that we can hardly find a sentence pair (x DISPLAYFORM1 2) that describes the same content. Our task is to design a model to learn from these non-parallel training data such that for an unseen testing sentence x ∈ X 1, we can transfer it into its counterpartx ∈ X 2, wherex should preserve the content of x but with the language style in X 2. Similar to BID17; BID4, we assume each sentence x can be decomposed into two representations: one is the style representation y ∈ Y, and the other is the content representation z ∈ Z, which is disentangled from its style. Each sentence x (i) 1 ∈ X 1 has its individual style y DISPLAYFORM0 1, while all the sentences x (i) 2 ∈ X 2 share the same style, denoted as y *. Our model is built upon the encoder-decoder framework. In the encoding module, we assume that z and y of a sentence x can be obtained through two encoding functions E z (x) and E y (x) respectively: DISPLAYFORM1 DISPLAYFORM2 where E y (x) = 1 {x∈X1} · g(x) + 1 {x∈X2} · y, and 1 {·} is an indicator function. When a sentencex comes from source domain, we use a function g(x) to encode its style representation. For x from target domain, a shared style representation y is used. Both y * and parameters in g(x) are learnt jointly together with other parameters in our model. For the decoding module, we first employ a reconstruction loss to encourage that the sentence from the decoding function given z and y of a sentence x can well reconstruct x itself. Here, we use a probabilistic generator G as the decoding function and the reconstruction loss is: DISPLAYFORM3 where θ denotes the parameter of the corresponding module. 
To enable style transfer using non-parallel training data, we enforce that for a sample x 1 ∈ X 1, its decoded sequence using G given its content representation z and the target style y * should be in the target domain X 2. We use the idea of GAN BID3 ) and introduce an adversarial loss to be minimized in decoding. The goal of the discriminator D is to distinguish between G(z 1, y *) and G(z 2, y *), while the generator tries to bewilder the discriminator: DISPLAYFORM4 As discussed in Section 2, since our source domain X 1 contains sentences with various unknown language styles but not a consistent style, it is impossible for us to apply a discriminator to determine whether a sentence transferred from X 2 is aligned in the domain X 1 as in BID17.During optimization, we adopt the continuous approximation in BID4 for gradients propagation in adversarial training over discrete sentences. That is, instead of feeding a single word as the input to the generator, we use the approximation averaging word embeddings by a multinomial distribution. This distribution is computed as softmax(o t /γ), where o t is the logit vector output by the generator at time step t, γ > 0 is a temperature parameter. Next, we follow the framework of Professor-Forcing BID8, which matches two sequences of output words using a discriminator D. Specifically, we have one kind of sequences G(z 2, y *) teacher-forced by the ground-truth sample x 2 ∈ X 2, and the other one G(z 1, y *) with z 1 obtained from samples in X 1, in which the input at each time step is self-generated by the previous continuous approximation. However, the above encoder-decoder framework is under-constrained. First, for a sample x 1 ∈ X 1, y 1 can have an arbitrary value that minimizes the above losses in Equation 3 and 4, which may not DISPLAYFORM5 Figure 1: Basic model with the style discrepancy loss. Solid lines: encode and decode the sample itself; dash lines: transfer DISPLAYFORM6 Figure 2: Proposed cycle consistency loss (can be applied for samples in X 2 similarly).necessarily capture the sentence style. This will affect the other decomposed part z, making it not fully represent the content which should be invariant with the style. Second, the discriminator can only encourage the generated sentence to be aligned with the target domain X 2, but cannot guarantee to keep the content of the source sentence intact. To address the first problem, we propose a style discrepancy loss, to constrain that the learnt y should have its distance from y * guided by another discriminator which evaluates the closeness of the sentence style to the target style. For the second problem, we get inspired by the idea in BID20 and introduce a cycle consistency loss applicable to word sequence, which requires that the generated sentencex can be transferred back to the original sentence x. By using a portion of the training data, we can first train a discriminator D s to predict whether a given sentence x has the target language style with an output probability, denoted as p Ds (x ∈ X 2). When learning the decomposed style representation y 1 for a sample x 1 ∈ X 1, we enforce that the discrepancy between this style representation and the target style representation y *, should be consistent with the output probability from D s. 
Specifically, since the styles are represented with embedding vectors, we measure the style discrepancy using the 2 norm: DISPLAYFORM0 Intuitively, if a sentence has a larger probability to be considered having the target style, its style representation should be closer to the target style representation y *. Thus, we would like to have d(y 1, y *) positively correlated with 1 − p Ds (x 1 ∈ X 2). To incorporate this idea in our model, we use a probability density function q(y 1, y *), and define the style discrepancy loss as: DISPLAYFORM1 where f (·) is a valid probability density function. p Ds (x 1 ∈ X 2) is pre-trained and then fixed. If a sentence x 1 has a large p Ds (x 1 ∈ X 2), incorporating the above loss into the encoder-decoder framework will encourage a large q(y 1, y *) and hence a small d(y 1, y *), which means y 1 would be close to y *. In our experiment, we instantiate q(y 1, y *) with the standard normal distribution for simplicity: DISPLAYFORM2 However, better probability density functions can be used if we have some prior knowledge of the style distribution. With Equation 8, the style discrepancy loss can be equivalently minimized by: DISPLAYFORM3 3.4 CYCLE CONSISTENCY LOSS Inspired by BID20, we require that a sentence transferred by the generator G should preserve the content of its original sentence, and thus it should have the capacity to recover the original sentence in a cyclic manner. For a sample x 1 ∈ X 1 with its transferred sentencex 1 having the target style y *, we encodex 1 and combine its contentz 1 with its original style y 1 for decoding. We should expect that with a high probability, the original sentence x 1 is generated. For a sample x 2 ∈ X 2, though we do not aim to change its language style in our task, we can still compute its cycle consistency loss for the purpose of additional regularization. We first choose an arbitrary style y 1 obtained from a sentence in X 1, and transfer x 2 into this y 1 style. Next, we put this generated sentence into the encoder-decoder model with the style y *, and the original sentence x 2 should be generated. Formally, the cycle consistency is: DISPLAYFORM4 3.5 FULL OBJECTIVE An illustration of our basic model with the style discrepancy loss is shown in Figure 1 and the full model is combined with the cycle consistency loss shown in Figure 2. To summarize, the full loss function of our model is: DISPLAYFORM5 where λ 1, λ 2, λ 3 are parameters balancing the relative importance of the different loss parts. The overall training objective is a minmax game played among the encoder E z, E y, generator G and discriminator D: DISPLAYFORM6 We implement the encoder E z using an RNN with the last hidden state as the content representation, and the style encoder g(x) using a CNN with the output representation of the last layer as the style representation. The generator G is an RNN that takes the concatenation of the content and style representation as the initial hidden state. The discriminator D and the pre-trained discriminator D s used in the style discrepancy loss are CNNs with the similar network structure in E y followed by a sigmoid output layer. Yelp: Raw data are from the Yelp Dataset Challenge Round 10, which are restaurant reviews on Yelp. Generally, reviews rated with 4 or 5 stars are considered positive, 1 or 2 stars are negative, and 3 stars are neutral. For positive and negative reviews, we use the processed data released by BID17. For neutral reviews, we follow similar steps in BID17 to process and select the data. 
We first filter out neutral reviews (rated with 3 stars and categorized with the keyword 'restaurant') with the length exceeding 15 or less than 3. Then, data selection in is used to ensure a large enough vocabulary overlap between neutral data and data in BID17. Afterwards, we sample 500k sentences from the ing dataset as the neutral data. We use the positive data as the target style domain. Based on the three classes of data, we construct two datasets with multiple styles:• Positive+Negative (Pos+Neg): we add different numbers of positive data (50k, 100k, 150k) into the negative data, so that the source domain contains data with two sentiments.• Neutral+Negative (Neu+Neg): we combine neutral (50k, 100k, 150k) and negative data together. We consider these datasets are harder to learn from. For the Pos+Neg dataset, we can make use of a pre-trained classifier to possibly filter out some positive data so that most of the source data have the same style and the model in BID17 can work. However, the neutral data cannot be removed in this way. Also, most of the real data may be in the neutral sentiment, and we want to see if such sentences can be transferred well. Details about the data statistics can be found in TAB6 in the Appendix. Chat: We use sentences from a real Chinese dialog dataset as the source domain. Users can chat with various personalized language styles, which are not easy to be classified into one of the three sentiments as in Yelp. Romantic sentences are collected from several online novel websites and filtered by human annotators. Our task is to transfer the dialog sentences with a romantic style, characterized by the selected romantic sentences. TAB7 in the Appendix shows detailed statistics about this dataset. We implement our model using Tensorflow BID0. We use GRU as the encoder and generation cells in our encoder-decoder framework. Dropout BID18 ) is applied in GRUs and the dropout probability is set to 0.5. Throughout our experiments, we set the dimension of the word embedding, content representation and style representation as 200, 1000 and 500 respectively. For the style encoder g(x), we follow the CNN architecture in BID5, and use filter sizes of 200 × {1, 2, 3, 4, 5} with 100 feature maps each, so that the ing output layer is of size 500, i.e., the dimension of the style representation. The pre-trained discriminator D s is implemented similar to g(x) but using filter sizes 200 × {2, 3, 4, 5} with 250 feature maps each. Statistics of data used to pre-train D s are shown in TAB8 and TAB0 in the Appendix. The testing accuracy of the pre-trained D s is 95.82% for Yelp and 87.72% for Chat respectively. We further set the balancing parameters λ 1 = λ 2 = 1, λ 3 = 5, and train the model using the Adam optimizer BID6 with the learning rate 10 −4. All input sentences are padded so that they have the same length 20 for Yelp and 35 for Chat. Furthermore, we use the pre-trained word embeddings Glove for Yelp and the Chinese word embeddings trained on a large amount of Chinese news data for Chat when training the classifier. We compare our method with BID17 which is the state-of-the-art language style transfer model with non-parallel data, and we name as Style Transfer Baseline (STB). As described in Section 2 and 3, STB is built upon an auto-encoder framework. It focuses on transferring sentences from one style to the other. The text styles are represented by two embedding vectors. 
It assumes source domain and target domain share a content space, and relies on adversarial training methods to align content spaces of two domains. We keep the configurations of the modules in STB, such as the encoder, decoder and discriminator, the same as ours for a fair comparison. Following BID17, we use a model-based evaluation metric. Specifically, we use a pretrained evaluation classifier to classify whether the transferred sentence has the correct style. The classifier is implemented same as the discriminator D s. Statistics of the data used for the evaluation classifier are shown in TAB0 in the Appendix. The testing accuracy of evaluation classifiers is 95.36% for Yelp and 87.05% for Chat. We repeat the training three times for each experiment setting and report the mean accuracy on the testing data with their standard deviation. We first perform experiments on the source data containing both positive and negative reviews. In this setting, we specifically compare two versions of both STB and our model, one with the cycle consistency loss and one without, to validate the effectiveness of the cycle consistency loss 1. Results are shown in TAB0. It can be seen that incorporating the cycle consistency loss improves the performance for both STB and our proposed model consistently. We further manually examine the generated sentences for a detailed study of the various methods. TAB1 shows a few samples for the above setting with 150k positive samples used. Overall, our full model can generate grammatically correct positive reviews without changing the original content in more cases than the other methods. For some simple sentences such as the first example, all models perform well. For the second example in which the input sentence is more complex, both versions of STB and our basic model without the cycle consistency loss cannot generate fluent sentences, but our full model still succeeds. However, our model also suffers some mistakes as shown in the third example. Though it successfully makes the sentence positive, some additional information about the food is added, which is not discussed in the original sentence. Original Sentence service has gotten worse and worse at this location.STB service is great for the family and family. STB (with Cyc) service has always great and at this location. Ours (without Cyc) service has been better than the best experience. Ours service was super friendly and food was great.Next, we compare the of STB and our proposed method in TAB0. As the number of positive sentences in the source data increases, the average performance of both versions of STB decreases drastically. This is reasonable because STB introduces a discriminator to align the sentences from the target domain back to the source domain, and when the source domain contains more positive samples, it is hard to find a good alignment to the source domain. Meanwhile the performance of our model, even the basic one without the cycle consistency loss, does not fluctuate much with the increase of the number of positive samples, showing that our model is not that sensitive to the source data containing more than one sentiments. Overall, our model with the cycle consistency loss performs the best. The above setting is not challenging enough, because we can use a pre-trained discriminator similar to D s in our model, to remove those samples classified as positive with high probabilities, so that only sentences with a less positive sentiment remain in the source domain. 
Thus, we test our second dataset which combines neutral reviews and negative reviews as the source domain. In this setting, in case that some positive sentences exist in those neutral reviews, when STB is trained, we use the same pre-trained discriminator in our model to filter out samples classified as positive with probabilities larger than 0.9. In comparison, our model utilizes all the data, since it naturally allows for those data with styles similar to the target style. In the following, we report and analyze both STB and our model with the cycle consistency loss added. The experimental in TAB2 show that STB (with Cyc) suffers a large performance drop with 150k neutral data mixed in the source domain, while our model remains stable. In real applications, there may be only a small amount of data in the target domain. To simulate this scenario, we limit the amount of the target data (randomly sampled from the positive data) used for training, and evaluate the robustness of the compared methods. TAB3 shows the experimental . It is surprising to see that both methods obtain relatively steady accuracies with different numbers of target samples. Yet, our model surpasses STB (with Cyc) in all the cases. As in the Yelp experiment, we vary the number of target sentences to test the robustness of the compared methods. The experimental are shown in TAB4. As can be seen, STB (with Cyc) obtains a relatively low performance with only 10k target samples, and as more target samples are used, its performance increases. However, the accuracy of our model is relatively high even with 10k target samples used, and remains stable in all the cases. Thus, our model achieves better performance as well as stronger robustness on Chat. A few examples are shown in Table 6. We can see that our model generally successfully transfers the sentence into a romantic style with some romantic phrases used. Table 6: Example sentences on Chat transferred into a romantic style. English translations are provided (* denotes that the sentence has grammar mistakes in Chinese).Original Sentence 回眸一笑 就 好 It is enough to look back and smile STB (with Cyc) 回眸一笑 就 好 了 It would be just fine to look back and smile Ours 回眸一笑, 勿念 。 Look back and smile, please do not miss me. Original Sentence 得过且过 吧! Just live with it! STB (with Cyc) 想不开 吧, 我 的 吧 。 I just take things too hard. * Ours 爱到深处, 随遇而安 。 Love to the depths, enjoy myself wherever I am. Original Sentence 自己 的 幸福 给 别人 了 Give up your happiness to others STB (with Cyc) 自己 的 幸福 给 别人, 你 的 。 Give up your happiness to others. * Ours 自己 的 幸福 是 自己, 自己 的 。 Leave some happiness to yourself, yourself. In this paper, we present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles. Each sentence is encoded into two latent representations, one corresponding to its content disentangled from the style and and the other representing the style only. By recombining the content with the target style, we can decode a sentence aligned in the target domain. Specifically, we propose two loss functions, i.e., the style discrepancy loss and the cycle consistency loss, to adequately constrain the encoding and decoding functions. The style discrepancy loss is to enforce a properly encoded style representation while the cycle consistency loss is used to ensure that the style-transferred sentences can be transferred back to their original sentences. 
Experimental on two tasks demonstrate that our proposed method outperforms the state-of-the-art style transfer method BID17 We randomly select 200 test samples from Yelp and perform human evaluations on four aspects of the : content: estimates if the content of an input sentence is preserved in a transferred sentence; content rating has 0 (changed), 1 (synonym substitution or partially changed), and 2 (unchanged); sentiment: estimates if the sentiment of a transferred sentence is consistent with the target sentiment; sentiment rating has 0 (unchanged and wrong), 1 (changed but wrong), 2 (correct); fluency: estimates the fluency of transferred sentences; fluency is rated from 1 (unreadable) to 4 (perfect); overall: estimates the overall quality of transferred sentences; overall rating ranges from 1 (failed) to 4 (perfect).We hired five annotators and averaged their evaluations. TAB0 shows on Yelp when the source domain contains not only negative sentences but also 150k positive sentences (row 3 in TAB0), and TAB0 shows on Yelp when the target domain contains only 100k positive sentences (row 1 in TAB3). As can be seen, our model is better in terms of sentiment accuracy and overall quality, which is consistent with the automatic evaluation . We experiment on revising modern text in the language of Shakespeare at the sentence-level as in BID14. Following their experimental setup, we collect 29388 sentences authored by Shakespeare and 54800 sentences from non-Shakespeare-authored works. The length of all the sentences ranges from 3 to 15. Statistics of data for training and evaluating the style transfer model are shown in TAB0, 14, and 15 in Section 6.1. Since the dataset is small, we train the discriminator D s using a subset of the data for training the style transfer model. The testing accuracy of D s is 87.6%. The evaluation classifier has a testing accuracy 88.7%.Our model achieves a classification accuracy of 95.1% and STB with cycle consistency loss achieves 94.1%. Following are some examples. Compared with STB, our model can generate sentences which are more fluent and have a higher probability to have a correct target style. However, we find that both STB and our model tend to generate short sentences and change the content of source sentences in more cases in this set of experiment than in the Yelp and Chat datasets. We conjecture this is caused by the scarcity of training data. Sentences in the Shakespeare's style form a vocabulary of 8559 words, but almost 60% of them appear less than 10 times. On the other hand, the source domain contains 19962 words, but there are only 5211 common words in these two vocabularies. Thus aligned words/phrases may not exist in the dataset.
We present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles.
1,495
scitldr
In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors. This downside originates from an invariance that cancels out in the global map. Here, we prove that our loss function eliminates this issue, i.e., the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss. We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST (i.e., a 60,000 × 784 matrix), demonstrating that our approach is practically applicable and rectifies previous LAEs' downsides. Ranking among the most widely-used and valuable statistical tools, Principal Component Analysis (PCA) represents a given set of data within a new orthogonal coordinate system in which the data are uncorrelated and the variance of the data along each orthogonal axis is successively ordered from the highest to lowest. The projection of data along each axis gives what are called principal components. Theoretically, eigendecomposition of the covariance matrix provides exactly such a transformation. For large data sets, however, classical decomposition techniques are infeasible and other numerical methods, such as least squares approximation schemes, are practically employed. An especially notable instance is the problem of dimensionality reduction, where only the largest principal components, as the best representatives of the data, are desired. Linear autoencoders (LAEs) are one such scheme for dimensionality reduction that is applicable to large data sets. An LAE with a single fully-connected and linear hidden layer and a Mean Squared Error (MSE) loss function can discover the linear subspace spanned by the principal components. This subspace is the same as the one spanned by the weights of the decoder. However, it fails to identify the exact principal directions. This is due to the fact that, when the encoder is transformed by some invertible matrix, transforming the decoder by the inverse of that matrix will yield no change in the loss. In other words, the loss possesses a symmetry under the action of a group of invertible matrices, so that directions (and orderings/permutations thereof) will not be discriminated. Early work connected LAEs and PCA and demonstrated the lack of identifiability of principal components. Several neural-network methods compute the exact eigenvectors, but they depend on either particular network structures or special optimization methods. It was recently observed that regularization causes the left singular vectors of the decoder to become the exact eigenvectors, but recovering them still requires an extra decomposition step. As has been pointed out, no existing method recovers the eigenvectors from an LAE in an optimization-independent way on a standard network; this work fills that void. Moreover, analyzing the loss surface for various architectures of linear/non-linear neural networks is a highly active and prominent area of research. Most of these works extend the results for shallow LAEs to more complex networks.
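The invariance described above is easy to verify numerically. The following sketch (illustrative names, not from the paper) checks that transforming the decoder by an invertible matrix and the encoder by its inverse leaves the MSE loss unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 10, 3, 100
X = rng.standard_normal((n, m))   # centered data, columns are samples
A = rng.standard_normal((n, p))   # decoder
B = rng.standard_normal((p, n))   # encoder

def mse_loss(A, B, X):
    return np.linalg.norm(X - A @ B @ X, "fro") ** 2

C = rng.standard_normal((p, p))   # any invertible p x p matrix
print(np.isclose(mse_loss(A, B, X),
                 mse_loss(A @ C, np.linalg.inv(C) @ B, X)))  # True
```

Because any invertible C yields the same loss value, gradient descent on the MSE loss has no reason to prefer the particular (A, B) whose columns are the ordered eigenvectors.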
However, most retain the original MSE loss, and they prove the same critical point characterization for their specific architecture of interest. Most notably, those results have been extended to deep linear networks and shallow ReLU networks. In contrast, in this work we pursue a loss with better loss surface properties. We propose a new loss function for performing PCA using LAEs. We show that with the proposed loss function, the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. The idea is simple: for identifying p principal directions, we build up a total loss function as a sum of p squared error losses, where the i-th loss function identifies only the first i principal directions. This approach breaks the symmetry, since minimizing the first loss results in the first principal direction, which forces the second loss to find the first and the second. This constraint is propagated through the rest of the losses, resulting in all p principal components being identified. For the new loss, we prove that all local minima are global minima. Consequently, the proposed loss function has both theoretical and practical implications. Theoretically, it provides a better understanding of the loss surface. Specifically, we show that any critical point of our loss L is a critical point of the original MSE loss but not vice versa, and conclude that L eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance). Given that the set of critical points of L is a subset of the critical points of the MSE loss, many of the previous results on loss surfaces of more complex networks likely extend. In light of the removal of undesirable global minima through L, examining more complex networks is certainly a very promising direction. As for practical consequences, we show that the loss and its gradients can be compactly vectorized so that their computational complexity is no different from that of the MSE loss. Therefore, the loss L can be used to perform PCA/SVD on large datasets using any method of optimization, such as Stochastic Gradient Descent (SGD). Chief among the compelling reasons to perform PCA/SVD using this method is that, in recent years, there have been unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images). The loss function we offer is attractive in terms of parallelizability and distributability, does not prescribe any single specific algorithm or implementation, and so stands to continue to benefit from the arms race between SGD and its competitors. More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance). The result is that the loss allows for PCA/SVD computation as a single optimization layer, akin to an instance of a fully differentiable building block in a NN pipeline, potentially as part of a much larger network.

Let X ∈ R^{n×m} and Y ∈ R^{n×m} be the input and output matrices, where m centered sample points, each n-dimensional, are stacked column-wise. Let x_j ∈ R^n and y_j ∈ R^n be the j-th sample input and output (i.e., the j-th columns of X and Y, respectively). The proposed loss is the sum of p squared-error terms,

L(A, B) = \sum_{i=1}^{p} \| Y - A\, I_{i;p}\, B X \|_F^2 ,

where ⟨·,·⟩_F and ‖·‖_F are the Frobenius inner product and norm, and I_{i;p} is a p × p matrix with all elements zero except the first i diagonal elements, which are one (or, equivalently, the matrix obtained by setting the last p − i diagonal elements of a p × p identity matrix to zero), e.g.,

I_{2;3} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} .

In what follows, we shall denote the transpose of a matrix M by M^⊤. Moreover, the matrices A ∈ R^{n×p} and B ∈ R^{p×n} can be viewed as the weights of the decoder and encoder parts of an LAE. The results are based on the following standard assumptions, which hold generically.

Assumption 1. For an input X and an output Y, let Σ_xx := XX^⊤, Σ_xy := XY^⊤, Σ_yx := Σ_xy^⊤ and Σ_yy := YY^⊤ be their sample covariance matrices. We assume: (i) the input and output data are centered (zero mean); (ii) Σ_xx, Σ_xy, Σ_yx and Σ_yy are positive definite (of full rank and invertible); (iii) the covariance matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy is of full rank with n distinct eigenvalues λ_1 > λ_2 > · · · > λ_n; (iv) the decoder matrix A has no zero columns.

Claim. The main result of this work, proved in Theorem 2, is as follows: if the above assumptions hold, then all the local minima of L(A, B) are achieved iff A and B are of the form stated there, where the i-th column of U_{1:p} is the unit eigenvector of Σ := Σ_yx Σ_xx^{-1} Σ_xy corresponding to the i-th largest eigenvalue and D_p is a diagonal matrix with nonzero diagonal elements. In other words, A contains ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima expressed in terms of λ_i, the i-th largest eigenvalue of Σ := Σ_yx Σ_xx^{-1} Σ_xy. In the case of an autoencoder (Y = X), Σ = Σ_xx. Finally, while L(A, B) in the given form contains O(p) matrix products, we will show that it can be evaluated with a constant number (less than 7) of matrix products, independent of the value of p.

In this paper, the underlying field is always R, and positive semidefinite matrices are symmetric by definition. The following constant matrices are used extensively throughout. The matrices T_p ∈ R^{p×p} and S_p ∈ R^{p×p} are defined (consistently with the properties established in Lemma 2) as T_p := diag(p, p − 1, ..., 1) and (S_p)_{ij} := p − max(i, j) + 1. Another matrix that will appear in the formulation is Ŝ_p, defined in terms of T_p^{-1}. Clearly, the diagonal matrix T_p is positive definite. As shown in Lemma 2, S_p and Ŝ_p are positive definite as well.

The general strategy to prove the above claim is as follows. First, the analytical gradients of the loss are derived in matrix form in Propositions 1 and 2. We compare the gradients with those of the original Minimum Square Error (MSE) loss. Next, we analyze the loss surface by solving the gradient equations, which yields the general structure of critical points based on the rank of the decoder matrix A. Next, we delineate several interesting properties of the critical points; notably, any critical point of the loss is also a critical point for the MSE loss, but not the other way around. Finally, by performing second-order analysis on the loss in Theorem 2, the exact equations for local minima are derived, which are shown to be as claimed.

Let L̃(A, B) and L(A, B) be the original loss and the proposed loss function, respectively, i.e.,

\tilde{L}(A, B) = \| Y - A B X \|_F^2 , \qquad L(A, B) = \sum_{i=1}^{p} \| Y - A\, I_{i;p}\, B X \|_F^2 .

The first step is to calculate the gradients with respect to A and B and set them to zero to derive the implicit expressions for the critical points. In order to do so, first, in Lemma 5, for a fixed A, we derive the directional (Gateaux) derivative of the loss with respect to B along an arbitrary direction W ∈ R^{p×n}. As shown in the proof of the lemma, d_B L(A, B)W is derived by writing the norm in the loss as an inner product, opening it up using the linearity of the inner product, dismissing second-order terms in W (i.e., O(‖W‖^2)), and rearranging the result as the inner product between the gradient with respect to B and the direction W, where • is the Hadamard product and the constant matrices T_p and S_p were defined in the beginning. Second, the same process is carried out in Lemma 6 to derive d_A L(A, B)V, the derivative of L with respect to A in an arbitrary direction V ∈ R^{n×p} for a fixed B, which is then set to zero to derive the implicit expressions for the critical points. The results are formally stated in the following two propositions.

Proposition 1. For any fixed matrix A ∈ R^{n×p}, the function L(A, B) is convex in the coefficients of B and attains its minimum for any B satisfying the equation

(S_p \circ (A^\top A))\, B\, \Sigma_{xx} = T_p A^\top \Sigma_{yx} ,

where • (written \circ in displays) is the Hadamard (element-wise) product operator, and S_p and T_p are the constant matrices defined in the previous section. Further, if A has no zero column, then L(A, B) is strictly convex in B and has a unique minimum at the critical B,

\hat{B}(A) = (S_p \circ (A^\top A))^{-1} T_p A^\top \Sigma_{yx} \Sigma_{xx}^{-1} ,

which in the autoencoder case becomes B̂(A) = (S_p • (A^⊤A))^{-1} T_p A^⊤. The proof is given in appendix A.2.

Remark 1. Note that as long as A has no zero column, S_p • (A^⊤A) is nonsingular (we will explain the reason soon). In practice, an A with zero columns can always be avoided by nudging the zero columns of A during the gradient descent process.

Proposition 2. For any fixed matrix B ∈ R^{p×n}, the function L(A, B) is a convex function in A. Moreover, for a fixed B, any matrix A that satisfies the corresponding stationarity equation is a critical point of L(A, B). The proof is given in appendix A.3.

Setting d_A L(A, B)V and d_B L(A, B)W to zero for any pair of directions (V, W) yields the implicit equations for the critical points, given below next to the ones derived for L̃(A, B). For L̃(A, B): A^⊤ A B Σ_xx = A^⊤ Σ_yx. For L(A, B): (S_p • (A^⊤A)) B Σ_xx = T_p A^⊤ Σ_yx (in each case, together with the corresponding equation in A).

Remark 2. Notice the similarity: the only difference is the presence of the Hadamard product by S_p on the left and the diagonal T_p on the right. Therefore, practically, the added computational cost of evaluating the gradients is negligible compared to that of the MSE loss.

The next step is to determine the structure of the (A, B) pairs that satisfy the above equations, and to find the subset of those solutions that account for local minima. For the original loss, the first expression (A^⊤ABΣ_xx = A^⊤Σ_yx) is used to solve for B, which is then substituted into the second to derive an expression solely based on A. Obviously, in order to solve the first expression for B, two cases are considered separately: the case where A is of full rank p, so A^⊤A is invertible, and the case of A being of rank r < p. Here, we assume that A of any rank r ≤ p has no zero column (since this can easily be avoided in practice) and consider S_p • (A^⊤A) to be always invertible. Therefore, (A, B) define a critical point of the losses L̃ and L if the respective pair of gradient equations above holds (for L̃(A, B), with full rank A; for L(A, B), with A having no zero column).

Before we state the main theorem, we need the following definitions. First, a rectangular permutation matrix Π_r ∈ R^{r×p} is a matrix in which each column contains at most one nonzero element, with value 1. If the rank of Π_r is r with r < p, then clearly Π_r has p − r zero columns. Also, by taking away those zero columns, the resultant r × r submatrix of Π_r is a standard square permutation matrix. Second, under the conditions provided in Assumption 1, the matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy has an eigenvalue decomposition Σ = UΛU^⊤, where the i-th column of U, denoted u_i, is an eigenvector of Σ corresponding to the i-th largest eigenvalue of Σ, denoted λ_i. Also, Λ = diag(λ_1, · · ·, λ_n) is the diagonal matrix of ordered eigenvalues of Σ, with λ_1 > λ_2 > · · · > λ_n > 0.
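To make the preceding definitions concrete, here is a small numpy sketch of the loss and the closed-form B̂(A). Everything in it is reconstructed: the form of L from the "sum of p squared-error losses" description, the B̂ formula from Proposition 1/Remark 2, and the T_p, S_p definitions from the properties listed in Lemma 2; it should be read as an illustration rather than the authors' code.

```python
import numpy as np

def I_ip(i, p):
    # p x p matrix whose first i diagonal entries are 1, rest 0.
    return np.diag((np.arange(p) < i).astype(float))

def new_loss(A, B, X, Y):
    # L(A, B) = sum_{i=1}^{p} ||Y - A I_{i;p} B X||_F^2 (reconstructed).
    p = A.shape[1]
    return sum(np.linalg.norm(Y - A @ I_ip(i, p) @ B @ X, "fro") ** 2
               for i in range(1, p + 1))

def B_hat(A, X, Y):
    # Closed-form minimizer in B for fixed A (Proposition 1),
    # reconstructed as (S_p o (A^T A))^{-1} T_p A^T S_yx S_xx^{-1},
    # with T_p = diag(p, ..., 1) and (S_p)_{ij} = p - max(i, j) + 1.
    p = A.shape[1]
    T = np.diag(np.arange(p, 0, -1)).astype(float)
    idx = np.arange(1, p + 1)
    S = p - np.maximum.outer(idx, idx) + 1.0
    S_xx, S_yx = X @ X.T, Y @ X.T
    return np.linalg.solve(S * (A.T @ A), T @ A.T @ S_yx) @ np.linalg.inv(S_xx)
```

Note that the naive O(p) sum above is only for clarity; the text states (Lemma 1) that the loss admits an equivalent constant-matrix-product form suitable for large-scale training.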
We use the following notation to organize a subset of eigenvectors of Σ into a rectangular matrix. For any r ≤ p and any ordered r-index set I_r = {i_1 < · · · < i_r}, let U_{I_r} := [u_{i_1}, · · ·, u_{i_r}]. That is, the columns of U_{I_r} are the ordered orthonormal eigenvectors of Σ associated with the eigenvalues λ_{i_1} > · · · > λ_{i_r}. Clearly, when r = p, we have U_{N_p} = [u_1, · · ·, u_p].

Theorem 1. Let A ∈ R^{n×p} and B ∈ R^{p×n} be such that A is of rank r ≤ p. Under the conditions provided in Assumption 1 and the above notation, the matrices A and B define a critical point of L(A, B) if and only if, for some r-index set I_r and a nonsingular diagonal matrix D, A and B are of the form

A = U_{I_r}\, C\, D , \qquad B = D^{-1}\, \Pi_C\, U_{I_r}^\top \Sigma_{yx} \Sigma_{xx}^{-1} , \qquad \Pi_C := (S_p \circ (C^\top C))^{-1} T_p C^\top ,

where C ∈ R^{r×p} is of full rank r with nonzero, normalized columns, and is such that Π_C is a rectangular permutation matrix of rank r and CΠ_C = I_r. For all 1 ≤ r ≤ p, such a C always exists. In particular, if the matrix A is of full rank p, i.e., r = p, the two given conditions on Π_C are satisfied iff the invertible matrix C is a square p × p permutation matrix Π. In this case, (A, B) define a critical point of L(A, B) iff they are of the form A = U_{I_p}ΠD and B = B̂(U_{I_p}ΠD). The proof is given in appendix A.4.

Remark 3. The above theorem provides explicit equations for the critical points of the loss surface in terms of the rank of the decoder matrix A and the eigenvectors of Σ. This explicit structure allows us to further analyze the loss surface and its local/global minima. Here, we provide a proof sketch for the above theorem to make the claims more clear. Again, as a reminder, the EVD of Σ := Σ_yx Σ_xx^{-1} Σ_xy is Σ = UΛU^⊤. For both L̃ and L, the corresponding B̂(A) is substituted for B on the RHS of the critical point equations. For the loss L(A, B), as shown in the proof of the theorem, this results in an identity whose RHS involves ∆ := U^⊤ A T_p (S_p • (A^⊤A))^{-1} T_p A^⊤ U, which is symmetric, so the RHS is symmetric too, and Λ∆ = (Λ∆)^⊤ = ∆^⊤Λ = ∆Λ. Therefore, ∆ commutes with the diagonal matrix of eigenvalues Λ. Since the eigenvalues are assumed to be distinct, ∆ has to be diagonal as well. By Lemma 2, T_p(S_p • (A^⊤A))^{-1}T_p is positive definite, and U is an orthogonal matrix. Therefore, r = rank(A) = rank(∆) = rank(U∆U^⊤), which implies that the diagonal matrix ∆ has r nonzero and positive diagonal entries. There exists an r-index set I_r corresponding to the nonzero diagonal elements of ∆. Forming a diagonal matrix ∆_{I_r} ∈ R^{r×r} by filling its diagonal entries (in order) with the nonzero diagonal elements of ∆, we obtain an identity indicating that the matrix A has the same column space as U_{I_r}. Therefore, there exists a full rank matrix C̃ ∈ R^{r×p} such that A = U_{I_r}C̃. Since A has no zero column, C̃ has no zero column. Further, by normalizing the columns of C̃, we can write A = U_{I_r}CD, where D is diagonal and contains the norms of the columns of C̃. A similar derivation was carried out for full rank A under the loss L̃, giving A_{L̃} = U_{I_p}C̃; but there, C̃ can be any invertible p × p matrix. However, in our case, the matrix C ∈ R^{r×p} corresponding to a rank-r (r ≤ p) matrix A has to satisfy the critical point equations with A replaced by U_{I_r}CD and B̂(A) replaced by B̂(U_{I_r}CD). For the original loss L̃, similar equations appear, but they are satisfied trivially by any invertible matrix C̃. Simplifying those equations using A = U_{I_r}CD, after some algebraic manipulation, results in the two conditions on C stated above. As detailed in the proof of Theorem 1, solving for C leads to its specific structure as laid out in the theorem.

Remark 4. Note that when A is of rank r < p with no zero columns, the invariant matrix C is not necessarily a rectangular permutation matrix; it is only required that Π_C be a rectangular permutation matrix with CΠ_C = I_r.
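Theorem 1 is phrased in terms of the ordered eigenpairs of Σ. A small helper for forming Σ and its ordered eigendecomposition, matching the U and Λ notation above (illustrative names, not from the paper):

```python
import numpy as np

def ordered_eigendecomposition(X, Y):
    # Form Sigma := S_yx S_xx^{-1} S_xy and return its eigenvalues in
    # decreasing order, with the matching eigenvectors as columns of U,
    # i.e., Sigma = U diag(lam) U^T with lam[0] >= lam[1] >= ...
    S_xx, S_xy = X @ X.T, X @ Y.T
    Sigma = S_xy.T @ np.linalg.solve(S_xx, S_xy)
    lam, U = np.linalg.eigh(Sigma)       # eigh returns ascending order
    order = np.argsort(lam)[::-1]
    return lam[order], U[:, order]
```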
It is only when r = p that the invariant matrix C becomes a permutation matrix. Nevertheless, as we show in the following corollary, the global map is always, for all r ≤ p, G = AB = U_{I_r} U_{I_r}^⊤ Σ_yx Σ_xx^{-1}. It is possible to find further structure (in terms of block matrices) for the invariant matrix C when r < p. However, this is not necessary, as we soon show that all rank-deficient matrices A are saddle points for the loss and ideally should be passed by during the gradient descent process. Based on some numerical results, our conjecture is that when r < p, the matrix C can only start with an r × k rectangular permutation matrix of rank r, with r ≤ k ≤ p, and the remaining p − k columns of C are arbitrary as long as none of the columns is identically zero.

Corollary 1. Let (A, B) be a critical point of L(A, B) under the conditions provided in Assumption 1, with rank A = r ≤ p. Then the following are true: 1. BΣ_xxB^⊤ is a p × p diagonal matrix of rank r. 2. For all 1 ≤ r ≤ p, for any critical pair (A, B), the global map G := AB becomes G = U_{I_r} U_{I_r}^⊤ Σ_yx Σ_xx^{-1}. 3. Any critical point of L(A, B) is a critical point of L̃(A, B). The proof is given in appendix A.5.

Remark 5. The above corollary implies that L(A, B) not only adds no extra critical points compared to the original loss L̃(A, B); it also yields the same global map G := AB. It only limits the structure of the invariance matrix C, as described in Theorem 1, so that the decoder matrix A can recover the exact eigenvectors of Σ. Moreover, the identity of Lemma 1 shows that the number of matrix operations required for computing the loss L(A, B) is constant, and thereby independent of the value of p. The proof is given in appendix A.6.

Theorem 2. Let A* ∈ R^{n×p} and B* ∈ R^{p×n} be such that A* is of rank r ≤ p. Under the conditions provided in Assumption 1, (A*, B*) define a local minimum of the proposed loss function iff they are of the form A* = U_{1:p}D_p and B* = B̂(A*), where the i-th column of U_{1:p} is a unit eigenvector of Σ := Σ_yx Σ_xx^{-1} Σ_xy corresponding to the i-th largest eigenvalue and D_p is a diagonal matrix with nonzero diagonal elements. In other words, A* contains the ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima expressed in terms of λ_i, the i-th largest eigenvalue of Σ. The proof is given in appendix A.7.

Remark 6. Finally, the second and third assumptions made in Assumption 1 can be relaxed by requiring only Σ_xx to be full rank. The output data can have a different dimension than the input, that is, Y ∈ R^{n′×m} and X ∈ R^{n×m} with n′ ≠ n. The reason is that the given loss function is structurally very similar to the MSE loss and can be represented as a Frobenius norm on the space of n′ × m matrices. In this case, the covariance matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy is still n′ × n′. Clearly, for under-constrained systems with n′ < n, the full rank assumption on Σ holds. For the overdetermined case, where n′ > n, the second and third assumptions in Assumption 1 can be relaxed: we only require Σ_xx to be full rank, since this is the only matrix that is inverted in the theorems. Note that if p > min(n, n′), then Λ_{I_p}, the p × p diagonal matrix of eigenvalues of Σ for a p-index set I_p, is bound to have some zeros and will be of, say, rank r < p, which in turn results in A having rank r. However, Theorem 1 is proved for A of any rank r ≤ p. Then, following Theorem 2, the first r columns of A converge to ordered eigenvectors of Σ, while the remaining p − r columns span the kernel space of Σ. Moreover, Σ need not have distinct eigenvalues.
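The closed-form value of the loss at the global minima is not legible in this copy of Theorem 2. Under the reconstructed form of the loss used above (a sum of p reduced-rank regression losses), one consistent derivation is the following, since the optimal rank-i residual of each summand is tr(Σ_yy) − Σ_{j≤i} λ_j; this should be read as a plausible reconstruction rather than the paper's exact statement:

```latex
L(A^*, B^*)
  = \sum_{i=1}^{p}\Bigl(\operatorname{tr}(\Sigma_{yy}) - \sum_{j=1}^{i}\lambda_j\Bigr)
  = p\,\operatorname{tr}(\Sigma_{yy}) - \sum_{i=1}^{p}\,(p - i + 1)\,\lambda_i .
```

Consistent with the text, this value depends only on the p largest eigenvalues λ_i and is independent of the diagonal matrix D_p.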
In this case, ∆_{I_r} becomes a block diagonal matrix, where the blocks correspond to identical eigenvalues of Σ_{I_r}. In this case, the corresponding eigenvectors in A* are not unique, but they span the respective eigenspace. The autoencoder setting, where Y = X, is the one applied in our experiments. The weights of the networks are initialized to random numbers with a small enough standard deviation (10^{-7} in our case). We choose to use the Adam optimizer with a scheduled learning rate (starting from 10^{-3} and ending with 10^{-6} in our case), which empirically benefits the optimization process. The two training processes are stopped at the same iteration, at which one of the models first finds all of the principal directions. As a side note, we feed all data samples to the network at one time, with batch size equal to m, although mini-batch implementations are clearly amenable. We use the classical PCA approach to get the ground-truth principal direction matrix A* ∈ R^{n×p}, by conducting Eigenvalue Decomposition (EVD) of XX^⊤ ∈ R^{n×n} or Singular Value Decomposition (SVD) of X ∈ R^{n×m}. As a reminder, A ∈ R^{n×p} stands for the decoder weight matrix of a trained LAE given a loss function L. To measure the distance between A* and A, we propose an absolute cosine similarity (ACS) matrix, inspired by mutual coherence, defined as ACS_{ij} = |⟨A*_i, A_j⟩| / (‖A*_i‖ ‖A_j‖), where A*_i ∈ R^{n×1} denotes the i-th ground-truth principal direction and A_j ∈ R^{n×1} denotes the j-th column of the decoder A, i, j = 1, 2, ..., p. The elements of ACS ∈ R^{p×p} take values in [0, 1], measuring pair-wise similarity across two sets of vectors. The absolute value absorbs the sign ambiguity of principal directions. The performance of LAEs is evaluated by defining the following metrics, where I is the indicator function and ε is a manual tolerance threshold (ε = 0.01 in our case). If two vectors have absolute cosine similarity over 1 − ε, they are deemed equal. Considering that some columns of the decoder may be correct principal directions but not in the right order, we introduce Ratio_TP and Ratio_FP to check the ratio of correct in-place and out-of-place principal directions, respectively. Then Ratio_Total measures the total ratio of the correctly obtained principal directions by the LAE, regardless of order. Datasets: As a proof-of-concept, both synthetic data and real data are used. For the synthetic data, 2000 zero-centered data samples are generated from a 1000-dimensional zero-mean multivariate normal distribution with the covariance matrix being diag(N_p). For the real data, we choose the MNIST dataset, which includes 60,000 grayscale handwritten digit images, each of dimension 28 × 28 = 784. Synthetic Data Experiments: In our experiment, p, the number of desired principal components (PCs), is set to 100, i.e., the dimension is to be reduced from 1000 to 100. Figures 1 and 2 demonstrate a few results. First, during the training process, the loss ratio of both losses continuously decreases to 1, i.e., they both converge to their optimal loss values. However, when both get close enough, L requires more iterations, since the optimizer is forced to find the right directions: it fully converges only after it has found all the principal directions in the right order.
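The exact metric equations are not legible in this copy, so the sketch below reconstructs ACS and the three ratios from the prose (illustrative names, not the authors' evaluation code):

```python
import numpy as np

def acs_matrix(A_star, A):
    # ACS_ij = |<A*_i, A_j>| / (||A*_i|| ||A_j||): absolute cosine
    # similarity between ground-truth directions and decoder columns.
    A_star_n = A_star / np.linalg.norm(A_star, axis=0, keepdims=True)
    A_n = A / np.linalg.norm(A, axis=0, keepdims=True)
    return np.abs(A_star_n.T @ A_n)

def ratios(acs, eps=0.01):
    # Reconstructed from the prose: column j is a correct in-place
    # direction if ACS_jj > 1 - eps; Ratio_Total counts columns that
    # match *some* ground-truth direction regardless of order, and
    # Ratio_FP is the out-of-place remainder.
    p = acs.shape[0]
    hit = acs > 1.0 - eps
    ratio_tp = np.trace(hit) / p
    ratio_total = hit.any(axis=0).mean()
    ratio_fp = ratio_total - ratio_tp
    return ratio_tp, ratio_fp, ratio_total
```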
Second, using the loss L results in finding more and more correct principal directions, with Ratio_TP continuously rising, and ultimately affords all correct and ordered principal directions, with Ratio_TP ending at 100%. (Figure 2: Performance of both losses L and L̃ in finding the principal directions at the columns of their respective decoders.) Notice that occasionally and temporarily, some of the principal directions are found but not at their correct position, which is indicated by the rise of Ratio_FP in the figure. However, as optimization continues, they are shifted to the right column, which results in Ratio_FP going back to zero and Ratio_TP reaching one. As for L̃, it fails to identify any principal directions; both Ratio_TP and Ratio_FP for L̃ stay at 0, which indicates that none of the columns of the decoder Ã aligns with any principal direction. Third, as shown in the figure, while the optimizer finds almost all the principal directions rather quickly, it requires many more iterations to find some final ones. This is because some eigenvalues in the empirical covariance matrix of the finite 2000 samples become very close (the difference becomes less than 1). Therefore, the loss has to get very close to the optimal loss, making the gradient of the loss hard to distinguish between the two. For MNIST, we set the number of principal components (PCs) to 100, i.e., the dimension is to be reduced from 784 to 100. We also try to reconstruct with the top-10 columns found in this case. As shown in Fig. 3, the reconstruction performance of L is consistently better than that of L̃. That also reflects that L̃ does not identify PCs, while L is directly applicable to performing PCA without bells and whistles. In this paper, we have introduced a loss function for performing principal component analysis and linear regression using linear autoencoders. We have proved that optimizing the given loss results in a decoder matrix that converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. We have also demonstrated the claims on a synthetic data set of random samples drawn from a multivariate normal distribution and on the MNIST data set. There are several possible generalizations of this approach that we are currently working on. One is improving performance when the corresponding eigenvalues of two principal directions are very close, and another is generalization of the loss for tensor decomposition.

Before we present the proofs for the main theorems, the following two lemmas introduce some notation and basic relations that are required for the proofs.

Lemma 2. The constant matrices T_p ∈ R^{p×p} and S_p ∈ R^{p×p} are as defined above, i.e., T_p := diag(p, p − 1, ..., 1) and (S_p)_{ij} := p − max(i, j) + 1, and Ŝ_p is defined via T_p^{-1}. Clearly, the diagonal matrix T_p is positive definite. The following properties of the Hadamard product and of the matrices T_p and S_p are used throughout, where • is the Hadamard (element-wise) product: 1. the standard Hadamard-product identities; 2. if Π_1, Π_2 ∈ R^{p×p} are permutation matrices, the corresponding permutation identities hold; 3. S_p is invertible and its inverse is a symmetric tridiagonal matrix; 4. S_p is positive definite; 5. S_p • (A^⊤A) is positive semidefinite; 6. if (the not necessarily full rank) A has no zero column, then S_p • (A^⊤A) is positive definite; 7. let 𝐃, 𝐄 ∈ R^{p×p} be positive semidefinite matrices, where 𝐄 has no zero diagonal element and 𝐃 is of rank r ≤ p; then the stated rank and diagonality properties hold.

Proof (of the items). 2. This is a standard result, and no proof is needed. 3. Directly compute S_p S_p^{-1}. 4. Firstly, note that S_p^{-1} is symmetric and nonsingular, so all its eigenvalues are real and nonzero. It is also a diagonally dominant matrix (Def. 6.1.9), since the dominance inequality holds for all i ∈ {1, · · ·, p}, where the inequality is strict for the first and last rows and holds with equality for the rows in the middle. Moreover, by the Gershgorin circle theorem, since ∀i: C_i ≥ R_i, all the eigenvalues are non-negative. They are also nonzero; hence S_p^{-1} is positive definite, which implies S_p is also positive definite. 6. Clearly, the matrix T_p is obtained by setting the off-diagonal elements of S_p to zero. Hence, since for diagonal matrices the Hadamard product and the matrix product are interchangeable, the latter may also be written as T_p𝐃. The same argument applies to the second identity. 7. This property can easily be proved by induction on p and careful bookkeeping of indices.

Lemma 3 (Simultaneous diagonalization by congruence). Let M_1, M_2 ∈ R^{p×p}, where M_1 is positive definite and M_2 is positive semidefinite. Also, let 𝐃, 𝐄 ∈ R^{r×r} be positive definite diagonal matrices with r ≤ p. Further, assume there is a C ∈ R^{r×p} of rank r ≤ p satisfying the stated congruence relations. Then there exists a nonsingular C̄ ∈ R^{p×p} whose first r rows are the matrix C, and for which the corresponding Ē ∈ R^{(p−r)×(p−r)} is a nonnegative diagonal matrix. Clearly, the rank of M_2 is r plus the number of nonzero diagonal elements of Ē.

Proof. The proof is rather straightforward, since this lemma is a direct consequence of Theorem 7.6.4 in the matrix analysis literature. The theorem basically states that if M_1, M_2 ∈ R^{p×p} are symmetric and M_1 is positive definite, then there exists an invertible S ∈ R^{p×p} such that SM_1S^⊤ = I_p and SM_2S^⊤ is a diagonal matrix with the same inertia as M_2. Here, we have M_2 that is positive semidefinite and C ∈ R^{r×p} of rank r ≤ p satisfying the stated relation. Therefore, since S is of full rank p and 𝐃^{−1/2}C is of rank r ≤ p, there exist p − r rows of S that are linearly independent of the rows of 𝐃^{−1/2}C. Establish C̄ ∈ R^{p×p} by adding those p − r rows to C. Then C̄ has p linearly independent rows, so it is nonsingular and fulfills the lemma's proposition.

Lemma 4. Let A and B define a critical point of L, and let V ∈ R^{n×p} and W ∈ R^{p×n} be perturbation directions satisfying the stated conditions. In particular, in case the critical A is of full rank p, so that (A, B) = (U_{I_p}ΠD, B̂(U_{I_p}ΠD)), for an encoder direction V with ‖V‖_F = O(ε) and W = W̄, the stated second-order expansion of the loss holds.

Proof. As described in appendix B.1, the second-order Taylor expansion for the loss L(A, B) is given by the referenced equation. Now, based on the first item in Corollary 1, BΣ_xxB^⊤ is a p × p diagonal matrix; the substitution then yields the claimed expression. Finally, substituting the above into the expansion and simplifying finalizes the proof.

For the proof of Proposition 1, we use the first- and second-order derivatives of L(A, B) with respect to B derived in Lemma 5. From the referenced equation, we have that, for a given A, the second derivative of the cost L(A, B) with respect to B, at B and in the direction W, is a quadratic form. The matrix Σ_xx is positive definite and, by Lemma 2, S_p • (A^⊤A) is positive semidefinite, so L(A, B) is convex in the coefficients of B for a fixed matrix A. Also, the critical point of L(A, B) for a fixed A is a matrix B that satisfies ∀W ∈ R^{p×n}: d_B L(A, B)W = 0. For a fixed A, the cost L(A, B) is convex in B, so any matrix B that satisfies the above equation corresponds to a minimum of L(A, B). Further, if A has no zero column, then by Lemma 2, S_p • (A^⊤A) is positive definite. Therefore, the cost L(A, B) becomes strictly convex, and the unique global minimum is achieved at B = B̂(A) as defined in Proposition 1.
For the proof of Proposition 2, we use the first- and second-order derivatives of L(A, B) with respect to A, derived in Lemma 6. For a fixed B, the second derivative of L(A, B) with respect to A, at A and in the direction V, is a quadratic form. The matrix Σ_xx is positive definite and, by Lemma 2, the associated matrix is positive semidefinite, so L(A, B) is convex in the coefficients of A for a fixed matrix B. The critical point of L(A, B) for a fixed B is a matrix A that satisfies the stationarity equation for all directions V, which is the equation of Proposition 2.

Before we start, a reminder on notation and some useful identities that are used throughout the proof. The matrix Σ := Σ_yx Σ_xx^{-1} Σ_xy has an eigenvalue decomposition Σ = UΛU^⊤, where the i-th column of U, denoted u_i, is an eigenvector of Σ corresponding to the i-th largest eigenvalue of Σ, denoted λ_i. Also, Λ = diag(λ_1, · · ·, λ_n) is the diagonal matrix of ordered eigenvalues of Σ, with λ_1 > λ_2 > · · · > λ_n > 0. The accompanying identities involving U_{I_r} are then easy to verify.

The sufficient condition: Let A ∈ R^{n×p} of rank r ≤ p with no zero column and B ∈ R^{p×n} be given by the forms in Theorem 1, with the accompanying conditions met. Notice that U_{I_r}^⊤U_{I_r} = I_r implies that DC^⊤CD = DC^⊤U_{I_r}^⊤U_{I_r}CD = A^⊤A, so B = B̂(A). Therefore, based on Proposition 1, for the given A, the matrix B defines a critical point of L(A, B). For the gradient with respect to A, first note that, with B given as above, the matrix Π_C is a rectangular permutation matrix, so Π_C^⊤Λ_{I_r}Π_C is diagonal. Therefore, BΣ_xxB^⊤ is diagonal, and by the identities of Lemma 2, the stationarity equation in A holds. Therefore, based on Proposition 2, for the given B, the matrix A defines a critical point of L(A, B). Hence, A and B together define a critical point of L(A, B).

The necessary condition: Based on Propositions 1 and 2, for A (with no zero column) and B to define a critical point of L(A, B), B has to be B̂(A), and A has to satisfy the stationarity equation in A. In that equation, ∆ := U^⊤ A T_p (S_p • (A^⊤A))^{-1} T_p A^⊤ U is symmetric and positive semidefinite, and the symmetry argument above shows that ∆ commutes with Λ; since the eigenvalues are distinct, ∆ is diagonal with exactly r positive diagonal entries, which shows that A has the same column space as U_{I_r}, and therefore A is exactly of the form A = U_{I_r}CD. Now that the structure of A has been identified, evaluate B̂(A) by setting A = U_{I_r}CD; defining Π_C := (S_p • (C^⊤C))^{-1} T_p C^⊤ gives the claimed form for B. While C has to satisfy the stationarity equation in A, the pair A and B in the given form also has to satisfy the remaining critical-point equation, which provides another condition for C, as follows. First, replace A and B in the critical-point equations by the identities just derived; performing the same process for the second equation, we obtain two equations that C ∈ R^{r×p} has to satisfy. Since C is a rectangular matrix, solving these equations for C in this form seems intractable. We use a trick to temporarily extend C into an invertible square matrix, as follows.

• The relevant matrix M_1 is positive definite and M_2 is positive semidefinite, so they are simultaneously diagonalizable by congruence. Based on Lemma 3 and the two equations, there exists a nonsingular C̄ ∈ R^{p×p} such that C consists of the first r rows of C̄, where ∆̄_{I_r} = ∆_{I_r} ⊕ I_{p−r} is a p × p diagonal matrix and Λ̄_{I_r} = Λ_{I_r} ⊕ Λ̄ is another p × p diagonal matrix, in which Λ̄ ∈ R^{(p−r)×(p−r)} is a nonnegative diagonal matrix.

• Substitute ∆̄_{I_r} into the first equation, then left-multiply by C̄^{-1} and right-multiply by C̄^⊤I_{r;p}.

• Now we can revert everything back to C. Since C consists of the first r rows of C̄, we have C̄^⊤I_{r;p}C̄ = C^⊤C and C̄^⊤I_{r;p}Λ̄_{I_r}C̄ = C^⊤Λ_{I_r}C, which turns the above equation into one purely in C.

• By the second property of Lemma 2, we can collect the diagonal matrices T_p^{-1} around S_p to arrive at an identity in which both 𝐃_r and 𝐄_r are positive semidefinite. Moreover, since by assumption C has no zero columns, 𝐄_r has no zero diagonal element. Then the 7th property of Lemma 2 implies the following two results. 1. The matrix 𝐃_r is diagonal. The rank of 𝐃_r is r, so it has exactly r positive diagonal elements, and the rest are zero; by the same argument, the rank-r matrix T_p^{-1}(S_p • (C^⊤C))Π_C should have p − r zero rows. Let J_r be an r-index set corresponding to the nonzero diagonal elements of Π_C^⊤Λ_{I_r}Π_C. Then the matrix Π_C[J_r, N_r] (the r × r submatrix of Π_C consisting of its J_r rows) is nonsingular. 2. For every i, j ∈ J_r with i ≠ j, (𝐄_r)_{i,j} = 0. Since 𝐄_r := C^⊤C, (𝐄_r)_{i,j} is the inner product of the i-th and j-th columns of C, and we conclude that the columns of C[N_r, J_r] (the r × r submatrix of C consisting of its J_r columns) are orthogonal; in other words, C[N_r, J_r]^⊤C[N_r, J_r] is diagonal. The columns of C are normalized; therefore C[N_r, J_r]^⊤C[N_r, J_r] = I_r, and hence C[N_r, J_r] is an orthogonal matrix.

• We use the two results to solve the original pair of equations. First, use Π_C := (S_p • (C^⊤C))^{-1}T_pC^⊤ to shrink them. Next, by the first result, we obtain CΠ_C = I_r, which is one of the two claimed conditions. What is left is to show that Π_C is a rectangular permutation matrix. From the first result, we also have that Π_C has exactly r nonzero columns, indexed by J_r. By the second result, C[N_r, J_r] is an orthogonal matrix; therefore the relevant product is an r × r positive definite diagonal matrix and, with Λ_{I_r} having distinct diagonal elements, it should be a square permutation matrix. Putting back the zero columns, we conclude that C should be such that Π_C is a rectangular permutation matrix and CΠ_C = I_r. Note that it is possible to further analyze these conditions and determine the exact structure of C. However, this is not needed in general for the critical point analysis of the next theorem, except in the case where r = p and C is a square invertible matrix. In this case, the square matrix Π_C is of full rank p and reduces to a permutation matrix Π, which verifies the claimed forms for A and B when A is of full rank p.

A.5 PROOF OF COROLLARY 1. 1. We already showed in the proof of Theorem 1 that, for a critical (A, B), the matrix BΣ_xxB^⊤ is diagonal; the matrix Π_C is a p × r rectangular permutation matrix and the diagonal matrix Λ_{I_r} is of rank r, therefore BΣ_xxB^⊤ is of rank r. 2. (A, B) is of the form given in Theorem 1 with the preceding conditions on the invariance C; therefore the global map is G = AB = U_{I_r}U_{I_r}^⊤Σ_yxΣ_xx^{-1}. 3. Again, by assumption, (A, B) define a critical point of L(A, B), so by Theorem 1 they are of the stated form with the preceding conditions on the invariance C; hence the first critical-point equation of the MSE loss is satisfied. For the second equation, we use the first property of this corollary, namely that BΣ_xxB^⊤ is diagonal, together with the stationarity equation of Proposition 2; hence the second condition is also satisfied. Therefore, any critical point of L(A, B) is a critical point of L̃(A, B).

A.6 PROOF OF LEMMA 1. Proof. Direct computation yields the claimed identity.

A.7 PROOF OF THEOREM 2. Proof. The full rank matrices A* and B* given in the theorem are clearly of the form given by Theorem 1 with I_p = N_p := {1, 2, · · ·, p} and Π_p = I_p. Hence, they define a critical point of L(A, B). We want to show that these are the only local minima, that is, any other critical (A, B) is a saddle point. The proof is similar to the second partial derivative test. However, in this case the Hessian is a fourth-order tensor. Therefore, the second-order Taylor approximation of the loss, derived in Lemma 4, is used directly. To prove the necessary condition, we show that at any other critical point (A, B), where the first-order derivatives are zero, there exists an infinitesimal direction along which the second derivative of the loss is negative. Next, for the sufficient condition, we show that any critical point of the form (A*, B*) is a local and global minimum. Recall that U_{I_p} is the matrix of eigenvectors indexed by the p-index set I_p and Π is a p × p permutation matrix. Since all the index sets I_r, r ≤ p, are assumed to be ordered, the only way to have U_{N_p} = U_{I_p}Π is by having I_p = N_p and Π = I_p. Let A (with no zero column) and B define an arbitrary critical point of L(A, B). Then, based on the previous theorem, either A = U_{I_r}CD with r < p or A = U_{I_p}ΠD, while in both cases B = B̂(A). If (A, B) is not of the form (A*, B*), then there are three possibilities: either 1) A = U_{I_r}CD with r < p, 2) A = U_{I_p}ΠD with I_p ≠ N_p, or 3) A = U_{N_p}ΠD with Π ≠ I_p. The first two cases correspond to not having the "right" and/or "enough" eigenvectors, and the third corresponds to not having the "right" ordering. We introduce the following notation and investigate each case separately. Let ε > 0 and let U_{i;j} ∈ R^{n×p} be a matrix of all zeros except the i-th column, which contains u_j, the eigenvector of Σ corresponding to the j-th largest eigenvalue. Accordingly, E_i ∈ R^{p×p} is the matrix of zeros except for the i-th diagonal element, which contains 1. In what follows, for each case we define an encoder direction V ∈ R^{n×p} with ‖V‖_F = O(ε) and set the decoder direction W accordingly. Then we use the expansions of Lemma 4 to show that the given direction (V, W) infinitesimally reduces the loss; hence, in every case, the corresponding critical (A, B) is a saddle point. 1. For the case A = U_{I_r}CD with r < p, note that, based on the first item in Corollary 1, BΣ_xxB^⊤ is a p × p diagonal matrix of rank r, so it has p − r zero diagonal elements. Pick an i ∈ N_p such that (BΣ_xxB^⊤)_{ii} is zero and a j ∈ N_p \ I_r. Set V = εU_{i;j}D and W = W̄. Notice that ‖V‖_F, ‖W‖_F = O(ε), so based on the expansion of Lemma 4, L(A + V, B + W) − L(A, B) < 0, since the matrix involved is positive definite. Hence, any (A, B) = (U_{I_r}CD, B̂(U_{I_r}CD)) with r < p is a saddle point. 2. Next, consider the case where A = U_{I_p}ΠD with I_p ≠ N_p.
Then there exists at least one j ∈ I_p \ N_p and i ∈ N_p \ I_p such that i < j (so λ_i > λ_j). Let σ be the permutation corresponding to the permutation matrix Π. Also, let ε > 0 and let U_{σ(j);i} ∈ R^{n×p} be a matrix of all zeros except the σ(j)-th column, which contains u_i, the eigenvector of Σ corresponding to the i-th largest eigenvalue. Set V = εU_{σ(j);i}D and W = W̄. Then, since i ∉ I_p, and since ‖V‖_F, ‖W‖_F = O(ε), the expansion of Lemma 4 applies. Note that the diagonal matrix Π^⊤Λ_{I_p}Π has the same diagonal elements as Λ_{I_p}, but permuted by σ. So E_{σ(j)}Π^⊤Λ_{I_p}Π selects the σ(j)-th diagonal element of Π^⊤Λ_{I_p}Π, that is, the j-th diagonal element of Λ_{I_p}, which is nothing but λ_j. Now, since i < j, we have λ_i > λ_j, so the loss decreases along (V, W), and such a critical point is a saddle point. 3. Finally, consider the case where A = U_{N_p}ΠD with Π ≠ I_p. Since Π ≠ I_p, the permutation σ of the set N_p corresponding to the permutation matrix Π has at least one nontrivial cycle. Hence, Π can be decomposed as Π = Π_{(i_1 i_2 ··· i_k)}Π̃, where Π̃ is the permutation matrix corresponding to the other cycles of σ. The cycle (i_1 i_2 · · · i_k) can be decomposed into transpositions. Note that Π_{(i_k i_1)}, the permutation matrix corresponding to the transposition (i_k i_1), is a symmetric involutory matrix, i.e., Π_{(i_k i_1)}^2 = I_p. Set V = ε(U_{i_1;i_1} − U_{i_k;i_k})ΠD and W = W̄. Again, we substitute V and W into the expansion of Lemma 4. There are some tedious steps to simplify the equation, which are given in appendix A.7.1. The final result is as follows: with the given V and W, the third and fourth terms of the RHS are canceled, and the first two terms simplify to an expression involving m = max{k − 1, 2}; that is, m = k − 1 for a longer cycle, and m = 2 if the selected cycle is just a transposition. By the above definition of i_m, we have i_m − i_1 > 0, and since the corresponding eigenvalues are ordered, the first term in the resulting expression is negative; as ε → 0, we have L(A + V, B + W) − L(A, B) < 0. Therefore, any (A, B) = (U_{I_p}ΠD, B̂(U_{I_p}ΠD)) with Π ≠ I_p is a saddle point. For the sufficient condition, from Lemma 1 we know that the loss L(A, B) can be written in the compact form of the referenced equation; using this, the value of the loss at (A*, B*) follows, as claimed. Notice that this value is independent of the diagonal matrix D_p. From the necessary condition, we know that any critical point not of the form (A*, B*) is a saddle point. Hence, due to the convexity of the loss, at least one (A*, B*) is a global minimum; but since the value of the loss at (A*, B*) is independent of D_p, all these critical points yield the same value for the loss. Therefore, any critical point of the form (A*, B*) is a local and global minimum.

A.7.1. We investigate each term on the RHS separately, but first note that σ̃ and its functional inverse σ̃^{-1} are the permutations corresponding to Π̃ and Π̃^⊤, respectively, and that Π^⊤T_pΠ is a diagonal matrix whose diagonal elements are those of T_p ordered according to σ̃^{-1}. Moreover, recall that we decomposed the permutation matrix Π in A using a cycle, where i_1, i_2, · · ·, i_k are fixed points of Π̃. Therefore, with σ̃ being the permutation corresponding to Π̃, we obtain the stated identity with m = max{k − 1, 2}; again, if the selected cycle is just a transposition, m = 2. For the first term, we obtain the claimed equation, and a similar computation handles the second term. Finally, we have to show that the third and fourth terms cancel. First, observe that in both cases the matrices that are multiplied elementwise with Π^⊤S_pΠ are diagonal; hence we only need to look at the diagonal elements of Π^⊤S_pΠ. Moreover, since i_1, · · ·, i_k are fixed points of the permutation corresponding to Π̃, the matrix Π̃^⊤S_pΠ̃ has the same values at diagonal positions i_1 and i_k as the original matrix S_p. The only permutation acting only on the left side is Π_{(i_1 i_k)}, which exchanges the i_1 and i_k rows of S_p. Since S_p is such that the elements in each row before the diagonal element are the same, and i_k > i_1, the i_1 and i_k diagonal elements of Π^⊤S_pΠ have the same value; let that value be denoted s. Then the sum of the above two equations yields m(λ_{i_1} + λ_{i_k}) − m(λ_{i_1} + λ_{i_k}) = 0, as claimed. In order to derive and analyze the critical points of the cost function, which is a real-valued function of matrices, we use first- and second-order Fréchet derivatives, as described in chapter 4 of standard references. For a function f: R^{n×m} → R, the first-order Fréchet derivative at the point A ∈ R^{n×m} is a linear functional df(A): R^{n×m} → R satisfying the usual first-order approximation property; if ‖V‖_F, ‖W‖_F = O(ε), then the remainder satisfies R(V, W) = O(ε³). Clearly, at critical points, where d_A L(A, B)V + d_B L(A, B)W = 0, as ε → 0 we have R_{V,W}(A, B) → 0, and the sign of the sum of the second-order partial Fréchet derivatives determines the type of the critical point, very much like the second partial derivative test for functions of two variables. However, here, for local minima we have to show that the sign is positive in all directions, and for saddle points we have to show that the sign is positive in some directions and negative in at least one direction. Finally, note that the smoothness of the loss entails that the Fréchet derivative and the directional (Gateaux) derivative both exist and (foregoing some subtleties in definition) are the same.
A new loss function for PCA with linear autoencoders that provably yields ordered exact eigenvectors
1,496
scitldr
Users have tremendous potential to aid in the construction and maintenance of knowledge bases (KBs) through the contribution of feedback that identifies incorrect and missing entity attributes and relations. However, as new data is added to the KB, the KB entities, which are constructed by running entity resolution (ER), can change, rendering the intended targets of user feedback unknown, a problem we term identity uncertainty. In this work, we present a framework for integrating user feedback into KBs in the presence of identity uncertainty. Our approach is based on having user feedback participate alongside mentions in ER. We propose a specific representation of user feedback as feedback mentions and introduce a new online algorithm for integrating these mentions into an existing KB. In experiments, we demonstrate that our proposed approach outperforms the baselines in 70% of experimental conditions. Structured knowledge bases (KBs) of entities and relations are often incomplete and noisy, whether constructed by hand or automatically. For example, it has been reported that 71% of people in Freebase are missing a place of birth attribute and 75% have no known nationality BID5. Similarly, while YAGO2 is estimated to be about 95% accurate on facts extracted from Wikipedia, this translates to roughly 5.7 million incorrect facts involving 2.6 million entities BID11. The vast research in cleaning and correction of databases is further evidence of the permeation of errors throughout KB construction in multiple domains BID5. As the primary consumers of KBs, human users have significant potential to aid in KB construction and maintenance. From a user's standpoint, a KB contains a set of entities, each entity possessing attributes and optionally participating in relationships with other entities. Thus, KB errors manifest as spurious and missing attributes and relationships. However, the data that gives rise to a KB is a collection of raw evidence, which can be understood as mentions that require clustering by entity resolution (ER) into a set of inferred entities. The attributes and relations of the inferred KB entities with which the user interacts are drawn from this underlying clustering of the mentions.

(Figure 1: Example of identity uncertainty inherent in user feedback. The figure represents the state of the KB for the entity Rajarshi Das at times t = 0, 1, 2, before and after two pieces of feedback are given; the second piece of feedback causes a split. User Feedback #1, which provides the homepage attribute, could refer to either Rajarshi Das-1 or -2, neither of which existed at the time the feedback was given.)

Therefore, the spurious and missing attributes and relationships may stem from a variety of sources, including: noisy mentions produced by information extraction, mistakes in ER, missing data, etc. In light of new data that is continually being added to a KB, inferred KB entities may change. Specifically, the arrival of new mentions and user feedback can trigger modifications of the underlying mention clustering, resulting in the creation of new inferred entities, removal of previously inferred entities, or alteration of the existing inferred entities' attributes and relations.
The volatility of the underlying mention clustering poses a formidable challenge to the task of integrating user feedback with KB content, especially when the precise targets of feedback are unknown, a phenomenon known as identity uncertainty. As an example, consider Figure 1, which displays an entity in a KB of researchers and user feedback provided about that entity. First, a user notices the entity Rajarshi Das is missing a homepage and so provides rajarshd.github.io. Later, a user provides feedback that the paper, which was published in 1991 and titled Genetic Reinforcement Learning with Multilayer Neural Networks, was not written by the Rajarshi Das affiliated with University of Massachusetts Amherst. This feedback causes the Rajarshi Das entity to be split into two entities. After the split, it cannot be determined which of the two newly created entities should have the homepage provided by the first piece of user feedback. The uncertainty arises from the ambiguity of which true identity of Rajarshi Das is referred to in the user feedback providing the homepage. In this paper, we present a new framework for reasoning about user feedback amidst identity uncertainty (§3, §4). Our approach is founded on the idea that user feedback should participate alongside mentions in ER. In experiments, we study the task of disambiguating attributed publications with respect to a set of currently inferred KB entities (§6.5). We propose three baseline approaches for integrating user feedback and two feedback simulation schemes, and measure the number of pieces of feedback required to recover the ground-truth entities under each experimental setting. Our results show that our proposed approach based on FMs outperforms the baselines in 70% of experimental conditions. Our work initiates the investigation of user feedback integration amidst identity uncertainty in KBs, an under-explored and important problem whose solution would dramatically improve the effectiveness of users in the process of KB construction and maintenance. Our goal is to construct a framework for automatically reasoning about the integration of user feedback and KBs under identity uncertainty. In this section, we define formal models for mentions and entities, which serve as building blocks for the remainder of our discussion. A KB is comprised of a set of mentions M = {x_0, · · ·, x_n} which refer to a set of ground-truth entities E = {e_0, · · ·, e_k}. Each mention, x_i, refers to exactly one ground-truth entity, denoted e(x_i). The goal in ER is to construct a partition of the mentions, Ê = {ê_0, · · ·, ê_l}, as similar to E as possible. Each ê ∈ Ê is known as an inferred entity. Mentions are comprised of attributes, which serve as evidence of inferred entity attributes and relations. Each mention, x_i ∈ M, has a corresponding set of attributes, A(x_i) = {a_0, · · ·, a_m}, which is a subset of the entire set of attributes A, i.e., A(x_i) ⊂ A. Any subset of mentions, e, also has a corresponding set of attributes, A(e) ⊂ A, that is derived from its underlying mentions in a process called canonicalization. We focus on a simple method of canonicalization, which derives the attributes of a set of mentions, e, as the union of the attributes of the corresponding mentions. Our model of mentions, entities and attributes is reminiscent of previously proposed Bayesian models used for ER BID27. Like many instances of previous work, we choose to model inferred entities hierarchically BID4 BID26 BID33 BID14 BID17 BID36.
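The data model above is simple to realize in code. The sketch below (illustrative names, not the authors' implementation) captures a mention with its attribute set and the union-based canonicalization of a set of mentions:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Mention:
    # A mention x_i with its attribute set A(x_i); the ground-truth
    # entity e(x_i) is kept only for evaluation purposes.
    mention_id: str
    attributes: set = field(default_factory=set)
    gold_entity: Optional[str] = None

def canonicalize_union(mentions):
    # Simple canonicalization: the attribute set A(e) of a set of
    # mentions e is the union of its members' attribute sets.
    attrs = set()
    for m in mentions:
        attrs |= m.attributes
    return attrs
```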
Hierarchical modeling allows for the usage of a learned entity-level linkage function during inference, i.e., a function g: 2^M × 2^M → R, which scores the compatibility of two sets of mentions (rather than pairs of mentions). Learned linkage functions have been shown to improve entity resolution systems by helping to identify set-level inconsistencies and similarities with respect to attributes like gender, animacy and number (singular/plural) BID24 BID7 BID3. Additionally, hierarchical models promote efficiency in inference and facilitate the representation of uncertainty by encoding multiple partitions of the underlying mentions simultaneously BID35 BID10 BID8 BID12. We model the set of inferred entities using a binary tree T. Each leaf, l ∈ T, stores a unique mention, e.g., l.x = x_i, and each internal node, v ∈ T, represents the set of mentions stored at its descendant leaves, lvs(v). Each node v ∈ T stores an attribute map, m: A → R, that maps attributes to their corresponding weights. The attribute map at each leaf, l.m, maps all attributes in l.x to a weight of 1. The attribute map of an internal node, v.m, is constructed via canonicalization. For now, consider a canonicalization procedure that constructs the attribute maps of internal nodes as follows:

v.m[a] = Σ_{c ∈ ch(v)} c.m[a],   (1)

where ch(·) returns the children of its argument, a is an attribute, and m[a] is the weight of a in the map m. In words, the weight of an attribute in v.m is the sum of that attribute's weights in v's children's maps. A subset of mentions exhibits an attribute a if the weight of a in the corresponding attribute map is greater than 0; attributes that do not appear in an attribute map are treated as having weight 0. The compatibility of any two nodes in the tree can be scored via the linkage function, g. Each node, v, stores its linkage score, v.σ, where the linkage score of a node is computed by evaluating g on the attribute maps of its two children, ch(v). The linkage score of each leaf is positive infinity. Once the linkage scores of all nodes in a tree, T, are computed, the set of inferred entities, Ê, can be extracted from T using a threshold, τ. In particular, the inferred entities correspond to the tallest nodes in T whose descendants all possess linkage scores greater than or equal to the threshold. Despite significant research effort, ER models are inevitably imperfect and lead to partitions in which mentions from different ground-truth entities are clustered together, or mentions from the same ground-truth entity are split apart. As previously discussed, KB users are well-situated to identify these errors so that the underlying partition of the mentions can be adjusted. However, as with KB mentions, identity uncertainty may permeate user feedback, and it must be resolved. In this section, we present a formal model of user feedback that enables joint resolution of identity uncertainty with respect to both mentions and feedback simultaneously. Recall the example of identity uncertainty shown in Figure 1. The example shows how there can be ambiguity about the entity to which a piece of feedback refers. In the figure's example, the homepage of Rajarshi Das is given in User Feedback #1. The entity about which the feedback is given is later split, resulting in uncertainty regarding the identity to which it refers. At a high level, we propose to represent user feedback as mentions. In this way, ER may reason about user feedback and standard mentions jointly. More precisely, each piece of user feedback is represented as a feedback mention (FM).
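A minimal sketch of the weighted attribute maps and the canonicalization of eq. (1), using Python Counters for the maps (illustrative, not the authors' code):

```python
from collections import Counter

def leaf_attribute_map(mention):
    # A leaf maps every attribute of its stored mention to weight 1.
    return Counter({a: 1 for a in mention.attributes})

def canonicalize(child_maps):
    # Eq. (1): the weight of attribute a at an internal node is the
    # sum of a's weights in the children's attribute maps.
    parent_map = Counter()
    for m in child_maps:
        parent_map.update(m)       # update() adds the counts
    return parent_map

def exhibited_attributes(attr_map):
    # A node exhibits attribute a iff a's weight is greater than 0.
    return {a for a, w in attr_map.items() if w > 0}
```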
Like standard mentions, each FM, f, possesses an attribute map, called its packaging, f.m_pack: A → R, that defines its compatibility with mentions and other FMs alike via the linkage function. In the context of hierarchical ER (§2.2), this translates to each piece of user feedback being stored at a leaf in the tree, just like standard mentions. ER is run over all mentions and feedback together, which allows the feedback to impact ER. An example FM and its integration in ER, using the OG algorithm (§4), is shown in Figure 2.

One key difference between FMs and standard mentions is that an FM may be initialized with attributes that map to negative weights. Negatively weighted attributes allow feedback to express incompatibility with other mentions and FMs. Thus, negatively weighted attributes are used to encourage splits in the underlying partition of the mentions. For example, in the scientific KB discussed above (Section 1), the user supplies feedback claiming that Rajarshi Das from UMass Amherst did not author the paper Genetic Reinforcement Learning with Multilayer Neural Networks from 1991. This feedback can be represented as a mention with the name (attribute) "Rajarshi Das" and institution "UMass Amherst" mapped to positive weights and the attribute for the 1991 paper mapped to a negative weight. The addition of such feedback encourages a split of the inferred KB entity Rajarshi Das, which is built from mentions of two distinct ground-truth entities as shown in Figure 1. Negatively weighted attributes also allow for the correction of spurious inferred entity attributes that stem from noisy mentions (i.e., mentions with incorrectly extracted attributes).

Consider an instance of hierarchical ER in which there exists a node, v, with a spurious attribute a that stems from a noisy mention, and let v.m_pack[a] = 1. Since the error stems from noise, the underlying partition of the mentions may not require adjustment; v simply requires the removal of attribute a. This can be accomplished with negatively weighted attributes and canonicalization (§2.2). Specifically, consider an FM, f, with f.m_pack[a] = −1. If f were made a descendant of v, through canonicalization, the negative weight would be propagated up the tree and v.m_pack[a] = 0, effectively removing the spurious attribute.

In some cases, like the one described above, an FM may be incompatible with its intended target according to the linkage function (especially when attempting to correct a spurious attribute). Therefore, each FM, f, is endowed with a second attribute map, called its payload, f.m_pay. The attributes in an FM's payload are not used by the linkage function to assess its compatibility with other nodes in T. However, payload attributes do affect canonicalization. Specifically, for a parent p and children v and v′, the packaging at p is computed by summing the weights in the packaging and payload maps of both v and v′. This allows attributes in the payloads of v and v′ to remain "hidden" from one another until the nodes are merged, at which point these attributes are propagated to their parent and used in subsequent compatibility computations. Using the aforementioned canonicalization notation, the attribute map for the parent node p is:

∀a ∈ A: p.m_pack[a] = Σ_{c ∈ {v, v′}} (c.m_pack[a] + c.m_pay[a]).

Intuitively, the packaging can be thought of as a set of attributes used to guide the initial placement of an FM, while the payload contains missing and incompatible attributes that may negatively affect the FM's placement.
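The packaging/payload mechanics can be sketched as below. The structure is our own assumed rendering of the description, not the authors' code; it shows how a negatively weighted payload attribute cancels a spurious attribute through canonicalization.

from collections import Counter

def parent_packaging(v_pack, v_pay, w_pack, w_pay):
    """p.m_pack[a] = sum over both children of (packaging + payload) weights."""
    p = Counter()
    for m in (v_pack, v_pay, w_pack, w_pay):
        for a, wt in m.items():
            p[a] += wt
    return p

# A node with a spurious attribute "paper:1991" (weight 1), merged with an FM
# whose payload carries weight -1 for it: the attribute's weight becomes 0.
node_pack = Counter({"name:R. Das": 2, "paper:1991": 1})
fm_pack   = Counter({"name:R. Das": 1})          # guides initial placement
fm_pay    = Counter({"paper:1991": -1})          # hidden correction
merged = parent_packaging(node_pack, Counter(), fm_pack, fm_pay)
assert merged["paper:1991"] == 0                 # spurious attribute removed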
Our proposed representation of feedback as mentions is compatible with any hierarchical inference algorithm for ER. In real-world settings, new data and feedback are created and integrated with the KB continuously over time. Thus, we focus on online KB construction, where data points (i.e., mentions and user feedback) are integrated with the KB one at a time. In this section, we present the online grafting (OG) algorithm for online, hierarchical ER. At all times, the algorithm maintains a tree-consistent partition corresponding to the set of inferred KB entities. This partition is computed via a threshold, τ (which can be learned at training time). The algorithm is comprised of two recursive subprocedures: swap (swap_l) and graft, which promote local and global optimality, respectively.

The OG algorithm proceeds as follows. When a new data point, x, arrives, it is added as a sibling of its nearest neighbor leaf, v. Note that adding x as a sibling of v makes v's previous sibling, a, into the aunt of both x and v. Next, we invoke the swap_l subroutine recursively. During a swap_l, consider x, v and a and check whether g(v, a) > g(v, x), i.e., whether v and its previous sibling, a, are more compatible than v and x. If so, swap the positions of the two subtrees rooted at x and a (the result of which has a and v as siblings and x as their aunt). Repeat this procedure until x's sibling is more compatible with x than with its previous sibling, or until x's sibling is the root of an inferred entity. See Algorithms 1 and 2 for pseudocode.

After swapping terminates, the graft subroutine is invoked recursively from the parent of x, par(x) = p. A graft invoked from a node whose linkage score is below the threshold, τ, terminates immediately and no further grafts are attempted. If p.σ > τ, search the leaves of T for v′, the most compatible leaf with p that is not a descendant of p. Test whether g(p, v′) > g(p, sib(p)), g(p, v′) > g(v′, sib(v′)), and g(p, v′) > τ, i.e., whether p and v′ are more compatible with each other than with their respective siblings, and their linkage score is higher than the threshold τ. If the test succeeds, make v′ the sibling of p and re-invoke the graft subroutine from par(p). If the test fails, consider three cases depending on which of the comparisons fails; in the final case, repeat the test between p and par(v′).

Intuitively, the graft subroutine iteratively attempts mergers between ancestors of x and nodes compatible with those ancestors in T. Notice that a merger between two nodes in T can only occur if: 1) both nodes are more compatible with each other than with their siblings, and 2) their resultant parent has a linkage score higher than the threshold, i.e., the two nodes belong to the same inferred entity. The graft subroutine promotes global optimality and helps to make ER more robust to data point arrival order. See Algorithm 3 for pseudocode and Figure 4 for an illustration of the algorithm's tree operations.

Recall that the linkage function, g, takes two nodes, v, v′ ∈ T, and returns their compatibility. Define the precision of a node pair (v, v′) to be

prec(v, v′) = |{(l, l′) ∈ lvs(v) × lvs(v′) : e*(l) = e*(l′)}| / (|lvs(v)| · |lvs(v′)|).

In words, the precision of a pair is the fraction of leaf-pairs, where one leaf is a descendant of v and the other a descendant of v′, in which both leaves belong to the same ground-truth entity. Note that FMs have no ground-truth label and are not included in this calculation. We train g to regress to the precision of its input node pair.
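A runnable toy sketch of the insertion and swap_l steps is given below; the graft subroutine is only indicated. The TNode structure, the Jaccard stand-in for g, and all helper names are our own illustrative assumptions, not the paper's implementation, and the inferred-entity stopping condition of swap_l is omitted for brevity.

class TNode:
    def __init__(self, attrs=frozenset(), children=None):
        self.parent = None
        self.children = children or []
        for c in self.children:
            c.parent = self
        self.attrs = attrs if not self.children else (
            self.children[0].attrs | self.children[1].attrs)

def g(a, b):  # toy linkage: Jaccard similarity of attribute sets
    u = a.attrs | b.attrs
    return len(a.attrs & b.attrs) / len(u) if u else 0.0

def leaves(v):
    return [v] if not v.children else leaves(v.children[0]) + leaves(v.children[1])

def refresh(v):  # recompute attribute unions up to the root after a change
    while v is not None:
        if v.children:
            v.attrs = v.children[0].attrs | v.children[1].attrs
        v = v.parent

def sibling(v):
    p = v.parent
    if p is None:
        return None
    return p.children[1] if p.children[0] is v else p.children[0]

def attach_as_sibling(v, x):
    p = v.parent
    new = TNode(children=[v, x])          # sets parents and the union of attrs
    if p is not None:
        p.children[p.children.index(v)] = new
    new.parent = p
    refresh(new)

def og_insert(root, x):
    v = max(leaves(root), key=lambda l: g(x, l))   # nearest-neighbor leaf
    attach_as_sibling(v, x)
    while True:                                    # the swap_l loop
        s = sibling(x)                             # x's current sibling
        a = sibling(x.parent)                      # x's aunt (s's old sibling)
        if a is None or g(s, a) <= g(s, x):
            break
        p1, gp = x.parent, x.parent.parent         # swap subtrees x and a
        gp.children[gp.children.index(a)] = x
        p1.children[p1.children.index(x)] = a
        a.parent, x.parent = p1, gp
        refresh(a)
    # the graft subroutine (omitted here) would now run from x.parent upward
    r = x
    while r.parent is not None:
        r = r.parent
    return r

l1 = TNode(frozenset({"name:A", "t:ml"}))
l2 = TNode(frozenset({"name:B", "t:db"}))
root = TNode(children=[l1, l2])
root = og_insert(root, TNode(frozenset({"name:A", "t:ml", "i:UMass"})))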
Our training procedure is inspired by work in entity and event coreference that trains a linkage function to regress to the precision of mergers in the context of bottom-up agglomerative clustering BID13. We select nodes for training in the context of running the OG algorithm. When a new mention, x, arrives, we generate training examples between it and all other leaves (mimicking the nearest neighbor search). Next, we generate training examples between x and the ancestors of its nearest neighbor (according to model score), resembling the swap_l subroutine. Finally, we insert x into the tree and generate training examples between the parent of x (from which the graft subroutine is initiated) and all nodes that can be viably grafted. The same procedure is repeated for each incoming mention.

The linkage function, g, also needs to be trained to appropriately handle user feedback. After a batch of N training mentions has been added to a tree, we generate up to N pieces of user feedback (generated via the Detailed scheme, discussed in Section 6). The generated feedback is either positive and intended to encourage a graft, or negative and intended to encourage a split of some inferred entity. The training feedback is also inserted into the tree, one piece at a time, using the OG algorithm. For each piece of feedback, we generate a training example between the feedback and its intended sibling in the tree (designated at feedback generation time), and set the precision of the example to be 1.0. Next, a merge between the feedback and its intended sibling is hallucinated, and a training example is generated between the resulting parent node and the feedback's target. For positive feedback, the target is the root of a subtree that belongs near the feedback's sibling in the tree, and for negative feedback, the target is a node with which the feedback and its sibling are incompatible. These two types of examples help to train the model to use positive feedback to encourage grafting and negative feedback to encourage splitting.

We tune the threshold, τ, on a development set. In particular, at regular intervals throughout training, a hierarchical clustering of a set of development mentions is constructed using the OG algorithm and the current linkage function model parameters. A search is performed measuring the pairwise F1 score for the selection of entities determined by each value of τ. At test time, the parameters and threshold that resulted in the best partition of the hierarchy on the dev set are used.

We perform experiments testing various styles of feedback representation in the context of online author disambiguation, a particular instantiation of ER in which the ground-truth entities are real-world authors. We use the Rexa author disambiguation dataset, which includes 8 author canopies; each canopy contains ambiguous mentions of authors with the same first initial and last name BID4. The mentions are derived from publications and contain: coauthors, titles, publishing venue, and year of publication. The goal is to partition the mentions by real-world author.

Our experimental setup is composed of two phases. In the first phase, the set of mentions arrives online and the mentions are incrementally added to a hierarchical clustering, T, using the OG algorithm (§4). The second phase proceeds in rounds. At the start of round t, a set of inferred entities, Ê_t, is constructed using a threshold, τ, tuned at training time on a development set (§5). If Ê_t = E, the episode terminates.
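The regression target used in training follows directly from the precision definition above; the helper below is a hypothetical illustration of how such targets could be computed, with FMs (which carry no ground-truth label) excluded as described.

def pair_precision(leaves_v, leaves_w, label):
    """Regression target for a candidate merge: fraction of cross leaf-pairs
    whose mentions share a ground-truth entity; unlabeled FMs are skipped."""
    pairs = [(l, r) for l in leaves_v for r in leaves_w
             if label(l) is not None and label(r) is not None]
    if not pairs:
        return 0.0
    return sum(label(l) == label(r) for l, r in pairs) / len(pairs)

# e.g., with ground-truth map e*: a cluster pair of the same author scores 1.0
estar = {"x0": "e0", "x1": "e0", "x2": "e1", "f0": None}
print(pair_precision(["x0"], ["x1", "f0"], estar.get))  # -> 1.0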
Otherwise, we simulate user interaction by generating feedback, f_t, made with respect to a randomly selected inferred entity, ê ∈ Ê_t. Then, f_t is added to T using the OG algorithm, potentially triggering a repartitioning of the mentions. No more than 400 rounds are permitted. Although rare, if after 400 rounds the ground-truth entities have not been discovered, the method is recorded as having taken 400 + d rounds, where d is the number of mentions that would need to be swapped to discover E. We measure the mean number of rounds required to discover E for each method, repeated over 25 trials, and report a paired-t statistic (and corresponding significance level) between each baseline feedback representation style and our proposed FM representation.

We simulate positive and negative feedback using node purity and completeness. A node v ∈ T is pure if ∃i s.t. ∀l ∈ lvs(v), e*(l) = e_i, i.e., all of the mentions stored at the leaves of v correspond to the same ground-truth entity, e_i. A node v ∈ T is complete if ∃i s.t. {l ∈ lvs(T): e*(l) = e_i} ⊆ lvs(v), i.e., v's leaves contain all mentions of some ground-truth entity e_i.

To generate both positive and negative feedback, we sample an intended destination and an intended target. The destination is a particular node in the tree to which the feedback is intended to be similar. The target of the feedback is a different node that the feedback is intended to be merged with or separated from upon insertion. Note that even with full knowledge of the destination and the target (and the ER algorithm), it is difficult to design feedback that will cause the intended tree rearrangements exactly, because other nodes in T may interfere during nearest neighbor search by being very compatible with either the feedback, target, or destination.
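The purity and completeness tests defined above are straightforward to state in code; the sketch below is an illustrative rendering (the ground-truth map e* is available to the simulator only).

def is_pure(node_leaves, estar):
    labels = {estar[l] for l in node_leaves}
    return len(labels) == 1                      # all leaves share one entity

def is_complete(node_leaves, all_leaves, estar):
    members = set(node_leaves)
    labels = {estar[l] for l in node_leaves}
    return any(all(l in members for l in all_leaves if estar[l] == e)
               for e in labels)                  # v holds every mention of some e

estar = {"x0": "e0", "x1": "e0", "x2": "e1"}
print(is_pure(["x0", "x1"], estar))                       # True
print(is_complete(["x0"], ["x0", "x1", "x2"], estar))     # False: x1 missing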
[Figure 5: Detailed and Concise feedback. (a) Positive feedback generation; (b) negative feedback generation. To generate either positive or negative feedback, begin by randomly sampling an inferred entity. Then, sample a destination: the root of a pure subtree that is also a descendant of the sampled entity. The packaging of the feedback contains the attributes at the destination. Finally, sample a target, which is used to construct the feedback's payload. The target is a sampled mention in the Concise setting, or the largest pure ancestor of a sampled mention in the Detailed setting.]

Positive feedback is constructed with the intention of merging two nodes in the tree via a graft. To generate positive feedback, sample a node, r, that is the root of a pure and incomplete subtree and whose parent is impure; this node is the destination of the feedback. Then, randomly select a mention, x, that is of the same ground-truth entity as the leaves of r, but is not a descendant of r. If constructing Concise feedback, x is the target of the feedback; if constructing Detailed feedback, traverse the ancestors of x until s, the first ancestor of x whose parent is impure. The node s becomes the target of the feedback. See Figure 5a for a visual illustration.

Negative feedback is constructed with the intention of splitting an inferred entity. We simulate negative feedback by randomly sampling an impure inferred entity (i.e., subtree) and finding its root, r. We construct the destination of the feedback by randomly sampling a mention x ∈ lvs(r) and finding s, the ancestor of x closest to the root of T that is pure. If constructing Concise feedback, sample a mention x′ ∈ lvs(r) \ lvs(s) to be the target; if constructing Detailed feedback, traverse the ancestors of x′ until s′, the ancestor of x′ closest to the root of T that is pure, and s′ becomes the target. See Figure 5b for a visual illustration.

Integrating user feedback under identity uncertainty has not been the subject of significant study. Therefore, we propose and compare the following baseline feedback representations:

1. Feedback Mentions (FM): the approach advocated in this work. Feedback is constructed with packaging and payload attribute maps and is integrated with existing mentions via the OG algorithm.

2. Packaging Mentions (pack): similar to our approach, but the feedback mentions have no payloads. All attributes that would have been included in a payload are instead added to the corresponding packaging.

3. Hard Assignment (assign): generate feedback with both packaging and payload.
But then, find the node v ∈ T to which the feedback would have been made a sibling by the OG algorithm and permanently assign the feedback to v. If v is an internal node and any of its leaves are ever moved (e.g., by a graft) such that they are no longer descendants of v, remove and delete all feedback assigned to v, since, because of identity uncertainty, it is unclear whether the feedback was intended to apply to the moved mentions.

4. Hard Mention Assignment (assign-m): similar to the assign approach, but the feedback must be assigned to a leaf in T. Since mentions are atomic (rather than ephemeral, like inferred entities), the assigned feedback is never deleted.

Our first experiment resembles a scenario in which users interact with a KB of scientists and provide feedback with respect to the KB's belief about a scientist's expertise. The expertise of an inferred entity is represented as a bag of key phrases drawn from the titles of its underlying mentions. Users supply missing keywords and identify incorrect keywords. In this experiment, the packaging contains the set of attributes at the sampled destination and the payload contains the keywords at the target (generated from mention titles). Importantly, expertise key phrases are a shared attribute, that is, multiple ground-truth entities exhibit some of the same expertise. Example Rexa data and simulated user feedback are shown in Figure 6.

[Figure 6: Example feedback is shown for a Concise target. The packaging and payload for an authorship FM would contain the two titles mentioned, with positive and negative weights respectively.]

6.5 Authorship Feedback. Our second experiment resembles the scenario in which a user browses a KB of scientist profiles, similar to Google Scholar, and identifies incorrectly assigned and missing publications. Similar to the first experiment, the packaging contains attributes mapping to positive weights in the sampled destination. However, here, payloads contain titles stored in the sampled targets. Note that publication authorship is not a shared attribute, i.e., no two ground-truth entities in the same canopy have collaborated on any publication.

Tables 1a and 1b contain the results of the expertise and title experiments, respectively. Each table reports the paired t-statistic between each baseline method and our proposed approach (FM), under detailed and concise feedback generation schemes, with respect to the number of pieces of feedback required to discover the ground-truth partition of the mentions. Each row represents a canopy in which the experiment is performed, and each column corresponds to a baseline method and feedback generation setting. Each cell contains the difference between the mean number of rounds required by the FM approach and a baseline approach to discover the ground-truth partition (higher is better). Positive numbers are bolded; asterisks (*) indicate statistical significance (p < 0.05) and two asterisks (**) indicate statistical significance (p < 0.01). Rows are omitted if the initial partition of the mentions, constructed by the OG algorithm and subject to no user feedback, is correct.

The paired-t statistics compare our proposed feedback representation (FM) to the three baseline feedback representations. We find that FM outperforms pack in both the detailed and concise settings of Experiment I on all but two of the canopies. In 7 out of 14 canopies, the results are statistically significant. These results underscore the importance of using only certain attributes (stored in the packaging) during the initial nearest neighbor search.
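One way to implement the assign baseline's bookkeeping, as we read the description above, is sketched below; the storage layout and method names are hypothetical, not from the paper.

class AssignStore:
    def __init__(self):
        self.pinned = {}                 # node id -> (feedback list, leaf set)

    def assign(self, node_id, leaf_ids, feedback):
        fb, _ = self.pinned.setdefault(node_id, ([], frozenset(leaf_ids)))
        fb.append(feedback)

    def on_tree_change(self, node_id, current_leaf_ids):
        entry = self.pinned.get(node_id)
        if entry and not entry[1] <= frozenset(current_leaf_ids):
            del self.pinned[node_id]     # a pinned leaf moved: drop the feedback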
We hypothesize that storing shared attributes in the payload is especially important because otherwise they can interfere with initial routing. When feedback is made with respect to attributes that are not shared, as in Experiment II, separating packaging and payload is less important. This is evidenced by the pack approach slightly outperforming FMs in the detailed setting, but never significantly. FMs generally outperform pack in the concise setting. We hypothesize that this is a result of better initial placement of the feedback in the tree by the OG algorithm.

In comparing FM and assign, we find that our proposed approach typically performs better in Experiment II while the baseline performs better in Experiment I. We note that the feedback in Experiment I is more ambiguous than in Experiment II (because expertise is a shared attribute). We hypothesize that assign's better performance in Experiment I is due to the baseline's approach of deleting feedback to mitigate errors caused by identity uncertainty with respect to user feedback. We note that this agrees with the observation that FM generally outperforms assign-m in both experiments, given that assign-m is similar to the assign strategy but never deletes feedback.

[Table 1: Paired-t statistic. Each cell reports the difference in the mean number of feedback rounds required to discover the ground-truth entities over 25 runs between a baseline, denoted by the column heading, and our proposed approach (FM). Positive numbers indicate that FM requires fewer rounds of feedback than its competitor (larger numbers are better). Two asterisks (**) indicate that the statistic is significant at the 0.01 level; one asterisk indicates significance at the 0.05 level. The mcguire j canopy is excluded from Tables 1a and 1b, and the robinson h canopy is excluded from Table 1b, since in these canopies either 0 or 1 edits are required to discover the ground-truth entities across baselines.]

Effective utilization of user feedback has been the subject of significant study in the context of KB construction. Early work, like NELL, primarily enlists humans for labeling data, which are used to train downstream models BID2. Other work has used active learning in training relation extraction models BID1. Another approach, employed by the DeepDive system, asks humans to identify relevant features by writing feature extraction rules in support of KB construction BID25 BID0. The study of leveraging user feedback in ER has primarily focused on the solicitation of pairwise feedback. For example, given a set of mention pairs, the CrowdER system automatically prunes the set of pairs that are highly unlikely to be coreferent, and then constructs crowdsourcing HITs to collect binary labels for the remaining pairs BID29. In other similar work, humans are asked to identify matching mentions across databases in data integration BID16. Recent work studies online ER with an oracle, in which the goal is to design efficient strategies for soliciting humans for pairwise constraints among mentions BID28 BID18 BID16.

Recent work in author coreference also involves humans-in-the-loop BID36. This work discusses both pairwise constraints as well as identity constraints. Unlike our work, their identity-level feedback is treated as a collection of pairwise constraints. As we point out, feedback that can be reduced to a set of pairwise constraints is insufficient for general KB feedback, as pairwise feedback is only designed for correcting errors in ER (and not general KB errors).
Similarly, many examples of user feedback are inexpressible using pairwise constraints. The OG algorithm is closely related to the recently proposed clustering algorithm GRINCH BID22, which also uses a graft procedure in an incrementally built hierarchical clustering. Unlike GRINCH, OG uses a threshold τ to determine when tree rearrangements are made and to maintain the current set of inferred entities. The most closely related work to ours is a preliminary study of incorporating user feedback in the context of data integration BID34 BID32. In this work, users supply pairs of mention-like records that possess either should-link or should-not-link factors, which either softly repel the pair or encourage their merger.

This work presents a framework for reasoning about user feedback under identity uncertainty during KB construction. We advocate representing user feedback as feedback mentions that participate in ER alongside standard mentions. Our feedback mentions are endowed with a packaging, used to identify similar mentions during ER, and a payload, used to add missing attributes to inferred entities, correct mistakes, and influence future ER decisions. We give a hierarchical model of inferred entities and present the OG algorithm for performing online ER amongst standard and feedback mentions. In experiments, we show that our approach often outperforms baseline approaches in terms of efficiency with respect to recovering the ground-truth partition in ER. Our work is a foundational step in addressing a significant and under-explored problem in automatic KB construction whose solution could improve the accuracy and efficacy of integrating expressive user feedback with KB content.
This paper develops a framework for integrating user feedback under identity uncertainty in knowledge bases.
1,497
scitldr
Machine learning algorithms are vulnerable to poisoning attacks: an adversary can inject malicious points into the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers, generating adversarial training examples, i.e. samples that look like genuine data points but degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to naturally model the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that are more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack at compromising machine learning classifiers, including deep networks.

Despite the advancements and the benefits of machine learning, it has been shown that learning algorithms are vulnerable and can be the target of attackers, who can gain a significant advantage by exploiting these vulnerabilities. At training time, learning algorithms are vulnerable to poisoning attacks, where small fractions of malicious points injected into the training set can subvert the learning process and degrade the performance of the system in an indiscriminate or targeted way. Data poisoning is one of the most relevant and emerging security threats in applications that rely upon the collection of large amounts of data in the wild. Some applications rely on data from users' feedback or from untrusted sources of information, which may collude towards the same malicious goal. For example, in IoT environments sensors can be compromised and adversaries can craft coordinated attacks, manipulating the measurements of neighbouring sensors while evading detection. In many applications curation of the whole training dataset is not possible, exposing machine learning systems to poisoning attacks.

In the research literature, optimal poisoning attack strategies have been proposed against different machine learning algorithms (e.g., Muñoz-González et al., 2017), allowing their performance to be assessed in worst-case scenarios. These attacks can be modelled as a bi-level optimization problem, where the outer objective represents the attacker's goal and the inner objective corresponds to the training of the learning algorithm with the poisoned dataset. Solving these bi-level optimization problems is challenging and can be computationally demanding, especially for generating poisoning points at scale. This limits their applicability against some learning algorithms, such as deep networks, or where the training set is large. In many cases, if no detectability constraints are considered, the generated poisoning points are outliers that can be removed with data filtering (Paudice et al., 2018a). Furthermore, such attacks are not realistic, as real attackers would aim to remain undetected in order to be able to continue subverting the system in the future.
As shown in prior work, detectability constraints for these optimal attack strategies can be modelled; however, they further increase the complexity of the attack, limiting even more the application of these techniques. Taking an entirely different and novel approach, in this paper we propose a poisoning attack strategy against machine learning classifiers with Generative Adversarial Nets (GANs). This allows us to craft poisoning points in a more systematic way, looking for regions of the data distribution where the poisoning points are more influential and, at the same time, difficult to detect. Our proposed scheme, pGAN, consists of three components: generator, discriminator and target classifier. The generator aims to generate poisoning points that maximize the error of the target classifier but minimize the discriminator's ability to distinguish them from genuine data points. The classifier aims to minimize some loss function evaluated on a training dataset that contains a fraction of poisoning points. As in a standard GAN, the problem can be formulated as a minimax game.

pGAN allows us to systematically generate adversarial training examples, which are similar to genuine data points but can degrade the performance of the system when used for training. The use of a generative model allows us to produce poisoning points at scale, enabling poisoning attacks against learning algorithms where the number of training points is large, or in situations where optimal attack strategies with bi-level optimization are intractable or difficult to compute, as can be the case for deep networks. Additionally, our proposed model includes a mechanism to control the detectability of the generated poisoning points. For this, the generator maximizes a convex combination of the losses for the discriminator and the classifier evaluated on the poisoning data points. Our model allows us to control the aggressiveness of the attack through a parameter that controls the weighted sum of the two losses. This induces a trade-off between the effectiveness and the detectability of the attack. In this way, pGAN can be applied for systematic testing of machine learning classifiers at different risk levels.

Our experimental evaluation on synthetic and real datasets shows that pGAN is capable of compromising different machine learning classifiers, bypassing several defence mechanisms, including outlier detection (Paudice et al., 2018a), Sever, PCA-based defences, and label sanitization (Paudice et al., 2018b). We analyse the trade-off between the detectability and the effectiveness of the attack: overly conservative strategies will have a reduced impact on the target classifier but, if the attack is too aggressive, most poisoning points can be detected as outliers.

The first practical poisoning attacks were proposed in the context of spam filtering and anomaly detection. But these attacks do not easily generalize to different learning algorithms. Later work presented a more systematic approach, modelling optimal poisoning attacks against SVMs for binary classification as a bi-level optimization problem, which can be solved by exploiting the Karush-Kuhn-Tucker conditions in the inner problem. A similar approach was proposed for poisoning embedded feature selection methods, including LASSO, ridge regression, and elastic net. Other work proposed a more general framework to model and solve optimal poisoning attacks for convex classifiers, exploiting the implicit function theorem to compute the gradients required to solve the corresponding bi-level optimization problem.
Muñoz-González et al. proposed back-gradient optimization to estimate the gradients required to solve bi-level optimization problems for optimal poisoning attacks against multi-class classifiers. This approach allows a broader range of learning algorithms to be attacked and reduces the computational complexity with respect to previous works. However, all these techniques scale poorly when targeting deep networks trained with a large number of training points, where many poisoning points are needed to compromise even a small fraction of the training dataset.

Previous attacks did not explicitly model appropriate detectability constraints. Thus, the resulting poisoning points can be far from the genuine data distribution and can be easily identified as outliers (Paudice et al., 2018a; b). Follow-up work showed that it is still possible to craft attacks capable of bypassing outlier-detection-based defences with an iterative constrained bi-level optimization problem, where, at each iteration, the constraints change according to the current solution of the bi-level problem. However, the high computational complexity of this attack limits its practical application in many scenarios. A different approach crafts targeted attacks against deep networks by exploiting influence functions; it creates adversarial training examples by learning small perturbations that, when added to some specific genuine training points, change the predictions for a target set of test points. It has also been shown that targeted attacks are possible when the adversary is not in control of the labels of the poisoning points. Other work introduced a poisoning attack with generative models, using autoencoders to generate the malicious points. Although this method is more scalable than attacks based on bi-level optimization, the authors do not provide a mechanism to control the detectability of the poisoning points.

Our model, pGAN, is a GAN-based model with three components (generator, discriminator and target classifier) to systematically generate adversarial training examples. First, we briefly describe the attacker model considered. Then, we introduce the formulation of pGAN and, finally, we provide some practical considerations for its implementation.

The attacker's knowledge of the targeted system depends on different aspects: the learning algorithm, the objective function optimized, the feature set, and the training data. In our case we consider perfect knowledge attacks, where we assume the attacker knows everything about the target system: the training data, the feature set, the loss function and the machine learning model used by the victim. Although unrealistic in most practical scenarios, this assumption allows us to perform a worst-case analysis of the performance of the system under attack. However, our proposed attack strategy also supports limited knowledge, exploiting the transferability property of poisoning attacks (Muñoz-González et al., 2017). Regarding the attacker's capabilities, we consider here a causative attack, where the attacker can manipulate a fraction of the training data to influence the learning algorithm. We assume that the attacker can manipulate all the features to craft the poisoning points as long as the resulting points are within the feasible domain for the distribution of genuine training points. Finally, we also assume that the attacker can control the labels of the injected poisoning points.
In a multi-class classification task, let X ⊆ R^d be the d-dimensional feature space, where data points x are drawn from a distribution p_x(x), and let Y be the space of class labels. The learning algorithm, C, aims to learn the mapping f: X → Y by minimizing a loss function, L_C, evaluated on a set of training points S_tr. The objective of the attacker is to introduce a fraction, λ ∈ [0, 1], of malicious points into S_tr to maximize L_C when evaluated on the poisoned training set.

The Generator, G, aims to generate poisoning points by learning a data distribution that is effective at increasing the error of the target classifier, but that is also close to the distribution of genuine data points, i.e. the generated poisoning points are similar to honest data points in order to evade detection. Thus, G receives some noise z ∼ p_z(z|Y_p) as input and implicitly defines a distribution of poisoning points, p_p(x), which is the distribution of the samples G(z|Y_p) conditioned on Y_p ⊂ Y, the set of target class labels for the attacker.

The Discriminator, D, aims to distinguish between honest training data and the generated poisoning points. It estimates D(x|Y_p), the probability that x came from the genuine data distribution p_x rather than p_p. As in G, the samples used in the discriminator are conditioned on the set of labels Y_p.

The Classifier, C, is representative of the attacked algorithm. In perfect knowledge attacks C can have the same structure as the actual target classifier. For black-box attacks we can exploit attack transferability and use C as a surrogate model that can be somewhat similar to the actual (unknown) classifier. During the training of pGAN, C is fed honest and poisoning training points from p_x and p_p respectively, where the fraction of poisoning points is controlled by a parameter λ ∈ [0, 1].

In contrast to traditional GAN schemes, G in pGAN plays a game against both D and C. This can also be formalized as a minimax game where the maximization problem involves both D and C. Similar to conditional GANs, the objective function for D (which also depends on G) can be written as:

W_D(D, G) = E_{x∼p_x(x|Y_p)} [log D(x|Y_p)] + E_{z∼p_z(z|Y_p)} [log(1 − D(G(z|Y_p)|Y_p))].    (1)

The objective function for C is given by:

W_C(C, G) = −( (1 − λ) E_{x∼p_x(x)} [L_C(x, y)] + λ E_{z∼p_z(z|Y_p)} [L_C(G(z|Y_p), Y_p)] ),    (2)

where λ is the fraction of poisoning points introduced in the training dataset and L_C is the loss function used to train C. Note that the poisoning points in (2) belong to a subset of poisoning class labels Y_p, whereas the genuine points used to train the classifier are from all the classes. The objective in (2) is just the negative loss used to train C, evaluated on a mixture of honest and poisoning points (from the set of classes in Y_p) controlled by λ.

Given (1) and (2), pGAN can then be formulated as the following minimax problem:

min_G max_{D,C}  α W_D(D, G) + (1 − α) W_C(C, G),    (3)

with α ∈ [0, 1]. In this case, the maximization problem can be seen as a multi-objective optimization problem to learn the parameters of both the classifier and the discriminator. Whereas for C and D the objectives are decoupled, the generator optimizes a convex combination of the two objectives in (1) and (2). The parameter α controls the importance of each of the two objective functions towards the global goal. So, for high values of α, the attack points will prioritize evading detection, rendering attacks with (possibly) reduced effectiveness. Note that for α = 1 we have the same minimax game as in a standard conditional GAN. On the other hand, low values of α will result in attacks with a higher impact on the classifier's performance; however, the generated poisoning points will be more detectable by outlier detection systems.
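The following PyTorch sketch is our own hedged rendering of objectives (1)-(3) with toy placeholder models and random data; it is not the authors' implementation.

import torch
import torch.nn.functional as F

def d_objective(D, x_real_yp, x_fake):
    # (1): log D(x|Y_p) on genuine points of the poisoning classes,
    #      plus log(1 - D(G(z)|Y_p)) on generated points
    return (torch.log(D(x_real_yp)).mean()
            + torch.log(1.0 - D(x_fake)).mean())

def c_objective(C, x_clean, y_clean, x_fake, y_p, lam):
    # (2): negative classification loss on the lambda-mixture of data
    return -((1.0 - lam) * F.cross_entropy(C(x_clean), y_clean)
             + lam * F.cross_entropy(C(x_fake), y_p))

def g_loss(alpha, d_obj, c_obj):
    # (3): G minimizes the convex combination that D and C maximize
    return alpha * d_obj + (1.0 - alpha) * c_obj

# toy usage with random data (2 features, 3 classes, poisoning label 2)
D = torch.nn.Sequential(torch.nn.Linear(2, 1), torch.nn.Sigmoid())
C = torch.nn.Linear(2, 3)
x_real_yp, x_fake = torch.randn(8, 2), torch.randn(8, 2)
x_clean, y_clean = torch.randn(16, 2), torch.randint(0, 3, (16,))
y_p = torch.full((8,), 2)
d_obj = d_objective(D, x_real_yp, x_fake)
c_obj = c_objective(C, x_clean, y_clean, x_fake, y_p, lam=0.9)
print(g_loss(0.3, d_obj, c_obj))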
For α = 0, pGAN does not consider any detectability constraint and the generated poisoning points are only constrained by the output activation functions in G. In this case pGAN can serve as a suboptimal approximation of the optimal attack strategies in, e.g., Muñoz-González et al. (2017), where no detectability constraints are imposed.

Similar to standard GAN training, we train pGAN following a coordinated gradient-based strategy to solve the minimax problem in (3). We sequentially update the parameters of the three components using mini-batch stochastic gradient descent/ascent. For the generator and the discriminator, data points are sampled from the conditional distribution on the subset of poisoning labels Y_p. For the classifier, honest data points are sampled from the data distribution including all the classes. A different number of iterations can be used for updating the parameters of the three blocks. The details of the training algorithm are provided in Appendix A.

The formulation of pGAN in (3) allows us to perform both error-generic and error-specific poisoning attacks (Muñoz-González et al., 2017), which aim to increase the error of the classifier in an indiscriminate or a specific way. However, the nature of these errors can be limited by Y_p, i.e. the classes for which the attacker can inject poisoning points. To generate targeted attacks or to produce more specific types of errors in the system, we need to use a surrogate model for the target classifier in pGAN, including only the classes or samples considered in the attacker's goal. For example, if the attacker wants to inject poisoning points labelled as i to increase the classification error for class j, we can use a binary classifier in pGAN considering only classes i and j, where the generator aims to produce samples from class i.

As in other GAN schemes, pGAN can be difficult to train and can be prone to mode collapse. To mitigate these problems, in our experiments we used some of the standard techniques proposed to improve GAN training, such as dropout and batch normalization. We also applied one-sided label smoothing, not only for the labels in the discriminator but also for the labels of the genuine points in the classifier. Following standard GAN practice, to avoid small gradients for G from the discriminator's loss function, especially in the early stages where the quality of the samples produced by G is poor, we train G to maximize log(D(G(z|Y_p))) rather than minimizing log(1 − D(G(z|Y_p))).

In contrast to standard GANs, in pGAN the learned distribution of poisoning points, p_p, is expected to be different from the distribution of genuine points, p_x. Thus, the accuracy of the discriminator in pGAN will always be greater than 0.5, so the stopping criterion for training pGAN cannot be based on the discriminator's accuracy. We need to find a saddle point where the objectives in (1) and (2) are maximized w.r.t. D and C respectively (i.e. pGAN finds local maxima) and the combined objective in (3) is minimized w.r.t. G (i.e. pGAN finds a local minimum).

Finally, the value of λ plays an important role in the training of pGAN. If λ is small, the gradients for G from the classifier's loss in (2) can be very small compared to the gradients from the discriminator's loss in (1). Thus, the generator focuses more on evading detection by the discriminator rather than on increasing the error of the target classifier, resulting in blunt attacks.
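Before turning to the role of λ in practice, the coordinated update scheme described above can be sketched as follows, reusing the objective helpers from the previous snippet. The sample callback, the single-update-per-component schedule, and the sign conventions for G's classifier term are our own assumptions.

import torch
import torch.nn.functional as F

def pgan_step(G, D, C, opt_g, opt_d, opt_c, sample, alpha, lam, z_dim=16):
    x_yp = sample("genuine_yp")              # genuine points of the classes Y_p
    x_all, y_all = sample("genuine_all")     # genuine points from all classes
    y_p = sample("labels_yp")                # labels assigned to poisoning points
    x_fake = G(torch.randn(len(y_p), z_dim)).detach()
    # 1) ascend D on objective (1)
    opt_d.zero_grad()
    (-d_objective(D, x_yp, x_fake)).backward()
    opt_d.step()
    # 2) ascend C on objective (2): the lambda-mixture of clean and poisoned data
    opt_c.zero_grad()
    (-c_objective(C, x_all, y_all, x_fake, y_p, lam)).backward()
    opt_c.step()
    # 3) descend G on (3); log D(G(z)) replaces log(1 - D(G(z))) for gradients,
    #    and the classifier loss on poisoned points is maximized (minus sign)
    x_fake = G(torch.randn(len(y_p), z_dim))
    g_obj = (alpha * (-torch.log(D(x_fake)).mean())
             - (1 - alpha) * lam * F.cross_entropy(C(x_fake), y_p))
    opt_g.zero_grad()
    g_obj.backward()
    opt_g.step()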
Even if the expected fraction of poisoning points to be injected into the target system is small, larger values of λ are therefore preferred to generate more successful poisoning attacks. In our experiments in Sect. 4 we analyse the effectiveness of the attack as a function of λ.

To illustrate how pGAN works, we first performed a synthetic experiment with a binary classification problem, generating two bivariate Gaussian distributions that slightly overlap. We trained pGAN for different values of α with 500 training points from each Gaussian distribution. We targeted a logistic regression classifier with λ = 0.8. In Fig. 1 we show the distribution of poisoning (red dots) and genuine (green and blue dots) data points. The poisoning points are labelled as the green data points. Thus, G aims to generate malicious points similar to the green ones (i.e. D aims to discriminate between red and green data points). For α = 1 we obtain the same result as in a standard GAN, so that the distribution of red points matches the distribution of the green ones. But, as we decrease the value of α, the distribution of red points shifts towards the region where the green and blue distributions overlap. We can observe that for α = 0.2 the poisoning points are still close to genuine green points, i.e. we cannot consider the red points as outliers in most cases. For α = 0 the generator has no detectability constraints, focusing only on increasing the error of the classifier. It is interesting to observe that, in this case, pGAN does not produce points interpolating the distribution of the two genuine classes; rather, the distribution learned by the generator is far from the region where the distributions of the blue and green points overlap. This suggests that for α = 0 pGAN is not just producing a simple interpolation between the two classes, but G looks for regions close to the decision boundary where the classifier is weaker. The complete details of the experiment and the effect on the decision boundary after injecting the poisoning points can be found in Appendix B.

We performed our experimental evaluation on the MNIST, Fashion-MNIST (FMNIST), and CIFAR-10 datasets, using Deep Neural Networks (DNNs) for MNIST and FMNIST and Convolutional Neural Networks (CNNs) for CIFAR. All details about the datasets and the experimental settings are described in Appendix C.

To test the effectiveness of pGAN at generating stealthy poisoning attacks, we applied the defence strategy proposed by Paudice et al. (2018a): we assumed that the defender has a fraction of trusted data points that can be used to train one outlier detector for each class in the classification problem. Thus, we pre-filter the (genuine and malicious) training data points with these outlier detectors before training. As in Paudice et al. (2018a), we used a distance-based anomaly detector, which has been shown to be effective against optimal poisoning attacks (Muñoz-González et al., 2017). The outlierness score is computed based on the Euclidean distance between the tested data point and its k nearest neighbours from a subset of s points, which are sampled without replacement from the set of points used to train the outlier detector. In our experiments we used the same values proposed in Paudice et al. (2018a): k = 5 for the number of neighbours and s = 20 for the number of training points to be sampled. We set the threshold of the outlier detector so that the α-percentile is 0.95.
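A sketch of this distance-based detector is given below (NumPy). The exact scoring and thresholding details in Paudice et al. (2018a) may differ, so treat this as an approximation of the idea rather than a faithful reimplementation.

import numpy as np

def fit_detector(trusted, k=5, s=20, percentile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    ref = trusted[rng.choice(len(trusted), size=s, replace=False)]
    def score(x):
        d = np.linalg.norm(ref - x, axis=1)
        return np.sort(d)[:k].mean()             # mean distance to k nearest
    thr = np.quantile([score(x) for x in trusted], percentile)
    return lambda x: score(x) > thr              # True -> flagged as outlier

trusted = np.random.default_rng(1).normal(size=(500, 2))
is_outlier = fit_detector(trusted)
print(is_outlier(np.array([0.1, -0.2])), is_outlier(np.array([8.0, 8.0])))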
The α-percentile controls the fraction of genuine points that is expected to be retained after applying the outlier detector (i.e. 95% in our case). To provide a better understanding of the behaviour of pGAN, we first trained and tested our attack targeting binary classifiers. For this, in MNIST we selected digits 3 and 5, for FMNIST we picked the classes sneaker and ankle boot and, for CIFAR, the classes automobile and truck. The poisoning points were labelled as 5, ankle boot and truck, respectively.

First, we analysed the effectiveness of the attack as a function of α. For each dataset we trained 5 different generators for each value of α explored: [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]. We set λ = 0.9 · Pr(Y_p), where Pr(Y_p) is the prior probability of the samples from the poisoning class, Y_p (i.e. digit 5, ankle boot, and truck). For testing, we used 500 (genuine) samples per class to train the outlier detectors and 500 samples per class to train a separate classifier. We evaluated the effectiveness of the attack varying the fraction of poisoning points, exploring values in the range 0% to 40%. To preserve the ratio between classes, we substitute genuine samples from the poisoning class with the malicious points generated by pGAN (rather than adding the poisoning points to the given training dataset). For each pGAN generator and for each value of the fraction of poisoning points explored, we did 10 independent runs with independent splits for the outlier detector and classifier training sets.

In Fig. 2 we show the test classification error for MNIST and FMNIST as a function of the fraction of poisoning points, averaged over the 5 generators and the 10 runs for each generator. In MNIST, the attack is most effective for α = 0.1, increasing the error from 2.5% when there is no attack to more than 12% when 40% of the training dataset is compromised. For larger values of α the effect of the attack is more limited. For α = 0, i.e. when no detectability constraints are considered, the effect of the attack is more limited than for α = 0.1, although in this case the points capable of bypassing the outlier detector can still produce some degradation in the target system. Similarly, for FMNIST the attack with α = 0.1 produces more effective poisoning data points, although the overall effect of the attack is more limited compared to MNIST. For α = 0 the attack is mitigated by the outlier detector in most cases, and has only some effectiveness for larger fractions of poisoning points. It is interesting to observe that, although the baseline error (i.e. when there is no attack) is lower for MNIST (2.5% vs 4.75% in FMNIST), it is more difficult to poison FMNIST. This suggests that the impact of the attack depends not only on the separation between the two classes but also on the topology of the classification problem.

In Fig. 3 we show some of the poisoning examples generated by pGAN (with α = 0.3). For MNIST, the malicious data points (labelled as 5) exhibit features from both digits 3 and 5. In some cases, although the poisoning digits are similar to a 3, it is difficult to automatically detect these points as outliers, as many of the pixels that represent these malicious digits follow a similar pattern compared to genuine 5s, i.e. they differ only in the upper trace of the generated digits. In other cases, the malicious digits look like a 5 with some characteristics that make them closer to 3s.
In the case of FMNIST, the samples generated by pGAN (labelled as ankle boots) can be seen as an interpolation of the two classes. The malicious images look like high-top sneakers or low-top ankle boots. Thus, it is difficult to detect them as malicious points, as they clearly resemble some of the genuine ankle boots in the training set. Indeed, for some of the genuine images it is difficult to tell whether they show a sneaker or an ankle boot. More examples for different values of α are shown in Appendix D.

The results in Fig. 4 (left) show that for α = 0, i.e. with no detectability constraints, the attack is significantly more effective than for the other values of α explored. In this case, we observed that the outlier detector can be easily bypassed in this dataset, as the number of features and the complexity of the classification task are higher compared to MNIST and FMNIST. This exposes some of the limitations of existing defences to mitigate data poisoning attacks. We can observe that in this case the test classification error increases from 12% to 24% after injecting 40% of poisoning points. The increase is more moderate for larger values of α. In Fig. 4 (right) we show some of the examples generated by pGAN with α = 0.3, which exhibit characteristics from the two classes: cars and trucks.

Outlier detection: In Fig. 5 (centre) we show the fraction of data points pre-filtered by the outlier detectors in the MNIST dataset as a function of α. We explored two values for the α-percentile (the threshold of the detectors): 0.90 and 0.95. As expected, the fraction of rejected genuine data points is, on average, 10% and 5% respectively. However, the fraction of rejected malicious points for α ≥ 0.1 is smaller than for the genuine points for both detectors. This is because the generator pays less attention to samples in low-density regions of the genuine data distribution, so the generated poisoning points are conservative. For α = 0 the fraction of rejected malicious points is also not very high. This can be due to the similarity between the two classes: even if the generated poisoning points, labelled as 5, look like a 3, they are still close to the distribution of genuine 5s when targeting a non-linear classifier.

Sensitivity w.r.t. λ: To analyse the sensitivity of pGAN w.r.t. λ, the fraction of poisoning points used for C, we performed an experiment on the MNIST dataset (digits 3 and 5). We set α = 0.2 and explored different values of λ′ = λ/Pr(Y_p), ranging from 0.1 to 1. With the same experimental settings as before, we trained 5 generators for each value of λ′. We also tested the effectiveness of the attack on a separate classifier, with 10 independent runs for each generator and value of λ′ explored. For the attacks we injected 20% of poisoning points. In Fig. 5 (left) we show the averaged classification error on the test dataset as a function of λ′. We can observe that, for small λ′, the effect of the attack is more limited. The reason is that, when training pGAN, the effect of the poisoning points on C is very reduced, so the gradients of (2) w.r.t. the parameters of G can be very small compared to the gradients coming from the discriminator's objective (1). Then, G focuses more on optimizing the discriminator's objective. In this case, even for λ′ = 1 the attack is still effective, only slightly decreasing the error rate compared to λ′ = 0.9.
Comparison with existing poisoning attacks in the research literature is challenging: optimal poisoning attacks as in Muñoz-González et al. are computationally very expensive for the size of the networks and datasets used in our experiments in Fig. 2. This is even worse if detectability constraints are also considered. On the other hand, comparing with standard label flipping is an unfair comparison for pGAN, as label flipping does not consider detectability constraints. In other words, we can expect label flipping to be more effective than pGAN when no defence is applied, but this attack is clearly more detectable (Paudice et al., 2018b). To provide a fairer comparison, we implemented a heuristic for generating label flipping attacks with detectability constraints: we flipped the labels of the training samples from the target class that are closest to the source class. For this, we computed the distance of the training points from the target class to the mean of the training points of the source class, and then flipped the labels of the closest points, so that the malicious points should be more difficult to detect (a sketch of this heuristic is given at the end of this section).

In Fig. 6 we show the comparison of this label flipping strategy with pGAN (α = 0.1) for MNIST, using the same settings as in the experiment in Fig. 2. We can observe that pGAN is more effective than the label flipping attack and that the effect of the two attacks is different. Label flipping increases both the false positive and false negative rates of the target classifier, whereas pGAN aims only to increase the false positive rate, i.e. pGAN produces an error-specific attack, giving the attacker more control over the kind of errors produced in the classifier.

Attack effectiveness as a function of the number of training points: In Fig. 5 (right) we show how the number of training data points impacts the effect of the attack. For this, we trained 5 pGAN generators with α = 0.1 and tested them on classifiers with different numbers of training points, ranging from 1,000 to 10,000, injecting 20% of poisoning points. For each generator and value of the number of training points explored, we did 5 independent runs. We also used 500 samples per class to train the outlier detectors. The results in Fig. 5 (right) show that the difference in performance between the poisoned and the clean classifier reduces as the number of training samples increases. This is expected, as the stability of the learning algorithm increases with the number of training data points, limiting the ability of the attacker to perform indiscriminate poisoning attacks. This does not mean that learning algorithms trained with large datasets are not vulnerable to data poisoning, as attackers can still be very successful at performing targeted attacks, focusing on increasing the error on particular instances or creating backdoors. In these scenarios we can also use pGAN to generate more targeted attacks, using a surrogate model for the classifier that includes the subset of samples that the attacker aims to misclassify.

Finally, we performed an error-specific attack on MNIST using the 10 classes. In this case the objective of the attacker is to increase the error of digit 3 being misclassified as a 5. For this, we trained pGAN using a surrogate classifier including only digits 3 and 5, and then tested against a multi-class classifier trained on 10,000 data points (see the details of the architecture used in Appendix C). For pGAN we used α = 0.1 and λ = 0.9 · Pr(Y_p). We varied the fraction of poisoning points, exploring values in the range 0% to 4%.
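The constrained label-flipping heuristic referenced earlier in this section can be sketched as follows (NumPy); function and parameter names are illustrative, not from the paper.

import numpy as np

def constrained_label_flip(X, y, flip_from, flip_to, fraction):
    """Flip labels of the `fraction` of class `flip_from` closest to the mean
    of class `flip_to`, so the flipped points are harder to detect."""
    mu_to = X[y == flip_to].mean(axis=0)
    idx = np.where(y == flip_from)[0]
    dists = np.linalg.norm(X[idx] - mu_to, axis=1)
    n_flip = int(fraction * len(idx))
    flipped = y.copy()
    flipped[idx[np.argsort(dists)[:n_flip]]] = flip_to
    return flipped

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
y_poisoned = constrained_label_flip(X, y, flip_from=0, flip_to=1, fraction=0.2)
print((y_poisoned != y).sum())  # 20 labels flipped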
Attack effectiveness as a function of the number of training points: In Fig. 5 (right) we show how the number of training data points impacts the effect of the attack. For this, we trained 5 pGAN generators with α = 0.1 and tested them on classifiers with different numbers of training points, ranging from 1,000 to 10,000, injecting 20% of poisoning points. For each generator and each value of the number of training points explored we did 5 independent runs. We also used 500 samples per class to train the outlier detectors. The results in Fig. 5 (right) show that the difference in performance between the poisoned and the clean classifier reduces as the number of training samples increases. This is expected, as the stability of the learning algorithm increases with the number of training data points, limiting the ability of the attacker to perform indiscriminate poisoning attacks. This does not mean that learning algorithms trained with large datasets are not vulnerable to data poisoning, as attackers can still be very successful at performing targeted attacks, focusing on increasing the error on particular instances or creating backdoors. In these scenarios we can also use pGAN to generate more targeted attacks, using a surrogate model for the classifier that includes the subset of samples that the attacker aims to misclassify. Finally, we performed an error-specific attack on MNIST using the 10 classes. In this case the objective of the attacker is to increase the error of digit 3 being misclassified as a 5. For this, we trained pGAN using a surrogate classifier including only digits 3 and 5, and then tested against a multi-class classifier trained on 10,000 data points (see the details of the architecture used in Appendix C). For pGAN we used α = 0.1 and λ = 0.9 · Pr(Y_p). We varied the fraction of poisoning points, exploring values in the range 0-4%. The results in Fig. 7 (left) show that, although the overall test classification error only increases slightly, the test error of digit 3 being classified as a 5 is significantly affected by the attack, increasing from 1.1%, when there is no attack, to 13.1% with just 4% of poisoning points. In Fig. 7 (right) we show the average difference in the confusion matrix evaluated on the clean dataset and on the poisoned dataset (4% poisoning). We can observe that the detection rate of digit 3 decreases by up to 11%, and that this decrease is due to an increase of 12% in the error of digit 3 being incorrectly classified as a 5. This experiment supports the usefulness of pGAN for generating targeted attacks, showing that even with a small fraction of poisoning points we can craft successful targeted (error-specific) attacks. Apart from the outlier detection defence described previously, we also tested pGAN against 3 different types of defences. First, we considered Sever, a meta-algorithm for robust optimization that aims to remove outliers and points that can have a negative impact on the learning algorithm at training time. We followed the settings described by its authors, applying the algorithm separately for each class. We set the value of ε, the parameter that controls the fraction of points to be removed, to 0.1 (results for different values of ε are also shown in Appendix E.1). For the sake of computational tractability we applied Sever using the parameters in the last layer of the target classifier, as these parameters are the most influenced by the poisoning attack. Secondly, we considered a defence which relies on Principal Component Analysis (PCA) to detect poisoning points. This model assumes that the clean data lies in a low-rank subspace and that poisoning points will have a large component outside this subspace. As in Sever, this defence has a parameter ε that controls the fraction of points to be rejected. In our experiment we set ε = 0.05. Further results for different values of ε are shown in Appendix E.2. Finally, we also tested the pGAN attack against the label sanitization technique introduced by Paudice et al. (2018b), which relabels training data points according to a KNN-based algorithm, so that a point is relabelled if at least n_k of its neighbours have the same label, different from the label of the evaluated point (a sketch of the rule is given below). Following similar settings as in Paudice et al. (2018b), we set k = 3 for KNN and n_k = 2. Using the same settings as in our previous experiments, we tested the 3 defences against pGAN attacks with α = 0.1 for MNIST and FMNIST. This value of α produced the most effective attacks against the outlier detection defences, as shown in Fig. 2. The results in Fig. 8 show that pGAN successfully bypasses all the defences in the two datasets. We can observe that Sever performs worse than the outlier detector defence for MNIST, although the degradation with the increasing number of poisoning points follows a similar trend. In the case of FMNIST, Sever is more effective compared to the other defences when the number of poisoning points is small, but in this case the algorithm degrades faster when we increase the fraction of malicious points. In Appendix E.1 we show that pGAN produces a similar effect on Sever for different values of ε. The PCA-based defence performs worse than the outlier detector in MNIST, whereas in FMNIST the difference between the two defences is very small. In both cases, both algorithms degrade in the same way as we increase the number of poisoning points. In Appendix E.2 we show that for larger values of ε this defence performs worse.
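The KNN-based relabelling rule is simple enough to state in a few lines. Below is an illustrative sketch with scikit-learn, using the k = 3, n_k = 2 setting quoted above; this is our own reconstruction of the rule as described, not the authors' code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_label_sanitization(X, y, k=3, n_k=2):
    """Relabel a point when at least n_k of its k nearest neighbours agree
    on a label different from its own."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is the point itself
    y_clean = y.copy()
    for i in range(len(y)):
        neigh = y[idx[i, 1:]]                 # labels of the k neighbours
        values, counts = np.unique(neigh, return_counts=True)
        maj = values[np.argmax(counts)]
        if maj != y[i] and counts.max() >= n_k:
            y_clean[i] = maj                  # relabel towards the local majority
    return y_clean
```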
Finally, in Fig. 8 (right) we can observe that label sanitization completely fails to defend against our attack in MNIST. As pGAN produces poisoning points that are correlated, the KNN-based algorithm proposed to do the relabelling is not capable of detecting the poisoning points. Furthermore, some of the genuine points from the target class are incorrectly relabelled, making the problem even worse. Similar results are obtained for FMNIST, as shown in Appendix E.3. The results in Fig. 8, along with the previous results in Figs. 2-4, show not only that pGAN can generate successful poisoning points at scale even under detectability constraints, but also that pGAN is successful against state-of-the-art defences published in the literature. The pGAN approach we introduce in this paper allows us to naturally model attackers with different levels of aggressiveness and the effect of different detectability constraints on the robustness of the algorithms. This allows us to a) study the characteristics of the attacks and identify regions of the data distributions where poisoning points are more influential, yet more difficult to detect, b) systematically generate, in an efficient and scalable way, attacks that correspond to different types of threats, and c) study the effect of mitigation measures such as improving detectability. In addition to studying the tradeoffs involved in the adversarial model, pGAN also allows us to naturally study the tradeoffs between performance and robustness of the system as the fraction of poisoning points increases. Our experimental evaluation shows that pGAN effectively bypasses different strategies to mitigate poisoning attacks, including outlier detection, label sanitization, PCA-based defences and the Sever algorithm. We train pGAN following a coordinated gradient-based strategy, sequentially updating the parameters of the three components using mini-batch stochastic gradient descent/ascent. The procedure is described in Algorithm 1. For the generator and the discriminator, data points are sampled from the conditional distribution on the subset of poisoning labels Y_p. For the classifier, honest data points are sampled from the data distribution including all the classes. We alternate the training of the three components, with i, j and k update steps for the discriminator, classifier and generator respectively. In practice, we choose i, j > k, i.e. we update the discriminator and the classifier more often. For example, in our experiments we set i, j = 4 and k = 1.

Algorithm 1 (pGAN training):
for the number of training iterations do
    for i steps do
        sample a mini-batch of m noise samples {z_n | Y_p}_{n=1}^m from p_z(z | Y_p)
        get a mini-batch of m training samples {x_n}_{n=1}^m from p_x(x | Y_p)
        update the discriminator by ascending its stochastic gradient
    end for
    for j steps do
        sample a mini-batch of m noise samples {z_n | Y_p}_{n=1}^m from p_z(z | Y_p)
        get a mini-batch of m training samples {x_n}_{n=1}^m from p_x(x)
        update the classifier by ascending its stochastic gradient
    end for
    for k steps do
        sample a mini-batch of m noise samples {z_n | Y_p}_{n=1}^m from p_z(z | Y_p)
        update the generator by descending its stochastic gradient
    end for
end for
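A skeleton of this alternating loop is sketched below. The helper functions (`sample_noise`, `sample_data` and the three `*_step` routines) and the constant `POISON_LABELS` are placeholders for the surrounding training code, not names from the paper.

```python
def train_pgan(generator, discriminator, classifier,
               n_iters, i=4, j=4, k=1, m=200):
    """Coordinated mini-batch updates of Algorithm 1: i discriminator
    steps, then j classifier steps, then k generator steps."""
    for _ in range(n_iters):
        for _ in range(i):                           # discriminator updates
            z = sample_noise(m, cond=POISON_LABELS)  # z ~ p_z(z | Y_p)
            x = sample_data(m, cond=POISON_LABELS)   # x ~ p_x(x | Y_p)
            discriminator_step(discriminator, generator, z, x)   # ascend gradient
        for _ in range(j):                           # classifier updates
            z = sample_noise(m, cond=POISON_LABELS)
            x = sample_data(m)                       # honest data, all classes
            classifier_step(classifier, generator, z, x)         # ascend gradient
        for _ in range(k):                           # generator updates
            z = sample_noise(m, cond=POISON_LABELS)
            generator_step(generator, discriminator, classifier, z)  # descend gradient
```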
For the synthetic experiment shown in the paper we sample our training and test data points from two bivariate Gaussian distributions, N(µ_0, Σ_0) and N(µ_1, Σ_1). In Fig. 9 we show the effect of the poisoning attack on the decision boundary. For testing pGAN we trained a separate logistic regression classifier with 40 genuine training examples (20 per class), adding an extra 20% of poisoning points (8 samples). We trained the classifier using Stochastic Gradient Descent (SGD) with a learning rate of 0.01 for 1,000 epochs. In this case, no outlier detector is applied to pre-filter the training points. The results in Fig. 9 show that for λ = 0 the attack is very effective, although the poisoning points depicted in red (which are labelled as green) are far from the genuine distribution of green points. Then, as we increase the value of λ, the attack is blunted. In this synthetic example the classifier is quite stable: the number of features is very small (two), the topology of the problem is simple (the classes are linearly separable and the overlap between the classes is small) and the classifier itself is simple. Thus, the effect of the poisoning attack when detectability constraints are considered, i.e. λ > 0, is very reduced. Note that the purpose of this synthetic example is just to illustrate the behaviour of pGAN as a function of λ, rather than to show a scenario where the attack can be very effective. Here we provide complete details about the settings for the experiments described in the paper. In Table 2 we show the characteristics of the datasets used in our experimental evaluation. The parameters for training pGAN on MNIST and FMNIST are shown in Table 3, whereas the parameters for CIFAR are described in Table 4. In all cases, for the pGAN generator we used (independent) Gaussian noise with zero mean and unit variance.

Table 2: Datasets (training samples per class / test samples per class / number of features):
MNIST and FMNIST: 6,000/6,000 training, 1,000/1,000 test, 784 features
CIFAR (automobile vs truck): 5,000/5,000 training, 1,000/1,000 test, 3,072 features

For MNIST we trained pGAN for 2,000 epochs using a batch size of 200, setting i, j = 4 and k = 1 in Alg. 1. For FMNIST we used similar settings, but training for 3,000 epochs. For CIFAR we trained pGAN using a batch size of 25 for 300 epochs, with i, j = 4 and k = 1. Finally, the architectures of the Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) trained to test the attacks are described in Tables 5 and 6. In Figs. 10-12 we show samples generated with pGAN for different values of α in MNIST, FMNIST and CIFAR respectively. The class labels of the poisoning points are 5, ankle boot and truck for each of the datasets. In all cases we can observe that for small values of α (but with α > 0), the generated examples exhibit characteristics from the two classes involved in the attack, although pGAN tries to preserve features from the (original) poisoning class to evade detection. For values of α close to 1, the samples generated by pGAN are similar to those we can generate with a standard GAN. Here we show the sensitivity analysis of Sever's performance against the pGAN attack w.r.t. its parameter ε, which controls the fraction of training points removed. For this, using the same experimental settings as in the paper, we tested the performance of Sever for ε ∈ {0.05, 0.1, 0.2}. The results in Fig. 13 show that the performance of the attack is not very sensitive to the value of ε for both MNIST and FMNIST. As for Sever, we tested the performance of the PCA defence for different values of ε, which, as in the previous case, controls the fraction of training points removed. We explored the values ε ∈ {0.05, 0.1, 0.2}. The results in Fig. 14 show that, in this case, the performance is further degraded as we increase the value of ε, especially for FMNIST.
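For reference, the PCA-based defence discussed above can be sketched as follows: score each point by the mass it has outside the top principal subspace of the training data and reject the ε fraction with the largest residuals. The subspace rank is an assumption of ours; the paper does not state it.

```python
import numpy as np

def pca_defence(X, eps=0.05, rank=10):
    """Keep the (1 - eps) fraction of points with the smallest component
    outside the top-`rank` principal subspace of X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:rank]                               # (rank, d) basis of the subspace
    residual = Xc - (Xc @ P.T) @ P              # out-of-subspace component
    scores = np.linalg.norm(residual, axis=1)
    keep = np.argsort(scores)[: int((1 - eps) * len(X))]
    return keep                                 # indices of retained points
```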
In Fig. 15 we compare the performance of the label sanitization defence and the outlier detection defence for both MNIST and FMNIST. We can observe that label sanitization performs very poorly against pGAN attacks compared to outlier detection. The average test error increases up to 30% with 30% of poisoning points for MNIST, whereas for FMNIST the error increases up to 22% for the same fraction of poisoning points.
In this paper we propose a novel generative model to craft systematic poisoning attacks with detectability constraints against machine learning classifiers, including deep networks.
The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning. It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables. It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from $k$ different processes, as samples from $k$ multivariate distributions. In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model. Given quantum access to a dataset of $n$ vectors of dimension $d$, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters, such as the dimension of the feature space and the number of components in the mixture. We generalize the algorithm further by fitting any mixture model of base distributions in the exponential family. We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that in those cases we can give strong guarantees on the runtime. Over the last few years, the effort to find real world applications of quantum computers has greatly intensified. Along with chemistry, material sciences and finance, one of the fields where quantum computers are expected to be most beneficial is machine learning. A number of different algorithms have been proposed for quantum machine learning (see, e.g., Subaşı et al., 2019), both for the supervised and the unsupervised setting, and despite the lack of large-scale quantum computers and quantum memory devices, some quantum algorithms have been demonstrated in proof-of-principle experiments. Here, we look at Expectation-Maximization (EM), a fundamental algorithm in unsupervised learning, that can be used to fit different mixture models and give maximum likelihood estimates for the so-called latent variable models. Such generative models are one of the most promising approaches to unsupervised problems. The goal of a generative model is to learn a probability distribution that is most likely to have generated the data collected in a training set V ∈ R^{n×d} of n vectors of d features. Fitting the model consists in learning the parameters of a probability distribution p in a certain parameterized family that best describes our vectors v_i. We will see that, thanks to this formulation, we can reduce a statistical problem to an optimization problem using maximum likelihood (ML) estimation. The likelihood is the function that we use to measure how good a model is at explaining a given dataset. For a given machine learning model with parameters γ, the likelihood of our dataset V is the probability that the data have been generated by the model with parameters γ, assuming each point is independent and identically distributed. We think of the likelihood as a function of γ, holding the dataset V fixed. Writing p(v_i|γ) for the probability that a point v_i comes from model γ, the likelihood is defined as L(γ; V) := ∏_{i=1}^{n} p(v_i|γ). From this formula, we can see that in order to find the best parameters γ* of our model we need to solve an optimization problem. For numerical and analytical reasons, instead of maximizing the likelihood L, it is common practice to find the best model by maximizing the log-likelihood function ℓ(γ; V) = log L(γ; V) = ∑_{i=1}^{n} log p(v_i|γ).
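As a concrete illustration of the quantity being maximized, the following sketch evaluates the log-likelihood of a dataset under a GMM (the model studied in this paper), computed in log-space for numerical stability; the function name is ours.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gmm_log_likelihood(V, theta, mus, sigmas):
    """ell(gamma; V) = sum_i log sum_j theta_j * phi(v_i; mu_j, Sigma_j).
    V is (n, d); theta is (k,); mus is (k, d); sigmas is (k, d, d)."""
    log_terms = np.stack([np.log(theta[j])
                          + multivariate_normal.logpdf(V, mus[j], sigmas[j])
                          for j in range(len(theta))], axis=1)   # (n, k)
    return logsumexp(log_terms, axis=1).sum()
```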
In this context, we want to find the model that maximizes the log-likelihood: γ*_ML := argmax_γ ∑_{i=1}^{n} log p(v_i|γ). The procedure to calculate the log-likelihood depends on the specific model under consideration. A possible solution would be to use a gradient-based optimization algorithm on ℓ. Unfortunately, due to the rugged landscape of the function, gradient-based techniques often do not perform well. Therefore, it is common to solve the maximum likelihood estimation (or maximum a posteriori) problem using the Expectation-Maximization (EM) algorithm. EM is an iterative algorithm which is guaranteed to converge to a (local) optimum of the likelihood. This algorithm has a striking variety of applications, and has been successfully used for medical imaging, image restoration, problems in computational biology, and so on. EM has been proposed in different works by different authors, but was formalized as we know it only in 1977, by Dempster, Laird and Rubin. In this work, we introduce Quantum Expectation-Maximization (QEM), a new algorithm for fitting mixture models. We detail its usage in the context of Gaussian Mixture Models, and we extend the results to other distributions in the exponential family. We also generalize the results by showing how to compute the MAP: the Maximum A Posteriori estimate of a mixture model. MAP estimates can be seen as the Bayesian version of maximum likelihood estimation problems. MAP estimates are often preferred over ML estimates, due to a reduced propensity to overfit. Our main result can be stated as: Result (Quantum Expectation-Maximization, see Theorem 3.9). For a data matrix V ∈ R^{n×d} stored in an appropriate QRAM data structure and for parameters δ_θ, δ_µ > 0, Quantum Expectation-Maximization (QEM) fits a Maximum Likelihood (or a Maximum A Posteriori) estimate of a Gaussian Mixture Model with k components, in a running time per iteration which is dominated by the cost of recovering the covariance matrices (the term T_Σ, stated explicitly in Theorem 3.9). Here Σ is a covariance matrix of a Gaussian distribution, η is a parameter of the dataset related to the maximum norm of the vectors, δ_θ, δ_µ are error parameters in the QEM algorithm, µ (a quantity bounded by √d) is a factor appearing in quantum linear algebra, and κ is the condition number of a matrix. Here we only kept the term in the running time that dominates for the range of parameters of interest. In Theorem 3.9 we explicate the running time of each step of the algorithm. The QEM algorithm runs for a number of iterations until a stopping condition is met (defined by a parameter τ > 0), which implies convergence to a (local) optimum. Let's have a first high-level comparison of this result with the standard classical algorithms. The runtime of a single iteration in the standard implementation of the EM algorithm is at least O(knd²). The advantage of the quantum algorithm is an exponential improvement with respect to the number of elements in the training set, albeit with a worse dependence on other parameters. It is crucial to find datasets where such a quantum algorithm can offer a speedup. For a reasonable range of parameters (d = 40, k = 10, η = 10, δ = 0.5, κ(V) = 25, κ(Σ) = 5, µ(Σ) = 4), which is motivated by the experimental evidence reported in Section 4, datasets where the number of samples is of the order of O(10^12) might be processed faster on a quantum computer. One should expect that some of the parameters of the quantum algorithm can be improved, especially the dependence on the condition numbers and the errors, which can enlarge the class of datasets where QEM can offer an advantage.
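A quick back-of-envelope calculation makes the crossover claim concrete. Taking the classical per-iteration cost at face value (ignoring constants), the linear dependence on n dominates quickly:

```python
# Classical EM cost per iteration, O(k * n * d^2), for d = 40 and k = 10.
d, k = 40, 10
for n in (10**6, 10**9, 10**12):
    print(f"n = {n:.0e}: ~{k * n * d**2:.2e} operations per iteration")
# n = 1e+06: ~1.60e+10 operations per iteration
# n = 1e+09: ~1.60e+13 operations per iteration
# n = 1e+12: ~1.60e+16 operations per iteration
```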
Note that we expect the number of iterations of the quantum algorithm to be proportional to the number of iterations of the classical case. This is to be expected, since the convergence rate does not change, and it is corroborated by previous experimental evidence in a similar scenario: the number of iterations needed by the q-means algorithm for convergence is proportional to the number of iterations of the classical k-means algorithm. Expectation-Maximization is widely used for fitting mixture models in machine learning. Most mixture models use a base distribution in the exponential family: Poisson, binomial, multinomial, log-normal, exponential, Dirichlet multinomial, and others. EM is also used to fit mixtures of experts, mixtures of the Student's t distribution (which does not belong to the exponential family, but can nevertheless be fitted with EM), and for factor analysis, probit regression, and learning Hidden Markov Models. There is a fair amount of quantum algorithms that have been proposed in the context of unsupervised learning (Aïmeur et al., 2013). Recently, classical machine learning algorithms were obtained by "dequantizing" quantum machine learning algorithms (Gilyén et al., 2018a). The runtime of these classical algorithms is poly-logarithmic in the dimensions of the dataset. However, the polynomial dependence on the rank, the error, and the condition number makes these new algorithms impractical on interesting datasets, as has been shown experimentally. Fast classical algorithms for GMM exist, albeit assuming only one shared covariance matrix, and without a polylogarithmic dependence on the number of elements in the training set. Independently of us, Miyahara, Aihara, and Lechner extended the q-means algorithm to Gaussian Mixture Models, using similar techniques. The main difference is that in their work the update step is performed using a hard-clustering approach (as in the k-means algorithm): for updating the centroid and the covariance matrix of a cluster j, only the data points for which cluster j is the nearest are taken into account. In our work, we use the soft-clustering approach (as in the classical EM algorithm): for updating the centroid and the covariance matrix of cluster j, all the data points, weighted by their responsibility for cluster j, are taken into account. Both approaches have merits and can offer advantages. As is common in the machine learning literature, we introduce the Expectation-Maximization algorithm by using it to fit Gaussian Mixture Models (GMM). Mixture models are a popular generative model in machine learning. The intuition behind mixture models is to model complicated distributions by using a group of simpler (usually uni-modal) distributions. In this setting, the purpose of the learner is to model the data by fitting the joint probability distribution which most likely has generated our samples. In this section we describe GMM: probably the most used mixture model for solving unsupervised classification problems. In fact, given a sufficiently large number of mixture components, it is possible to approximate any density defined on R^d. In unsupervised settings, we are given a training set of unlabeled vectors v_1, ..., v_n ∈ R^d which we represent as rows of a matrix V ∈ R^{n×d}. Let y_i ∈ [k] be one of the k possible labels for a point v_i. We posit that the joint probability distribution of the data, p(v_i, y_i) = p(v_i|y_i)p(y_i), is defined as follows: p(y_i = j) = θ_j and p(v_i|y_i = j) = N(µ_j, Σ_j). The θ_j are the mixing weights, i.e.
the probabilities that y_i = j, and N(µ_j, Σ_j) is the Gaussian distribution centered in µ_j ∈ R^d with covariance matrix Σ_j ∈ R^{d×d}. Note that the variables y_i are unobserved, and thus are called latent variables. There is a simple interpretation for this model: we assume the data is created by first selecting an index j ∈ [k] by sampling according to Multinomial(θ), and then sampling a vector v_i from N(µ_j, Σ_j). Fitting a GMM to a dataset reduces to finding an assignment for the parameters γ = (θ, µ_1, ..., µ_k, Σ_1, ..., Σ_k) that best maximizes the log-likelihood for a given dataset. Note that the algorithm used to fit GMM can return a local minimum which might be different from γ*: the model that represents the global optimum of the likelihood function. We use the letter φ to represent our base distribution, which in this case is the probability density function of a multivariate Gaussian distribution N(µ, Σ). Using this formulation, a GMM is expressed as p(v) = ∑_{j=1}^{k} θ_j φ(v; µ_j, Σ_j), where the θ_j are the mixing weights of the multinomial distribution, such that ∑_{j=1}^{k} θ_j = 1. The probability for an observation v_i to be assigned to the component j is given by r_ij = θ_j φ(v_i; µ_j, Σ_j) / ∑_{l=1}^{k} θ_l φ(v_i; µ_l, Σ_l). This value is called the responsibility, and corresponds to the posterior probability of the sample i being assigned label j by the current model. As anticipated, to find the best parameters of our generative model, we maximize the log-likelihood of the data. For GMM, the log-likelihood is given by the following formula: ℓ(γ; V) = ∑_{i=1}^{n} log ∑_{j=1}^{k} θ_j φ(v_i; µ_j, Σ_j). Alas, it is seldom possible to solve maximum likelihood estimation analytically (i.e. by finding the zeroes of the derivatives of the log-likelihood function), and this is one of those cases. Fortunately, Expectation-Maximization is an iterative algorithm that numerically solves the optimization problem of ML estimation. To complicate things, the likelihood function for GMM is not convex, and thus we might find some local minima. If we were to know the latent variables y_i, then the log-likelihood for GMM would be ℓ(γ; V) = ∑_{i=1}^{n} log(θ_{y_i} φ(v_i; µ_{y_i}, Σ_{y_i})). This formula can be easily maximized with respect to the parameters θ, µ, and Σ. In the Expectation step we calculate the missing variables y_i, given a guess of the parameters (θ, µ, Σ) of the model. Then, in the Maximization step, we use the estimate of the latent variables obtained in the Expectation step to update the estimate of the parameters. While in the Expectation step we calculate a lower bound on the likelihood, in the Maximization step we maximize it. Since at each iteration the likelihood can only increase, the algorithm is guaranteed to converge, albeit possibly to a local optimum. During the Expectation step all the responsibilities are calculated, while in the Maximization step we update our estimate of the parameters γ^{t+1} = (θ^{t+1}, µ^{t+1}, Σ^{t+1}). The stopping criterion for GMM is usually a threshold on the increment of the log-likelihood: if the log-likelihood changes by less than a threshold between two iterations, then the algorithm stops. Notice that, since the value of the log-likelihood depends significantly on the number of data points in the training set, it is often preferable to adopt a scale-free stopping criterion which does not depend on the number of samples. For instance, in the toolkit scikit-learn the stopping criterion is given by a tolerance on the average increment of the log-probability, which is chosen to be smaller than a certain τ, say 10^{-3}. More precisely, the stopping criterion is |E[log p(v_i; γ^t)] − E[log p(v_i; γ^{t+1})]| ≤ τ.
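For reference, a plain classical EM loop with this scale-free stopping criterion looks as follows. This is a minimal numpy/scipy sketch; the initial parameters are assumed to be given (see the initialization strategies discussed in the appendix).

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def em_gmm(V, theta, mus, sigmas, tau=1e-3, max_iter=100):
    """Classical EM for a GMM; stops when the average log-probability
    changes by less than tau between two iterations."""
    n, d = V.shape
    k = len(theta)
    prev_avg_ll = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities r_ij, computed in log-space for stability
        log_r = np.stack([np.log(theta[j])
                          + multivariate_normal.logpdf(V, mus[j], sigmas[j])
                          for j in range(k)], axis=1)            # (n, k)
        avg_ll = logsumexp(log_r, axis=1).mean()                 # E[log p(v_i; gamma)]
        r = np.exp(log_r - logsumexp(log_r, axis=1, keepdims=True))
        # M-step: update mixing weights, centroids and covariances
        nj = r.sum(axis=0)                                       # (k,)
        theta = nj / n
        mus = (r.T @ V) / nj[:, None]
        for j in range(k):
            diff = V - mus[j]
            sigmas[j] = (r[:, j, None] * diff).T @ diff / nj[j]
        if abs(avg_ll - prev_avg_ll) < tau:                      # scale-free stop
            break
        prev_avg_ll = avg_ll
    return theta, mus, sigmas
```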
Dataset assumptions in GMM: As in q-means, we make an assumption on the dataset: all elements of the mixture contribute proportionally to the total responsibility, i.e. ∑_{i=1}^{n} r_ij / ∑_{i=1}^{n} r_il = Θ(1) for all j, l ∈ [k]. This is equivalent to requiring that clusters share a comparable number of points, as in the "well-clusterability" assumption in q-means. It is also equivalent to assuming that θ_j/θ_l = Θ(1) for all j, l ∈ [k]. For convenience, in this work we also assume that the dataset is normalized such that the shortest vector has norm 1, and we define η := max_i ‖v_i‖² to be the maximum squared norm of a vector in the dataset. This is not a necessary requirement on the dataset, but it simplifies the analysis of our algorithm, allowing us to give strict bounds on the runtime. Preliminaries: We assume a basic understanding of quantum computing; we recommend Nielsen and Chuang for an introduction. By κ_τ(V) we denote the condition number of V thresholded at τ, that is, the ratio between the largest singular value and σ, where σ is the smallest singular value which is greater than τ. With nnz(V) we mean the number of non-zero elements of the rows of V. When we write κ(V) we mean the condition number of the matrix V, that is, the ratio between the biggest and the smallest (non-zero) singular value. All the tools used in this work, like quantum algorithms for computing distances and linear algebraic operations, are reported in the Supplementary Material section. In this section, we present a quantum Expectation-Maximization algorithm to fit a GMM. The algorithm can also be adapted to fit other mixture models where the probability distributions belong to the exponential family. As the GMM is both intuitive and one of the most widely used mixture models, our results are presented for the GMM case. A robust version of the EM algorithm: Similar to previous work, we define a ∆-robust version of the EM algorithm which we use to fit a GMM. The difference between this formalization and the original EM for GMM is simple: here we make explicit the numerical error introduced in the training algorithm. Quantum access to the mixture model: As in the classical algorithm, we use some subroutines to compute the responsibilities and update our current guess of the parameters. The classical algorithm has clearly two separate steps for Expectation and Maximization. In contrast, the quantum algorithm uses a subroutine to compute the responsibilities inside the step that performs the Maximization, that is, the subroutines for computing the responsibilities are called multiple times during the quantum Maximization step. During the quantum Maximization step, the algorithm updates the model parameters γ^t by creating quantum states corresponding to the parameters γ^{t+1} and then recovering classical estimates for these parameters using quantum tomography or amplitude amplification. In order for these subroutines to be efficient, the values of the GMM are stored in QRAM data structures and are updated after each maximization step. Definition 1 (Quantum access to a GMM). We say that we have quantum access to a GMM if the dataset V ∈ R^{n×d} and the model parameters θ_j ∈ R, µ_j ∈ R^d, Σ_j ∈ R^{d×d} for all j ∈ [k] are stored in QRAM data structures which allow us to perform in time O(polylog(d)) mappings such as |j⟩|i⟩|0⟩ → |j⟩|i⟩|σ_i^j⟩, where σ_i^j is the i-th row of Σ_j, together with the analogous mappings for V, the µ_j and the θ_j. Algorithm (QEM for GMM): Require: quantum access to a GMM model, precision parameters δ_θ, δ_µ, and threshold τ. Ensure: a GMM γ^t that maximizes locally the likelihood ℓ(γ; V), up to tolerance τ.
1: Use a heuristic described at the beginning of this section to determine an initial guess for γ^0 = (θ^0, µ^0, Σ^0), and store these parameters in the QRAM.
2: Use Lemma 3.1 to estimate the log-determinants of the matrices {Σ_j^0}_{j=1}^k.
3: repeat
4:   Step 1: get an estimate of θ^{t+1}, δ_θ-close in ℓ₂ norm to the true value, using Lemma 3.4.
5:   Step 2: get estimates {µ_j^{t+1}}_{j=1}^k by using Lemma 3.6 to estimate each µ_j^{t+1} and |µ_j^{t+1}⟩, each δ_µ-close to the true centroid.
6:   Step 3: get estimates {Σ_j^{t+1}}_{j=1}^k by using Lemma 3.7 to estimate ‖Σ_j^{t+1}‖_F and |Σ_j^{t+1}⟩, each √η δ_µ-close to the true covariance matrix.
7:   Step 4: estimate E[p(v_i; γ^{t+1})] up to error τ/2 using Lemma 3.8.
8:   Step 5: store γ^{t+1} in the QRAM, and use Lemma 3.1 to estimate the determinants {log det(Σ_j^{t+1})}_{j=1}^k.
9: until the stopping criterion on the log-likelihood is met
10: return γ^t
Quantum initialization strategies exist, and are described in the Appendix. In this step of the quantum algorithm we show how to compute the responsibilities efficiently as a quantum state. First, we compute the responsibilities in a quantum register, and then we show how to encode them in the amplitudes of a quantum state. We start with a classical algorithm used to efficiently approximate the log-determinant of the covariance matrices of the data. At each iteration of Quantum Expectation-Maximization we need to compute the determinants of the updated covariance matrices, which is done thanks to Lemma 3.1. We will see from the error analysis that, in order to get an estimate of the GMM, we need to call Lemma 3.1 with a precision for which the runtime of Lemma 3.1 gets subsumed by the running time of finding the updated covariance matrices through Lemma 3.7. Thus, from now on we do not explicitly write the time to compute the determinant, and when we say that we update Σ we include an update of the estimate of log(det(Σ)) as well. Lemma 3.1 (Determinant evaluation). There is an algorithm that, given as input a matrix Σ and a parameter 0 < δ < 1, outputs an estimate of log(det(Σ)) with absolute error ε, with probability 1 − δ, in a running time which, for the precision required by QEM, is subsumed by T_Σ. Now we can state the main building block used to compute the responsibilities: a quantum algorithm for evaluating the exponent of a Gaussian distribution. Lemma 3.2 (Quantum Gaussian Evaluation). Suppose we have stored in the QRAM a matrix V ∈ R^{n×d}, the centroid µ ∈ R^d and the covariance matrix Σ ∈ R^{d×d} of a multivariate Gaussian distribution φ(v|µ, Σ), as well as an estimate for log(det(Σ)). Then for ε₁ > 0, there exists a quantum algorithm U_{G,ε₁} that, with probability 1 − γ, performs the mapping |i⟩|0⟩ → |i⟩|s_i⟩, where each s_i is ε₁-close to −(1/2)((v_i − µ)^T Σ^{−1}(v_i − µ) + d log 2π + log det(Σ)), the exponent of the Gaussian probability density function. We denote the running time of this algorithm by T_{G,ε₁}; it depends polynomially on κ(Σ), µ(Σ), η and 1/ε₁. Using controlled operations it is simple to extend the previous lemma to work with multiple Gaussian distributions (µ_j, Σ_j). That is, we can control on a register |j⟩ to perform |j⟩|i⟩|0⟩ → |j⟩|i⟩|φ(v_i|µ_j, Σ_j)⟩. In the next lemma we will see how to obtain the responsibilities r_ij using the previous lemma and standard quantum circuits for arithmetic, controlled rotations, and amplitude amplification. The lemma is stated in a general way, so that it can be used with any probability distribution that belongs to an exponential family. Lemma 3.3 (Calculating responsibilities). Suppose we have quantum access to a GMM with parameters γ^t. There are quantum algorithms that can: 1. perform the mapping |i⟩|j⟩|0⟩ → |i⟩|j⟩|r_ij⟩, where each r_ij is computed with error at most ε₁, with probability 1 − γ, in time T_{R1,ε₁} = O(k^{1.5} T_{G,ε₁}); 2. for a given j ∈ [k], prepare a state whose amplitudes are proportional to the responsibilities r_ij, up to error ε₁, with high probability, in time T_{R2,ε₁}.
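Classically, the quantity estimated by Lemma 3.2 is computed from a Cholesky factorization, which yields both the log-determinant (the quantity of Lemma 3.1) and the quadratic form without explicitly inverting Σ. A short sketch:

```python
import numpy as np

def gaussian_exponent(v, mu, sigma):
    """s = -1/2 ((v-mu)^T Sigma^{-1} (v-mu) + d*log(2*pi) + log det(Sigma)),
    i.e. log phi(v | mu, Sigma)."""
    d = len(v)
    L = np.linalg.cholesky(sigma)              # Sigma = L L^T
    z = np.linalg.solve(L, v - mu)             # ||z||^2 is the quadratic form
    log_det = 2.0 * np.log(np.diag(L)).sum()   # log det(Sigma)
    return -0.5 * (z @ z + d * np.log(2 * np.pi) + log_det)
```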
Now we need to get a new estimate for the parameters of our model. This is the idea: at each iteration we recover the new parameters of the model as quantum states, and then we recover them classically using tomography, amplitude estimation, or sampling. Once the new model has been recovered, we update the QRAM such that we get quantum access to the model γ^{t+1}. The possibility to estimate θ comes from a call to the unitary we built to compute the responsibilities, followed by post-selection. Lemma 3.4 (Computing θ^{t+1}). We assume quantum access to a GMM with parameters γ^t and let δ_θ > 0 be a precision parameter. There exists an algorithm that estimates θ^{t+1} ∈ R^k, up to error δ_θ in ℓ₂ norm, in time T_θ. We use quantum linear algebra to transform the uniform superposition of the responsibilities of the j-th mixture into the new centroid of the j-th Gaussian. Let R_j^t ∈ R^n be the vector of responsibilities for Gaussian j at iteration t. The following claim relates the vectors R_j^t to the centroids µ_j^{t+1}. Claim 3.5. Let R_j^t ∈ R^n be the vector of responsibilities of the points for the Gaussian j at time t. Then µ_j^{t+1} = V^T R_j^t / (∑_{i=1}^{n} r_ij^t), i.e. the updated centroid is proportional to V^T R_j^t. The proof is straightforward. Lemma 3.6 (Computing µ_j^{t+1}). We assume we have quantum access to a GMM with parameters γ^t. For a precision parameter δ_µ > 0, there exists a quantum algorithm that computes estimates of the centroids µ_j^{t+1}, each δ_µ-close in ℓ₂ norm to the true value, in time T_µ. From the ability to calculate the responsibilities and to index the centroids, we derive the ability to reconstruct the covariance matrices of the Gaussians as well. Again, we use quantum linear algebra subroutines and tomography to recover an approximation of each Σ_j. Recall that we have defined the matrix V ∈ R^{n×d}. Lemma 3.7 (Computing Σ_j^{t+1}). We assume we have quantum access to a GMM with parameters γ^t. We also have computed estimates of all centroids µ_j^{t+1}, each δ_µ-close to the true value, for a precision parameter δ_µ > 0. Then, there exists a quantum algorithm that outputs estimates for the new covariance matrices {Σ_j^{t+1}}_{j=1}^k, each √η δ_µ-close in Frobenius norm to the true value, with high probability, in time T_Σ. Now we are going to show how it is possible to get an estimate of the log-likelihood using a quantum procedure and access to a GMM model. A good estimate is crucial, as it is also used as the stopping criterion for the quantum algorithm. Classically, we stop iterating the EM algorithm when |ℓ(γ^t; V) − ℓ(γ^{t+1}; V)| < nτ or, equivalently, when we reach a tolerance on the average increase in log probability. From this we can estimate an upper bound on the log-likelihood as n log E[p(v_i; γ)]. Lemma 3.8 (Quantum estimation of likelihood). We assume we have quantum access to a GMM with parameters γ^t. For τ > 0, there exists a quantum algorithm that estimates E[p(v_i; γ^t)] with absolute error τ in time T_ℓ. Putting together all the previous lemmas, we can state the main result of this work. Theorem 3.9 (QEM for GMM). We assume we have quantum access to a GMM with parameters γ^t. For parameters δ_θ, δ_µ, τ > 0, the running time of one iteration of the Quantum Expectation-Maximization (QEM) algorithm is the sum of the running times of the steps above; for the range of parameters of interest, it is dominated by T_Σ. The proof follows directly from the previous lemmas. Note that the cost of the whole algorithm is given by repeating the Estimation and the Maximization steps several times, until the threshold on the log-likelihood is reached. Note also that the expression of the runtime can be simplified by observing that the cost of performing tomography on the covariance matrices Σ_j dominates the overall cost.
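Claim 3.5 is easy to check numerically: the centroid update of the classical M-step is exactly a rescaled matrix-vector product V^T R_j. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 8, 3
V = rng.normal(size=(n, d))
R = rng.uniform(size=(n, k))
R /= R.sum(axis=1, keepdims=True)          # rows sum to 1, like responsibilities

j = 1
mu_loop = sum(R[i, j] * V[i] for i in range(n)) / R[:, j].sum()  # M-step sum
mu_claim = V.T @ R[:, j] / R[:, j].sum()   # rescaled V^T R_j, as in Claim 3.5
assert np.allclose(mu_loop, mu_claim)
```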
In this section, we present the results of some experiments on real datasets, to bound the condition number and the other parameters appearing in the runtime. Let's discuss the values of κ(Σ), κ(V), µ(Σ), and µ(V). Thresholding the condition number by discarding small singular values of the matrix, as is done in quantum linear algebra, might be advantageous. This is indeed done often in classical machine learning models, since discarding the eigenvalues smaller than a certain threshold might even improve the metric under consideration (i.e. often the accuracy), and is a form of regularization. This is equivalent to limiting the eccentricity of the Gaussians. We can make a similar consideration for the condition number of the dataset, κ(V). As shown before, the condition number of the matrix V appearing in Lemma 3.2 is κ²(V). Similarly, we can claim that the value of µ(V) will not increase significantly as we add vectors to the training set. Remember that we have some choice in picking the function µ: in previous experiments we have found that choosing the maximum ℓ₁ norm of the rows of V leads to values of µ around 10, and also in this case we expect µ to be constant for the samples of a well-clusterable dataset. Moreover, µ is bounded by the Frobenius norm of V. In case the matrix V can be clustered with high-enough accuracy by k-means, it has been shown that the Frobenius norm of the matrix is proportional to √k. Given that EM is a more powerful extension of k-means, we can rely on similar observations. Usually, the number of features d is much larger than the number of components in the mixture, i.e. d ≫ k, so we expect the d² term to dominate the k^{3.5} term in the cost needed to estimate the mixing weights. This makes the runtime of a single iteration proportional to the term T_Σ of Theorem 3.9. As we said, the quantum running time saves the factor that depends on the number of samples and introduces a number of other parameters. Using our experimental results we can see that, when the number of samples is large enough, one can expect the quantum running time to be faster than the classical one. Note as well that one can expect to shave some more factors off the quantum running time with a more careful analysis.
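The eigenvalue thresholding mentioned above can be sketched as follows; here we clip small eigenvalues at the floor m rather than discarding the corresponding directions, which is one of several reasonable variants (m = 2 × 10^{-2} is the value used in the speaker-recognition experiment described below):

```python
import numpy as np

def threshold_covariance(sigma, m=2e-2):
    """Clip the eigenvalues of a symmetric PSD matrix from below at m,
    capping the condition number at max_eigenvalue / m."""
    vals, vecs = np.linalg.eigh(sigma)
    vals = np.maximum(vals, m)               # remove tiny eigenvalues
    return (vecs * vals) @ vecs.T            # reassemble V diag(vals) V^T
```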
Experiments: In the algorithm, we need to set the parameters δ_µ and δ_θ to be small enough such that the likelihood is perturbed by less than τ/4. We have reasons to believe that, on well-clusterable data, the values of these parameters will be large enough not to impact the runtime dramatically. A quantum version of the k-means algorithm has already been simulated on real data under similar assumptions. There, the authors analyzed on the MNIST dataset the performance of q-means, the δ-resistant version of the classical k-means algorithm. The experiment concluded that, for datasets that are expected to be clustered nicely by this kind of clustering algorithm, the values of the parameters δ_µ, δ_θ did not decrease when increasing the number of samples or the number of features. We expect a similar behaviour in the EM case, namely that for large datasets the impact on the runtime of the errors (δ_µ, δ_θ) does not cancel out the exponential gain in the dependence on the number of samples. For instance, in all the experiments of q-means on the MNIST dataset the value of δ_µ (which in their case was called just δ) has been between 0.2 and 0.5. The value of τ is usually chosen to be 10^{-3} (for instance in scikit-learn). The value of η has always been below 11. We also analyzed some other real-world datasets, which can be fitted well with the EM algorithm (APPML; Voxforge.org), to perform speaker recognition: the task of recognizing a speaker from a voice sample, having access to a training set of recorded voices of all the possible speakers. Details of the measurements are reported in the Supplementary Material section; here we report only the results in Table 1. After this, we also studied the impact of errors on the mixing weights on the accuracy of an ML estimate of a GMM, by perturbing the trained model with some random noise. With values of δ_θ = 0.035 and δ_µ = 0.5 we correctly classified 98.2% of the utterances. Table 1: We estimate some of the parameters of the VoxForge (Voxforge.org) dataset. The averages for the matrix V are taken over 34 samples, while those for Σ are over 170 samples. The accuracy reported in the experiments is measured on 170 samples in the test set, after the thresholding of the eigenvalues of Σ. Each model is the result of the best of 3 different initializations of the EM algorithm. The first and the second columns report the maximum singular values of all the covariance matrices and the absolute value of the log-determinant. The column κ*(Σ) contains the thresholded condition numbers of the covariance matrices. Theorem 6.1 (Determinant estimation). Let M ∈ R^{d×d} be a positive definite matrix with eigenvalues in the interval (σ_min, 1). Then for all δ ∈ (0, 1) and ε > 0 there is a classical algorithm that outputs an estimate of log(det(M)) within relative error 2ε, i.e. within 2ε|log(det(M))|, with probability at least 1 − δ, in a running time nearly linear in the number of non-zero entries of M. Algorithm (classical EM for GMM): Require: dataset V, tolerance τ > 0. Ensure: a GMM γ^t = (θ^t, µ^t, Σ^t) that maximizes locally the likelihood ℓ(γ; V) up to tolerance τ.
1: Select γ^0 = (θ^0, µ^0, Σ^0) using the classical initialization strategies described in Subsection 6.3.
2: t = 0
3: repeat
4:   Expectation: for all i, j, calculate the responsibilities r_ij^t using the current parameters.
5:   Maximization: update the parameters of the model to θ^{t+1}, µ^{t+1}, Σ^{t+1}.
6:   t = t + 1
7: until the stopping criterion on the log-likelihood is met
8: return γ^t
Proof (of Theorem 6.3, Lipschitz continuity of the softmax σ_j). We need to find the K such that for all x, y ∈ R^d we have ‖σ_j(y) − σ_j(x)‖ ≤ K‖y − x‖. Observing that σ_j is differentiable, and applying Cauchy-Schwarz to the statement of the Mean Value Theorem, we derive that for all x, y ∈ U there exists a c such that ‖f(x) − f(y)‖ ≤ ‖∇f(c)‖_F ‖x − y‖. So to show Lipschitz continuity it is enough to bound K by the norm of the gradient of σ_j; computing the partial derivatives of the softmax, in our case we can deduce that K ≤ √2. Claim 6.4. Let θ be the angle between vectors x, y, and assume that θ < π/2. Then the distance between the normalized states |x⟩ and |y⟩ can be bounded in terms of θ. We will also use Claim 4.5 from the q-means analysis. Claim 6.5. Let ε_b be the error we commit in estimating |c⟩, such that the distance between the estimated and the true state is less than ε_b, and let ε_a be the relative error we commit in estimating the norm ‖c‖. Then the error on the recovered vector is bounded by √η(ε_a + ε_b). Definition 2 (Exponential Family). A probability density function or probability mass function p(v|ν), where V ⊆ R, ν ∈ R^p, is said to be in the exponential family if it can be written as p(v|ν) := h(v) exp(o(ν)^T T(v) − A(ν)), where: • ν ∈ R^p is called the canonical or natural parameter of the family; • o(ν) is a function of ν (which often is just the identity function); • T(v) is the vector of sufficient statistics: a function that holds all the information the data v holds with respect to the unknown parameters; • A(ν) is the cumulant generating function, or log-partition function, which acts as a normalization factor; • h(v) > 0 is the base measure, which is a non-informative prior and de facto a scaling constant.
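As a concrete instance of Definition 2, the univariate Gaussian N(µ, σ²) can be put in this form with a standard rewriting (this worked example is ours, not part of the original text):

```latex
\[
  p(v \mid \mu, \sigma^2)
  = \underbrace{\tfrac{1}{\sqrt{2\pi}}}_{h(v)}
    \exp\!\left(
      \underbrace{\begin{pmatrix} \mu/\sigma^{2} & -1/(2\sigma^{2}) \end{pmatrix}}_{o(\nu)^{\top}}
      \underbrace{\begin{pmatrix} v \\ v^{2} \end{pmatrix}}_{T(v)}
      \;-\;
      \underbrace{\left(\tfrac{\mu^{2}}{2\sigma^{2}} + \log \sigma\right)}_{A(\nu)}
    \right)
\]
```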
To prove our results, we are going to use the quantum procedures listed hereafter. Theorem 6.6 (Amplitude estimation and amplification, Brassard et al.). If there is a unitary operator U such that U|0⟩^{⊗l} = |φ⟩ = sin(θ)|x, 0⟩ + cos(θ)|G, 0^⊥⟩, then sin²(θ) can be estimated to multiplicative error η in time O(T(U)/(η sin(θ))), and |x⟩ can be generated in expected time O(T(U)/sin(θ)). We also need some state preparation procedures. These subroutines are needed for encoding vectors v_i ∈ R^d into quantum states |v_i⟩. An efficient state preparation procedure is provided by the QRAM data structures. We stress the fact that our results continue to hold no matter how the efficient quantum loading of the data is provided. For instance, the data can be accessed through a QRAM, through a block encoding, or when the data can be produced by quantum circuits. Theorem 6.7 (QRAM data structure, Kerenidis & Prakash). Let V ∈ R^{n×d}; there is a data structure to store the rows of V such that: 1. the time to insert, update or delete a single entry v_ij is O(log²(n)); 2. a quantum algorithm with access to the data structure can perform the mapping |i⟩|0⟩ → |i⟩|v_i⟩ in time polylogarithmic in n and d. In our algorithm we will also use subroutines for quantum linear algebra. For a symmetric matrix M ∈ R^{d×d} with spectral norm ‖M‖ = 1 stored in the QRAM, the running time of these algorithms depends linearly on the condition number κ(M) of the matrix, which can be replaced by κ_τ(M), a condition threshold where we keep only the singular values bigger than τ, and on the parameter µ(M), a matrix-dependent parameter which is always bounded by the Frobenius norm ‖M‖_F (Gilyén et al., 2018b). Theorem 6.8 (Quantum linear algebra (Gilyén et al., 2018b)). Let M ∈ R^{d×d} such that ‖M‖₂ = 1, and x ∈ R^d. Let ε, δ > 0. If M is stored in appropriate QRAM data structures and the time to prepare |x⟩ is T_x, then there exist quantum algorithms that with probability at least 1 − 1/poly(d) return a state ε-close to |Mx⟩ (or to |M^{−1}x⟩), in time O(κ(M)(µ(M) + T_x) log(1/ε)). Theorem 6.9 (Quantum linear algebra for matrix products). Let M₁, M₂ ∈ R^{d×d} such that ‖M₁‖ = ‖M₂‖ = 1, and let x ∈ R^d be a vector stored in QRAM. Let ε > 0. Then there exist quantum algorithms that with probability at least 1 − 1/poly(d) return a state |z⟩ such that ‖|z⟩ − |M₁M₂x⟩‖ ≤ ε in time O((κ(M)(µ(M₁)T_{M₁} + µ(M₂)T_{M₂})) log(1/ε)), where T_{M₁}, T_{M₂} are the times needed to index the rows of M₁ and M₂. The linear algebra procedures above can also be applied to any rectangular matrix V ∈ R^{N×d} by considering instead its symmetrized version. The final component needed for the q-means algorithm is a linear-time algorithm for vector state tomography, which will be used to recover classical information from the quantum states corresponding to the new centroids in each step. Given a unitary U that produces a quantum state |x⟩, by calling U O(d log d/ε²) times, the tomography algorithm is able to reconstruct a vector x̃ that approximates |x⟩ such that ‖x̃ − |x⟩‖ ≤ ε. Theorem 6.10 (Vector state tomography). Given access to a unitary U such that U|0⟩ = |x⟩ and its controlled version, in time T(U), there is a tomography algorithm with time complexity O(T(U) d log d/ε²) that produces a unit vector x̃ ∈ R^d such that ‖x̃ − x‖₂ ≤ ε with probability at least 1 − 1/poly(d). Lemma 6.11 (Distance / Inner Product Estimation). Assume that, for a data matrix V ∈ R^{N×d} and a centroid matrix C ∈ R^{k×d}, the unitaries |i⟩|0⟩ → |i⟩|v_i⟩ and |j⟩|0⟩ → |j⟩|c_j⟩ can be performed in time T, and that the norms of the vectors are known. For any ∆ > 0 and ε > 0, there exists a quantum algorithm that computes estimates of the inner products between v_i and c_j to error ε, with probability at least 1 − 2∆, in time O(‖v_i‖‖c_j‖ T log(1/∆)/ε). Unlike k-means clustering, choosing a good set of initial parameters for a mixture of Gaussians is by no means trivial, and in the multivariate context it is known that the solution is problem-dependent. There are plenty of proposed techniques; here we describe a few of them.
Fortunately, these initialization strategies can be directly translated into quantum subroutines without impacting the overall running time of the quantum algorithm. The simplest technique is called random EM, and consists in selecting initial points at random from the dataset as centroids, and sampling the dataset to estimate the covariance matrix of the data. These estimates are then used as the starting configuration of the model, and we may repeat the random sampling until we get satisfactory results. A more standard technique borrows directly the initialization strategy of k-means++, and extends it to make an initial guess for the covariance matrices and the mixing weights. The initial guess for the centroids is selected by sampling from a suitable, easy-to-calculate distribution. This heuristic works as follows: let c_0 be a randomly selected point of the dataset, taken as the first centroid. The other k − 1 centroids are selected by picking a vector v_i with probability proportional to d²(v_i, µ_{l(v_i)}), where µ_{l(v_i)} is the previously selected centroid that is closest to v_i in ℓ₂ distance. These centroids are then used as initial centroids for a round of the k-means algorithm, to obtain µ_1^0, ..., µ_k^0. Then, the covariance matrices can be initialized as Σ_j^0 = (1/|C_j|) ∑_{i∈C_j} (v_i − µ_j^0)(v_i − µ_j^0)^T, where C_j is the set of samples in the training set that have been assigned to the cluster j in the previous round of k-means. The mixing weights are estimated as |C_j|/n. Eventually, Σ_j^0 is regularized to be a PSD matrix. There are other possible choices for parameter initialization in EM, for instance based on Hierarchical Agglomerative Clustering (HAC) and the CEM algorithm. In CEM we run one step of EM, but with a so-called classification step between E and M. The classification step consists in a hard clustering after computing the initial conditional probabilities (in the E step). The M step then calculates the initial guess of the parameters. In the small EM initialization method we run EM with different choices of initial parameters, using some of the previous strategies. The difference here is that we repeat the EM algorithm only for a small number of iterations, and we keep iterating from the choice of parameters that returned the best partial results. For an overview and comparison of different initialization techniques, we refer to (Blömer et al.). Quantum initialization strategies: For the initialization of γ^0 in the quantum algorithm we can use the same initialization strategies as in classical machine learning. For instance, we can use the classical random EM initialization strategy for QEM. A quantum initialization strategy can also be given using the k-means++ initialization strategy, which returns k initial guesses for the centroids, with a runtime governed by the average squared distance between two points of the dataset and by the tolerance in the distance estimation. From there, we can perform a full round of the q-means algorithm and get an estimate for µ_1^0, ..., µ_k^0. With q-means and the new centroids stored in the QRAM we can create the state (1/√n) ∑_{i=1}^{n} |i⟩|l(v_i)⟩, where l(v_i) is the label of the closest centroid to the i-th point. By sampling S ∈ O(d) points from this state we get two things. First, from the frequency f_j of the second register we can get a guess of θ_j^0 ← |C_j|/n ∼ f_j/S. Then, from the first register we can estimate Σ_j^0. Sampling O(d) points and creating the state above takes time O(dkη) by Lemma 6.11 and a quantum minimum-finding procedure.
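The classical part of this strategy (D²-sampling of the centroids, followed by reading weights and covariances off the hard assignment) can be sketched as follows; the intermediate round of k-means described above is omitted for brevity, and the small diagonal regularization term is a choice of ours:

```python
import numpy as np

def kmeanspp_gmm_init(V, k, rng=np.random.default_rng()):
    """k-means++-style initialization for a GMM: D^2 sampling of the
    centroids, then weights and covariances from the hard assignment."""
    n, d = V.shape
    centroids = [V[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([((V - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(V[rng.choice(n, p=d2 / d2.sum())])      # D^2 sampling
    mus = np.array(centroids)
    labels = np.argmin(((V[:, None, :] - mus[None]) ** 2).sum(-1), axis=1)
    theta = np.bincount(labels, minlength=k) / n                 # |C_j| / n
    sigmas = np.stack([np.cov(V[labels == j].T) + 1e-6 * np.eye(d)
                       for j in range(k)])                       # PSD regularized
    return theta, mus, sigmas
```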
Techniques illustrated in the q-means work can also be used to quantize the CEM algorithm, which needs a hard-clustering step. Among the different possible approaches, the random and the small EM strategies benefit greatly from a faster algorithm, as we can spend more time exploring the space of the parameters by starting from different initial seeds, and thus avoid local minima of the likelihood. What we presented in the previous section is the most general model of GMM. For simple datasets, it is common to assume some restrictions on the covariance matrices of the mixtures. The translation into a quantum version of the model is straightforward. We distinguish between these cases: 1. Soft k-means. This algorithm is often presented as a generalization of k-means, but it can actually be seen as a special case of EM for GMM, albeit with a different assignment rule. In soft k-means, the assignment function is replaced by a softmax function with stiffness parameter β. This β represents the covariance of the clusters; it is assumed to be equal for all the clusters, and for all dimensions of the feature space. Gaussian mixtures with constant covariance matrix (i.e. Σ_j = βI for β ∈ R) can be interpreted as a kind of soft or fuzzy version of k-means clustering. The probability of a point in the feature space being assigned to a certain cluster j is r_ij = exp(−β‖v_i − µ_j‖²) / ∑_{l=1}^{k} exp(−β‖v_i − µ_l‖²), where β > 0 is the stiffness parameter (a code sketch is given after this list). 2. Spherical. In this model, each component has its own covariance matrix, but the variance is uniform in all the directions, thus reducing the covariance matrix to a multiple of the identity matrix (i.e. Σ_j = σ_j²I for σ_j ∈ R). 3. Diagonal. As the name suggests, in this special case the covariance matrix of each distribution is a diagonal matrix, but different Gaussians might have different diagonal covariance matrices. 4. Tied. In this model, the Gaussians share the same covariance matrix, without any further restriction on the Gaussians. 5. Full. This is the most general case, where each of the components of the mixture has a different SPD covariance matrix.
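The soft k-means assignment rule of case 1 is just a softmax of negative squared distances, e.g.:

```python
import numpy as np
from scipy.special import softmax

def soft_kmeans_assign(V, mus, beta):
    """r_ij = exp(-beta * ||v_i - mu_j||^2) / sum_l exp(-beta * ||v_i - mu_l||^2).
    Large beta approaches the hard assignment of k-means."""
    d2 = ((V[:, None, :] - mus[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    return softmax(-beta * d2, axis=1)
```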
Lemma 6.12 (Determinant evaluation). There is an algorithm that, given as input a matrix Σ and a parameter 0 < δ < 1, outputs an estimate of log(det(Σ)) with absolute error ε, with probability 1 − δ. Proof. In order to apply Theorem 6.1 we need to be sure that all the eigenvalues lie in (σ_min, 1). To satisfy this condition we can scale the matrix by a constant factor c, setting Σ′ = Σ/c; in this way, log det(Σ) = d log c + log det(Σ′), which allows us to recover the value of log det(Σ) using Theorem 6.1. We first apply the theorem with constant relative precision to get a rough estimate γ of |log(det(Σ′))|, and then, to obtain an estimate with absolute error ε, we apply Theorem 6.1 again with relative precision of order ε/γ. This gives us an estimate of log(det(Σ)) with absolute error at most ε. Lemma 6.13 (Error in the responsibilities of the exponential family). Let v_i ∈ R^n be a vector, and let {p(v_i|ν_j)}_{j=1}^{k} be a set of k probability distributions in the exponential family, defined as p(v_i|ν_j) := h(v_i) exp(o(ν_j)^T T(v_i) − A(ν_j)). Then, if we have estimates for each exponent with error ε, we can compute each r_ij with error at most √(2k) ε for j ∈ [k]. Proof. The proof follows from rewriting the responsibility as r_ij = exp((T_i)_j) / ∑_{l=1}^{k} exp((T_i)_l), with (T_i)_j := log(θ_j p(v_i|ν_j)). In this form, it is clear that the responsibilities can be seen as a softmax function, and we can use Theorem 6.3 to bound the error in computing this value. Let T_i ∈ R^k be the vector of the exponents, and define in an analogous way T̃_i, the vector where each component is the estimate with error ε. The error in the responsibility is then bounded as follows: because the function σ_j is Lipschitz continuous, as we proved in Theorem 6.3, with a Lipschitz constant K ≤ √2, we have that |σ_j(T̃_i) − σ_j(T_i)| ≤ √2 ‖T̃_i − T_i‖ ≤ √(2k) ε. Lemma 6.14 (Quantum Gaussian Evaluation; restatement of Lemma 3.2). Suppose we have stored in the QRAM a matrix V ∈ R^{n×d}, the centroid µ ∈ R^d and the covariance matrix Σ ∈ R^{d×d} of a multivariate Gaussian distribution φ(v|µ, Σ), as well as an estimate for log(det(Σ)). Then for ε₁ > 0, there exists a quantum algorithm that with probability 1 − γ performs the mapping U_{G,ε₁}: |i⟩|0⟩ → |i⟩|s̃_i⟩ such that |s̃_i − s_i| < ε₁, where s_i = −(1/2)((v_i − µ)^T Σ^{−1}(v_i − µ) + d log 2π + log det(Σ)) is the exponent of the Gaussian probability density function, in running time T_{G,ε₁}. Proof. We use quantum linear algebra and inner product estimation to estimate the quadratic form (v_i − µ)^T Σ^{−1}(v_i − µ), approximating each term in the sum to error ε₁/4. Recall that (through Lemma 3.1) we also have an estimate of the log-determinant to error ε₁. Thus we obtain an approximation of the whole exponent within error 2ε₁. We use the bounds available for Σ stored in a QRAM data structure and, observing that the norms of the vectors involved are bounded by √η, we obtain the running time T_{G,ε₁}. Lemma 6.15 (Calculating responsibilities; restatement of Lemma 3.3). Suppose we have quantum access to a GMM with parameters γ^t = (θ^t, µ^t, Σ^t). There are quantum algorithms that can: 1. perform the mapping |i⟩|j⟩|0⟩ → |i⟩|j⟩|r̃_ij⟩ such that |r̃_ij − r_ij| ≤ ε₁ with probability 1 − γ, in time T_{R1,ε₁}; 2. for a given j ∈ [k], prepare a state ε₁-close to the normalized vector of responsibilities, with high probability, in time T_{R2,ε₁}. Proof. For the first statement, let's recall the definition of responsibility: r_ij = θ_j φ(v_i; µ_j, Σ_j) / ∑_{l=1}^{k} θ_l φ(v_i; µ_l, Σ_l). With the aid of U_{G,ε₁} of Lemma 3.2 we can estimate log(φ(v_i|µ_j, Σ_j)) for all j up to additive error ε₁, and then, using the current estimate of θ^t, we can calculate the responsibilities and store them in a quantum register. The estimate r̃_ij is computed by evaluating a weighted softmax function with arguments log(φ(v_i|µ_j, Σ_j)). The estimates log(φ(v_i|µ_j, Σ_j)) are then uncomputed. The runtime of the procedure is given by calling Lemma 3.2 k times for the Gaussian estimation (the arithmetic operations needed to calculate the responsibilities are absorbed). Let us analyze the error in the estimation of r_ij. The responsibility r_ij is a softmax function with arguments log(φ(v_i|µ_j, Σ_j)) that are computed up to error ε₁ using Lemma 3.2. As the softmax function has a Lipschitz constant K ≤ √2 by Lemma 6.13, we choose the precision of Lemma 3.2 to be ε₁/√(2k) to get the guarantee |r̃_ij − r_ij| ≤ ε₁. Thus, the total cost of this step is T_{R1,ε₁} = k^{1.5} T_{G,ε₁}. Now let's see how to encode this information in the amplitudes, as stated in the second claim of the lemma. We estimate the responsibilities r̃_ij to some precision ε and perform a controlled rotation on an ancillary qubit, obtaining a state of the form |i⟩|r̃_ij⟩(r̃_ij|0⟩ + √(1 − r̃_ij²)|1⟩). We then undo the circuit on the second register and perform amplitude amplification on the rightmost auxiliary qubit being |0⟩ to get |R̃_j⟩, the normalized state proportional to ∑_i r̃_ij|i⟩. Let us analyze the precision ε required to prepare |R̃_j⟩ such that ‖|R̃_j⟩ − |R_j⟩‖ ≤ ε₁. As we have estimates |r̃_ij − r_ij| < ε for all i, j, the ℓ₂-norm error satisfies ‖R̃_j − R_j‖ ≤ √n ε. Applying Claim 6.4, the error for the normalized vector |R_j⟩ can be bounded in terms of √n ε/‖R_j‖. By the Cauchy-Schwarz inequality we have ‖R_j‖ ≥ ∑_i r_ij/√n. We can use this to obtain a bound via the dataset assumptions in Section 2, and choose ε accordingly. Lemma 6.16 (Computing θ^{t+1}). We assume quantum access to a GMM with parameters γ^t and let δ_θ > 0 be a precision parameter.
There exists an algorithm that estimates θ^{t+1} ∈ R^k, up to error δ_θ in ℓ₂ norm, in time T_θ. For the centroids, we use quantum linear algebra to compute a state |µ_j^{t+1}⟩ along with an estimate of the norm ‖V^T R_j^t‖ ∝ ‖µ_j^{t+1}‖, with relative error ε_norm. The last step of the algorithm consists in estimating the unit vector |µ_j^{t+1}⟩ with precision ε_tom using tomography. Considering that the cost of tomography depends on d, which we expect to dominate the cost of the norm estimation, we can assume that the runtime of the norm estimation is absorbed; thus we obtain the runtime T_µ. Let's now analyze the total error in the estimation of the new centroids, which we want to be δ_µ. For this purpose, we use Claim 6.5, and choose parameters such that 2√η(ε_tom + ε_norm) = δ_µ. Since the error ε₃ for quantum linear algebra appears as a logarithmic factor in the running time, we can choose ε₃ ≪ ε_tom without affecting the runtime. Let µ̃ be the classical unit vector obtained after quantum tomography, and |µ̄⟩ the state produced by the quantum linear algebra procedure starting with an approximation of |R_j^t⟩. Using the triangle inequality we have ‖|µ⟩ − µ̃‖ ≤ ‖µ̃ − |µ̄⟩‖ + ‖|µ̄⟩ − |µ⟩‖ < ε_tom + ε₃ < δ_µ/(2√η). The errors for the norm estimation procedure can be bounded in the same way, so we choose ε_tom and ε_norm of order δ_µ/√η. Since the amplitude estimation step we use for estimating the norms does not depend on d, which is expected to dominate the other parameters, we omit the amplitude estimation step. Substituting for T_{R2,δ_µ}, we obtain a more concise expression for the running time. Lemma 6.18 (Computing Σ_j^{t+1}). We assume we have quantum access to a GMM with parameters γ^t. We also have computed estimates µ̃_j^{t+1} of all centroids such that ‖µ̃_j^{t+1} − µ_j^{t+1}‖ ≤ δ_µ for a precision parameter δ_µ > 0. Then, there exists a quantum algorithm that outputs estimates for the new covariance matrices {Σ_j^{t+1}}_{j=1}^k, each √η δ_µ-close in Frobenius norm to the true value, with high probability, in time T_Σ. Proof. It is simple to check that the update rule of the covariance matrix during the maximization step can be reduced to Σ_j^{t+1} = (∑_{i=1}^{n} r_ij v_i v_i^T)/(∑_{i=1}^{n} r_ij) − µ_j^{t+1}(µ_j^{t+1})^T. First, note that we can use the estimates of the centroids to compute µ_j^{t+1}(µ_j^{t+1})^T with error δ_µ‖µ‖ ≤ δ_µ√η in the update rule for the Σ_j. This follows from the fact that µ̃ = µ + e, where e is a vector of norm at most δ_µ. Since this operation does not depend on d, we consider its cost smaller than the runtime for performing tomography. Let's analyze the error of this procedure. We want a matrix Σ̃_j that is √η δ_µ-close to the correct one. The error due to matrix multiplication can be taken as small as necessary, since it appears inside a logarithm. From Claim 6.5, we just need to fix the errors of tomography and norm estimation such that η(ε_unit + ε_norm) < √η δ_µ, where we have used η as an upper bound on ‖Σ_j‖_F. For the unit vectors we require the error to be at most ε_unit, where the error comes both from tomography and from Lemma 3.3; for this inequality to hold, we choose ε_tom = ε₁ < δ_µ/(4√η). The same argument applies to estimating the norm ‖Σ_j‖ with relative error (here the error comes from the amplitude estimation step used in Theorem 6.8 and from the error ε₁ in calling Lemma 3.3), and again we choose the precision to be of order δ_µ/√η. The bounds on the condition numbers can be derived from the facts that κ(A ⊗ B) = κ(A)κ(B) and κ(AB) ≤ κ(A)κ(B). Since the tomography is more costly than the amplitude estimation step, we can disregard the runtime of the norm estimation step.
Lemma 6.19 (Quantum estimation of likelihood). We assume we have quantum access to a GMM with parameters γ^t. For τ > 0, there exists a quantum algorithm that estimates E[p(v_i; γ^t)] with absolute error τ in time T_ℓ.

Proof. We obtain the likelihood from the ability to compute the value of a Gaussian distribution and quantum arithmetic. Using the mapping of Lemma 3.2 with precision ε₁, we can compute φ(v_i|μ_j, Σ_j) for all the Gaussians, that is, the state |i⟩ ⊗_{j=0}^{k−1} |j⟩|p̄(v_i|j; γ)⟩. Then, knowing θ, and by using quantum arithmetic, we can compute in a register the mixture of Gaussians p(v_i; γ) = Σ_{j∈[k]} θ_j p(v_i|j; γ). We now drop the notation for the model γ and write p(v_i) instead of p(v_i; γ). Doing the previous calculations quantumly leads to the creation of the state |i⟩|p̄(v_i)⟩. We perform the mapping |i⟩|p̄(v_i)⟩ → |i⟩(√p̄(v_i)|0⟩ + √(1 − p̄(v_i))|1⟩) and estimate p(|0⟩) ≈ E[p(v_i)] with amplitude estimation on the ancilla qubit. To get a τ-estimate of p we need to decide the precision parameter used for estimating p̄(v_i|j; γ) and the precision required by amplitude estimation. Let p̄ be the estimate carrying the ε₁-error introduced by using Lemma 3.2, and p̂ the estimate carrying, in addition, the error introduced by amplitude estimation. Using the triangle inequality we have |p − p̂| ≤ |p̂ − p̄| + |p̄ − p| < τ. To have |p − p̂| < τ, we set ε₁ such that |p̄ − p| < τ/4, and we set the error in amplitude estimation and in the estimation of the probabilities to be τ/2. The runtime of this procedure is therefore the cost of the k Gaussian evaluations of Lemma 3.2, multiplied by the O(1/τ) overhead of amplitude estimation.

For the experiments, each utterance is preprocessed using a technique standard in the speech recognition community to extract the so-called Mel Frequency Cepstrum Coefficients (MFCC) features. We selected d = 40 features for each speaker. Then, each speaker is modeled with a mixture of 16 different Gaussians. The test set consists of another 5 unseen utterances of the same 34 speakers (i.e. the training set and the test set have the same size). The task is to correctly label the unseen utterances with the name of the correct speaker. This is done by testing each of the GMMs fitted during training against the new voice sample and selecting the model with the highest likelihood. Due to the differences in the speakers' audio data, the datasets V_1, …, V_34 consist of a variable number of points, ranging from n = 2000 to 4000.

In the vanilla configuration without thresholding, 169 among 170 utterances (5 utterances for each of the 34 speakers) were correctly labeled by EM with the ML estimate, while all the elements in the test set were correctly recognized using the Bayesian (MAP) framework. During training, we measured all the values of κ(Σ), κ(V), μ(V), μ(Σ). For almost all the GMMs fitted (choosing a diagonal covariance matrix), there is at least one Σ_j (among the 16 used to model a single speaker) with a bad condition number (up to circa 2500). Following previous work, we thresholded the matrices by discarding singular values smaller than a certain value m; practically, we discarded any singular value smaller than 2 × 10⁻². Thresholding the covariance matrices did not significantly impact the accuracy (the generalization error): with the MAP estimate, only one element is not correctly classified, while this number goes up to two in the case of the ML estimate. The results are in Table 1.
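Below is a minimal sketch of the experimental pipeline just described: one 16-component diagonal GMM per speaker over d = 40 MFCC features, with an unseen utterance labelled by the model of highest likelihood. The data-loading step is assumed to happen elsewhere; the scikit-learn calls are the standard GaussianMixture API mentioned in the text.

```python
from sklearn.mixture import GaussianMixture

def fit_speaker_models(train_frames, seed=0):
    """train_frames: dict speaker -> (n_i, 40) array of MFCC frames,
    with n_i ranging roughly from 2000 to 4000 per speaker."""
    models = {}
    for speaker, X in train_frames.items():
        gmm = GaussianMixture(n_components=16, covariance_type="diag",
                              random_state=seed)
        models[speaker] = gmm.fit(X)
    return models

def identify(models, utterance_frames):
    """Label an utterance with the speaker whose GMM assigns it the highest
    likelihood (GaussianMixture.score returns the mean log-likelihood)."""
    return max(models, key=lambda s: models[s].score(utterance_frames))
```

For diagonal covariances, one simple way to reproduce the thresholding described above would be to floor the entries of gmm.covariances_ at 2 × 10⁻² (updating the cached precision matrices accordingly); this is an interpretation of the procedure, not code from the paper.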
To check the resistance to noise, we perturbed each fitted GMM γ^t and then measured the variation in accuracy on the test set. For each model, the perturbation consists of three parts (sketched in code below). First, we add to θ a uniform vector (with randomly selected negative or positive entries) of ℓ₂ norm δ_θ = 0.035. Then we perturb each centroid μ_j with a vector of norm smaller than δ_μ = 0.5; in this error vector, the sign of each noise component is chosen randomly and the magnitude is sampled uniformly in the interval (0, δ_μ). Then we also perturb the diagonal matrices Σ_j with a vector of norm smaller than δ_μ√η, where η = 10. As we are using a diagonal GMM, this reduces to perturbing each singular value by some random noise upper bounded by ±δ_μ√η/√d, making sure that each of the singular values stays positive, as covariance matrices are SPD. Last, the matrices are thresholded. Since the representation of the model used by our software stores the Cholesky decomposition of the inverse, we worked with that representation. Notably, thresholding the Σ_j helps to mitigate the noise and regularizes the model. With these parameters, we were able to correctly label 167 utterances out of 170. We leave for future work further experiments with bigger and different types of datasets, or where the noise is also added during the training process. We used scikit-learn to run all the experiments.

Maximum likelihood is not the only way to estimate the parameters of a model, and in certain cases it might not even be the best one. For instance, in high-dimensional spaces it is quite common for ML estimates to overfit. Moreover, it is often the case that we have prior information on the distribution of the parameters, and we would like our models to take this information into account. These issues are often addressed using a Bayesian approach, i.e. by using a so-called maximum a posteriori (MAP) estimate of a model (Section 14.4.2.8). MAP estimates work by assuming the existence of a prior distribution over the parameters γ. The posterior distribution that we use as the objective function to maximize comes from Bayes' rule applied to the likelihood, which gives the posterior as a product of the likelihood and the prior, normalized by the evidence. More simply, we use Bayes' rule on the likelihood function, as p(γ; V) = p(V; γ)p(γ)/p(V). This allows us to treat the model γ as a random variable and to derive a MAP estimate from the ML estimate. Among the advantages of a MAP estimate over ML is that it avoids overfitting by having a kind of regularization effect on the model (Section 6.5). Another feature consists in injecting into a maximum likelihood model some external information, perhaps from domain experts. This advantage comes at the cost of requiring "good" prior information on the problem, which might be non-trivial. In terms of labelling, a MAP estimate corresponds to a hard assignment.
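As promised above, here is a sketch of the perturbation procedure used in the robustness test, assuming diagonal covariances stored as a (k, d) array and the stated parameters δ_θ = 0.035, δ_μ = 0.5, η = 10. The thresholding is rendered as a simple floor at 2 × 10⁻², and the renormalization of θ is a safeguard not spelled out in the text; all names are illustrative.

```python
import numpy as np

def signed_noise(dim, norm, rng):
    """Vector with random +/- signs and uniform magnitudes, rescaled to `norm`."""
    v = rng.choice([-1.0, 1.0], size=dim) * rng.uniform(0.0, 1.0, size=dim)
    return v * (norm / np.linalg.norm(v))

def perturb_diag_gmm(theta, mus, diag_sigmas, rng,
                     delta_theta=0.035, delta_mu=0.5, eta=10.0, thresh=2e-2):
    """theta: (k,), mus: (k, d), diag_sigmas: (k, d) diagonal covariances."""
    k, d = mus.shape
    # uniform vector with random signs and l2 norm delta_theta, then renormalize
    theta_p = theta + rng.choice([-1.0, 1.0], size=k) * (delta_theta / np.sqrt(k))
    theta_p = np.clip(theta_p, 1e-12, None)
    theta_p /= theta_p.sum()
    # perturb each centroid with a vector of norm sampled uniformly in (0, delta_mu)
    mus_p = mus + np.stack([signed_noise(d, rng.uniform(0.0, delta_mu), rng)
                            for _ in range(k)])
    # perturb each singular value by at most +/- delta_mu*sqrt(eta)/sqrt(d),
    # keep them positive (covariances are SPD) and threshold the small ones
    bound = delta_mu * np.sqrt(eta) / np.sqrt(d)
    sig_p = diag_sigmas + rng.uniform(-bound, bound, size=diag_sigmas.shape)
    sig_p = np.maximum(sig_p, thresh)
    return theta_p, mus_p, sig_p

# usage: rng = np.random.default_rng(0); perturb_diag_gmm(theta, mus, sig, rng)
```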
It's a quantum algorithm for Expectation-Maximization, and it's fast: the runtime depends only polylogarithmically on the number of elements in the dataset.
1,499
scitldr