source | source_labels | rouge_scores | paper_id | ic | target
---|---|---|---|---|---
sequence | sequence | sequence | stringlengths 9-11 | unknown | sequence
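Each row below pairs a list of source sentences from a paper with per-sentence binary labels, per-sentence ROUGE scores (presumably computed against the target summary), a paper id, a boolean flag, and a one-sentence target summary. A minimal sketch of how such records could be loaded and inspected is given here; it assumes the rows are also available as JSON lines with the field names above, and the file name records.jsonl is a placeholder rather than part of this dump.

```python
# Minimal sketch (assumed JSON-lines export of the rows below; "records.jsonl" is a placeholder).
# For each record, print the oracle sentence (label == 1) and the sentence with the highest ROUGE score.
import json

with open("records.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        sentences = rec["source"]        # list of sentences from the paper
        labels = rec["source_labels"]    # 0/1 per sentence; 1 marks the summary-like sentence
        scores = rec["rouge_scores"]     # one score per sentence, presumably vs. the target
        target = rec["target"][0]        # one-sentence summary (TLDR-style)
        oracle = [s for s, l in zip(sentences, labels) if l == 1]
        pairs = list(zip(scores, sentences))  # zip guards against any length mismatch
        best = max(pairs, key=lambda p: p[0])[1] if pairs else ""
        print(rec["paper_id"], "| oracle:", oracle[:1], "| best ROUGE:", best[:80], "| target:", target[:80])
```

The snippet only restates how the columns relate; nothing in the rows below depends on it.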
[
"Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. ",
"We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. ",
"The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. ",
"Our work directly addresses this issue. ",
"We propose Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. ",
"HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. ",
"Using HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. ",
"Our extensive empirical experimentation warrants the quality of the generated datasets. ",
"Along with this, we also provide a manually annotated dataset for benchmarking FgER systems.",
"In the literature, the problem of recognizing a handful of coarse-grained types such as person, location and organization has been extensively studied BID18 Sekine, 2007, Marrero et al., 2013] .",
"We term this as Coarse-grained Entity Recognition (CgER) task.",
"For CgER, there exist several datasets, including manually annotated datasets such as CoNLL BID28 ] and automatically generated datasets such as WP2 BID21 .",
"Manually constructing a dataset for FgER task is an expensive and time-consuming process as an entity mention could be assigned multiple types from a set of thousands of types.In recent years, one of the subproblems of FgER, the Fine Entity Categorization or Typing (Fine-ET) problem has received lots of attention particularly in expanding its type coverage from a handful of coarse-grained types to thousands of fine-grained types BID17 BID6 .",
"The primary driver for this rapid expansion is exploitation of cheap but fairly accurate annotations from Wikipedia and Freebase BID4 via the distant supervision process BID7 .",
"The Fine-ET problem assumes that the entity boundaries are provided by an oracle.We observe that the detection of entity mentions at the granularity of Fine-ET is a bottleneck.",
"The existing FgER systems, such as FIGER BID12 , follow a two-step approach in which the first step is to detect entity mentions and the second step is to categorize detected entity mentions.",
"For the entity detection, it is assumed that all the fine-categories are subtypes of the following four categories: person, location, organization and miscellaneous.",
"Thus, a model trained on the CoNLL dataset BID28 ] which is annotated with these types can be used for entity detection.",
"Our analysis indicates that in the context of FgER, this assumption is not a valid assumption.",
"As a face value, the miscellaneous type should ideally cover all entity types other than person, location, and organization.",
"However, it only covers 68% of the remaining types of the FIGER hierarchy and 42% of the TypeNet hierarchy.",
"Thus, the models trained using CoNLL data are highly likely to miss a significant portion of entity mentions relevant to automatic knowledge bases construction applications.Our work bridges this gap between entity detection and Fine-ET.",
"We propose to automatically construct a quality dataset suitable for the FgER, i.e, both Fine-ED and Fine-ET using the proposed HAnDS framework.",
"HAnDS is a three-stage pipelined framework wherein each stage different heuristics are used to combat the errors introduced via naively using distant supervision paradigm, including but not limited to the presence of large false negatives.",
"The heuristics are data-driven and use information provided by hyperlinks, alternate names of entities, and orthographic and morphological features of words.Using the HAnDS framework and the two popular type hierarchies available for Fine-ET, the FIGER type hierarchy BID12 and TypeNet BID17 , we automatically generated two corpora suitable for the FgER task.",
"The first corpus contains around 38 million entity mentions annotated with 118 entity types.",
"The second corpus contains around 46 million entity mentions annotated with 1115 entity types.",
"Our extensive intrinsic and extrinsic evaluation of the generated datasets warrants its quality.",
"As compared with existing automatically generated datasets, supervised learning models trained on our induced training datasets perform significantly better (approx 20 point improvement on micro-F1 score).",
"Along with the automatically generated dataset, we provide a manually annotated corpora of around thousand sentences annotated with 117 entity types for benchmarking of FgER models.",
"Our contributions are highlighted as follows:• We analyzed that existing practice of using models trained on CoNLL dataset has poor recall for entity detection in the Fine-ET setting, where the type set spans several diverse domains.",
"(Section 3)• We propose HAnDS framework, a heuristics allied with the distant supervision approach to automatically construct datasets suitable for FgER problem, i.e., both fine entity detection and fine entity typing.",
"(Section 4)• We establish the state-of-the-art baselines on our new manually annotated corpus, which covers 2.7 times more finer-entity types than the FIGER gold corpus, the current de facto FgER evaluation corpus.",
"(Section 5)The rest of the paper is organized as follows.",
"We describe the related work in Section 2, followed by a case study on entity detection problem in the Fine-ET setting, in Section 3.",
"Section 4 describes our proposed HAnDS framework, followed by empirical evaluation of the datasets in Section 5.",
"In Section 6 we conclude our work.",
"In this work, we initiate a push towards moving from CgER systems to FgER systems, i.e., from recognizing entities from a handful of types to thousands of types.",
"We propose the HAnDS framework to automatically construct quality training dataset for different variants of FgER tasks.",
"The two datasets constructed in our work along with the evaluation resource are currently the largest available training and testing dataset for the entity recognition problem.",
"They are backed with empirical experimentation to warrants the quality of the constructed corpora.The datasets generated in our work opens up two new research directions related to the entity recognition problem.",
"The first direction is towards an exploration of sequence labeling approaches in the setting of FgER, where each entity mention can have more than one type.",
"The existing state-of-the-art sequence labeling models for the CgER task, can not be directly applied in the FgER setting due to state space explosion in the multi-label setting.",
"The second direction is towards noise robust sequence labeling models, where some of the entity boundaries are incorrect.",
"For example, in our induced datasets, there are still entity detection errors, which are inevitable in any heuristic approach.",
"There has been some work explored in BID8 assuming that it is a priori known which tokens have noise.",
"This information is not available in our generated datasets.Additionally, the generated datasets are much richer in entity types compared to any existing entity recognition datasets.",
"For example, the generated dataset contains entities from several domains such as biomedical, finance, sports, products and entertainment.",
"In several downstream applications where NER is used on a text writing style different from Wikipedia, the generated dataset is a good candidate as a source dataset for transfer learning to improve domain-specific performance."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.22641508281230927,
0.25,
0.17391303181648254,
0.060606058686971664,
0.3404255211353302,
0.15686273574829102,
0.3928571343421936,
0.21621620655059814,
0.09999999403953552,
0.14814814925193787,
0.05714285373687744,
0.08695651590824127,
0.17283950746059418,
0.07692307233810425,
0.20408162474632263,
0.11320754140615463,
0.08510638028383255,
0.1666666567325592,
0.1463414579629898,
0.17777776718139648,
0.19999998807907104,
0.1355932205915451,
0.2916666567325592,
0.1355932205915451,
0.20588235557079315,
0.05128204822540283,
0.05128204822540283,
0.1538461446762085,
0.11764705181121826,
0.20408162474632263,
0.16393442451953888,
0.28070175647735596,
0.1428571343421936,
0.1111111044883728,
0.21739129722118378,
0.1904761791229248,
0,
0.35999998450279236,
0.2790697515010834,
0.07999999821186066,
0.145454540848732,
0.15686273574829102,
0.07999999821186066,
0.13636362552642822,
0,
0.04444443807005882,
0.1702127605676651,
0.045454539358615875,
0.1428571343421936
] | HylHE-9p6m | true | [
"We initiate a push towards building ER systems to recognize thousands of types by providing a method to automatically construct suitable datasets based on the type hierarchy. "
] |
[
"Implementing correct method invocation is an important task for software developers.",
"However, this is challenging work, since the structure of method invocation can be complicated.",
"In this paper, we propose InvocMap, a code completion tool allows developers to obtain an implementation of multiple method invocations from a list of method names inside code context.",
"InvocMap is able to predict the nested method invocations which their names didn’t appear in the list of input method names given by developers.",
"To achieve this, we analyze the Method Invocations by four levels of abstraction.",
"We build a Machine Translation engine to learn the mapping from the first level to the third level of abstraction of multiple method invocations, which only requires developers to manually add local variables from generated expression to get the final code.",
"We evaluate our proposed approach on six popular libraries: JDK, Android, GWT, Joda-Time, Hibernate, and Xstream.",
"With the training corpus of 2.86 million method invocations extracted from 1000 Java Github projects and the testing corpus extracted from 120 online forums code snippets, InvocMap achieves the accuracy rate up to 84 in F1- score depending on how much information of context provided along with method names, that shows its potential for auto code completion.",
"Writing code is a challenge for non-experienced software developers.",
"To write the code that implements a specific task in a programming language, developers need to remember the syntax of that language and be familiar with how to implement method invocations.",
"While the syntax of the language is easier to learn since it contains a permanent set of words in the vocabulary, implementing Method Invocations (MI)s is more challenging due to the following reasons.",
"First of all, developers need to remember the structure and the combination of invocations depending on their purpose.",
"Secondly, the implementation of method invocation is also depending on the surrounding context of the code.",
"Thus, the code developed by non-experience developers may be in the risks of being semantic error.",
"To help developers with interacting and analyzing by a given Java source code snippet, Java Development Tool (JDT) library defines a list of Abstract Syntax Tree (AST) Node types (Eclipse, 2019) .",
"With the list of these AST Node types, JDT is able to interact with the structure of each elements inside the source code.",
"MI, which is defined as sub-type of Expression, is one of the fundamental AST Nodes that developers need to implement.",
"MI has been used to make Application Programming Interface (API) calls from other libraries or from other methods inside a Java project.",
"The structure of a syntactically correct MI contains method name, receiver and the list of arguments which could be empty.",
"Since receiver and arguments are types of expression (Eclipse, 2019) , the structure of an MI could be complicated as a deep AST tree.",
"The reason for this issue is that expression can be composed by different types of AST Node including MI.",
"An example of a complicated MI is shown in Listing 1.",
"Within this Listing, the outside MI contains four nested MI in its implementation.",
"Additionally, there are five positions that requires local variables inside the expression.",
"Type casting to integer is embedded to this MI to provide a semantically correct MI.",
"This MI is used along with other calculated MIs inside the body of method, providing the a specific surrounding context for this MI.",
"Without doubt, the outer method name set is just one word while the respected MI is a deep AST tree.",
"The representation of MI also relies on code context.",
"Consider examples 2A and 2B on Listing 2 and Listing 3.",
"These Listings show the implementation of API android.content.Intent.getBooleanExtra().",
"Although 2 MIs share the same information about context of using the same local variable Intent and the false boolean literal, they are differ in the structure of AST.",
"Since the MI in Listing 2 associates with the action of add or remove an application package from an android device, the MI on Listing 3 associates with actions of network status checking.",
"The difference in contexts brings 2 MIs, which represents in 2 static Field Accesses Intent.EXTRA REPLACING and ConnectivityManager.EXTRA NO CONNECTIVITY.",
"Listing 1: Example in Android (2019a) 1 p u b l i c v o i d s e t O f f s e t s ( i n t n e w H o r i z o n t a l O f f s e t , i n t n e w V e r t i c a l O f f s e t ) { 2 .",
". .",
". . . 5 i n v a l i d a t e R e c t f . o f f s e t (− x o f f s e t , −y o f f s e t ) ; 6 i n v a l i d a t e R e c t . s e t ( ( i n t ) Math . f l o o r ( i n v a l i d a t e R e c t f . l e f t ) , ( i n t ) Math . f l o o r ( i n v a l i d a t e R e c t f . t o p ) , ( i n t ) Math . c e i l ( i n v a l i d a t e R e c t f . r i g h t ) , ( i n t ) Math . c e i l ( i n v a l i d a t e R e c t f . b o t t o m ) ) ; 7 . . .",
"Listing 2: Example 2A in Android (2019b) 1 p u b l i c v o i d o n R e c e i v e ( C o n t e x t c o n t e x t , I n t e n t i n t e n t ) { 2 . . . 3 i f ( ( I n t e n t .",
"ACTION PACKAGE REMOVED .",
"e q u a l s ( a c t i o n ) | | 4 I n t e n t .",
"ACTION PACKAGE 5",
"ADDED .",
"e q u a l s ( a c t i o n ) ) 6 && ! i n t e n t .",
"g e t B o o l e a n E x t r a ( I n t e n t . EXTRA REPLACING , f a l s e ) ) { 7 .",
". .",
"Listing 3: Example 2B in Android (2019c) 1 p u b l i c v o i d o n R e c e i v e ( C o n t e x t c o n t e x t , I n t e n t i n t e n t ) { 2 . . . 3 i f ( a c t i v e N e t w o r k == n u l l ) { 4 . . . 5 } e l s e i f ( a c t i v e N e t w o r k .",
"g e t T y p e ( ) == n e t w o r k T y p e ) { 6 mNetworkUnmetered = f a l s e ; 7 mNetworkConnected = ! i n t e n t .",
"g e t B o o l e a n E x t r a ( C o n n e c t i v i t y M a n a g e r .",
"EXTRA NO CONNECTIVITY , f a l s e ) ; 8 .",
". .",
"From the examples above, we recognize that implementing an effective method invocation requires strong background and experiences of developers.",
"Even two MIs that belong to the same API and share the same context of local variables and literal still have ambiguous in the way of implementation like Listing 2 and Listing 3.",
"These challenges hinders the ability of writing a appropriate MI and as well as developers need to spend time to remember or identify the correct structure of AST in MI for software development.",
"With this work, we want to tackle this problem by providing InvocMap, a code completion tool for helping developers to achieve the implementation of method invocation efficiently.",
"InvocMap accepts input as a sequence of method names inside the code environment of a method declaration, then produce the output as the list of ASTs as translation results for each input method names.",
"The generated ASTs will only require developers to input information about local variables and literals in order to obtain the complete code.",
"For instance, in Listing 2, developer can write the list of method names including the name getBooleanExtra.",
"The output for the suggestion will be #.getBooleanExtra",
"( Intent.EXTRA REPLACING,#), which can be completed manually by a variable of type android.content.Intent in the first \"#\" and a boolean literal in the second \"#\".",
"Statistical Machine Translation (SMT) is a well-known approach in Natural Language Processing (NLP) for translating between languages (Green et al., 2014) .",
"For taking advantage from SMT, we propose a direction of code completion for Method Invocation by a Statistical approach, which learn the translation from the abstract information of MIs to the their detail information, which are represented by AST with complicate structure.",
"First and foremost, we analyze the information inside a typical MI.",
"We divide the MI by four levels of abstraction.",
"We also define information of context for each MI which can help to predict the AST structure.",
"Next, we build an SMT engine specified for our work to infer from the very abstract layer of MI, means Method Name, to the third level of MI, which is an AST tree that requires to be fulfill by local variables and literals.",
"In order to evaluate our approach, we do experiments to check the accuracy of our code completion technique in two data sets collected from Github and from online forums.",
"Resources of this paper can be found in (InvocMap, 2019) .",
"This research has following contributions: 2.",
"Designing rules for extracting code tokens for representing abstract level and details level for various types of AST nodes.",
"3. Proposing an algorithm for visiting a method invocation inside the code environment to abstract and encode their structure in AST as an object for statistical learning.",
"4. Building a SMT system for learning from the context of code environment, including MIs from large scale Github high quality projects.",
"This SMT system is able to predict the sequences of AST structure given sequences of method name and context.",
"In this work, we proposed InvocMap, a SMT engine for inferring the ASTs of method invocations from a list of method names and code context.",
"By the evaluation on corpus collected from Github projects and online forums, we demonstrated the potential of our approach for auto code completion.",
"A major advantage of InvocMap is that it is built on the idea of abstracting method invocations by four different levels.",
"We provided an algorithm to achieve AST of method invocations for the method invocations inference.",
"As future works, we will work on extending the SMT model to support inputs from multiple natural language descriptions of multiple method invocations, along with investigation of machine learning techniques for improving the accuracy."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.1538461446762085,
0.2800000011920929,
0.17391303181648254,
0.31578946113586426,
0.24561403691768646,
0.09756097197532654,
0.21333332359790802,
0.1764705777168274,
0.23076923191547394,
0.19230768084526062,
0.1463414579629898,
0.21052631735801697,
0.14999999105930328,
0.18518517911434174,
0.13333332538604736,
0.09302324801683426,
0.13333332538604736,
0.1818181723356247,
0.1249999925494194,
0.1818181723356247,
0.1111111044883728,
0,
0,
0.10810810327529907,
0.17391303181648254,
0.1395348757505417,
0.11764705181121826,
0.05882352590560913,
0.0555555522441864,
0.08163265138864517,
0.07999999821186066,
0.045454539358615875,
0.03448275476694107,
0.0357142798602581,
0,
0,
0.04878048226237297,
0,
0.04878048226237297,
0.04347825422883034,
0.02985074184834957,
0.038461532443761826,
0.04651162400841713,
0.05405404791235924,
0.1818181723356247,
0.11764705181121826,
0.18867923319339752,
0.35999998450279236,
0.2083333283662796,
0.1304347813129425,
0.1463414579629898,
0.05882352590560913,
0.16326530277729034,
0.12765957415103912,
0.29999998211860657,
0.1111111044883728,
0.23529411852359772,
0.1428571343421936,
0.22580644488334656,
0.23529411852359772,
0.11428570747375488,
0.06451612710952759,
0.19512194395065308,
0.3199999928474426,
0.21739129722118378,
0.2857142686843872,
0.2978723347187042,
0.2978723347187042,
0.22727271914482117,
0.21052631735801697,
0.178571417927742
] | BJxOZ04Kvr | true | [
"This paper proposes a theory of classifying Method Invocations by different abstraction levels and conducting a statistical approach for code completion from method name to method invocation."
] |
[
"Adversaries in neural networks have drawn much attention since their first debut. \n",
"While most existing methods aim at deceiving image classification models into misclassification or crafting attacks for specific object instances in the object setection tasks, we focus on creating universal adversaries to fool object detectors and hide objects from the detectors. \n",
"The adversaries we examine are universal in three ways: \n",
"(1) They are not specific for specific object instances; \n",
"(2) They are image-independent; \n",
"(3) They can further transfer to different unknown models. \n",
"To achieve this, we propose two novel techniques to improve the transferability of the adversaries: \\textit{piling-up} and \\textit{monochromatization}. \n",
"Both techniques prove to simplify the patterns of generated adversaries, and ultimately result in higher transferability.",
"Despite the success of machine learning and deep learning models, recently it has been shown that these models are susceptible and sensitive to what is termed as adversarial examples, a.k.a. adversaries BID32 BID10 .",
"Adversaries are usually derived from ordinary data and retain the same semantic content, but can result in wrong predictions.",
"Previous studies have shown that adversarial examples can be crafted efficiently and successfully in some conditions, which poses significant security threats BID14 .",
"Formally speaking, given a model y = F (x), input X and original or ground-truth output Y = F (X), adversaries are modified versions of the original data, denoted as X + ∆X such that F (X + ∆X) = Y .",
"Generally, ∆X is constrained by its norm value (e.g. L ∞ ) or other metrics to preserve the original semantic meaning of input X.Existing studies on adversarial examples focus on (1) designing effective and efficient methods to craft ∆X, e.g. L-BFGS BID32 , FGSM BID10 , iterative methods BID13 ; (2) defense methods including defensive distillation BID24 , random transformation BID35 , JPEG-compression (Dziugaite et al., 2016) and etc.",
"; (3) how to improve the transferability of attacks crafted on one model to deceive another model, both for differently initialized and trained models, and models of different architecture BID19 BID23 BID33 BID34 .",
"Up till now, these efforts mainly focus on image classification models.More recent work has studied the robustness of object detectors and tried to fool these models BID21 BID3 BID6 BID16 a; BID28 .",
"However, most of these works only attack specific object instances.",
"Few proposed methods have attempted to attack multiple objects and images or verify the capacity to transfer to another model.In this work, we aim to craft universal and transferable adversaries to fool object detectors and conceal objects.",
"As far as we know, we are the first to carry out such large-scale attacks on object detectors.",
"Our target is three-fold: (1) The adversary should work for different objects, regardless of their types, positions, sizes, and etc..",
"(2) The adversary is not limited to one image only, i.e. achieving image-independence.",
"(3) The adversary should be able to attack detectors that they are not crafted on, i.e. achieving black-box attack.Specifically, we craft an adversarial mask of the same size as input image, denoted as ∆X ∈ [0, 1] Himage×Wimage×3 , and impose a norm-value constraint, ||∆X|| ∞ ≤ .",
"Such an adversarial mask is in fact similar to what the community has used to fool image classification models.",
"However, optimizing over it is a non-trivial task.",
"A full-sized mask would introduce a total amount of 0.5M parameters, putting our method on risk of overfitting.",
"Further, using the concept of Effective Receptive Field BID22 , we found that gradients obtained through back propagation are sparse in spatial positions, making optimization difficult.To achieve our objective, we propose to use the following techniques: (1) Optimizing ∆X over a set of images; (2) Using identical small patches that are piled-up to form the full-sized mask ∆X; (3) Crafting monochromatic masks instead of colorful ones as done in previous work.",
"Our motivation is that piling-up identical small patches in a grid can incorporate translation invariance in a similar way to Convolutional Neural Networks (CNNs), which is also connected with the intuition that any part of the mask should perform equally to attack an object in any position.",
"Constraining the adversarial mask to monochrome further forces the mask to learn coarse-grained patterns that may be universal.In experiments, we compare with decent baseline methods and found that our methods can consistently surpasses them.",
"While our adversarial mask can conceal as many as 80% objects from YOLO V3 BID25 , on which it is crafted, it can also hide more than 40% objects from the eyes of Faster-RCNN BID27 , in a black-box setting.",
"Further, we compare the patterns generated by different methods and carry out detailed analysis.",
"We found that our techniques did help in crafting more coarse-grained patterns.",
"These patterns have generic appearance, which we attribute as the key for good transferability.In conclusion, we make the following contributions in this work: (1) We successfully craft universal adversarial mask that can fool object detectors that are independent in object-level, image-level and model-level.",
"(2) We show that, with the proposed techniques, we can learn and generate masks that have generic and coarse-grained patterns.",
"The pattern we generate is different from those in previous works by large, which may be the key for better transferability.",
"In this section, we visually evaluate how the two techniques play their role in improving transferability.",
"Especially, we discuss about how pile-up helps in significantly improve transferability, as is shown in the experiments.",
"Further, we study a strong method for comparison to provide further insight into adversaries for object detectors."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06896550953388214,
0.5660377144813538,
0.1599999964237213,
0.0833333283662796,
0,
0.1538461446762085,
0.23529411852359772,
0.1875,
0.1666666567325592,
0.17142856121063232,
0.052631575614213943,
0.12244897335767746,
0.1265822798013687,
0.17391303181648254,
0.3404255211353302,
0.07692307233810425,
0.3829787075519562,
0.3030303120613098,
0.0555555522441864,
0.06666666269302368,
0.1269841194152832,
0.1764705777168274,
0,
0.05882352590560913,
0.05063290894031525,
0.1090909019112587,
0.17391303181648254,
0.20000000298023224,
0.13333332538604736,
0.0714285671710968,
0.25,
0.17142856121063232,
0.10810810327529907,
0.0624999962747097,
0.0624999962747097,
0.25
] | H1Gnx2CqKQ | true | [
"We focus on creating universal adversaries to fool object detectors and hide objects from the detectors. "
] |
[
"This work presents the Poincaré Wasserstein Autoencoder, a reformulation of\n",
"the recently proposed Wasserstein autoencoder framework on a non-Euclidean\n",
"manifold, the Poincaré ball model of the hyperbolic space H n .",
"By assuming the\n",
"latent space to be hyperbolic, we can use its intrinsic hierarchy to impose structure\n",
"on the learned latent space representations.",
"We show that for datasets with latent\n",
"hierarchies, we can recover the structure in a low-dimensional latent space.",
"We\n",
"also demonstrate the model in the visual domain to analyze some of its properties\n",
"and show competitive results on a graph link prediction task.",
"Variational Autoencoders (VAE) (17; 28) are an established class of unsupervised machine learning models, which make use of amortized approximate inference to parametrize the otherwise intractable posterior distribution.",
"They provide an elegant, theoretically sound generative model used in various data domains.",
"Typically, the latent variables are assumed to follow a Gaussian standard prior, a formulation which allows for a closed form evidence lower bound formula and is easy to sample from.",
"However, this constraint on the generative process can be limiting.",
"Real world datasets often possess a notion of structure such as object hierarchies within images or implicit graphs.",
"This notion is often reflected in the interdependence of latent generative factors or multimodality of the latent code distribution.",
"The standard VAE posterior parametrizes a unimodal distribution which does not allow structural assumptions.",
"Attempts at resolving this limitation have been made by either \"upgrading\" the posterior to be more expressive (27) or imposing structure by using various structured priors (34) , (36) .",
"Furthermore, the explicit treatment of the latent space as a Riemannian manifold has been considered.",
"For instance, the authors of (5) show that the standard VAE framework fails to model data with a latent spherical structure and propose to use a hyperspherical latent space to alleviate this problem.",
"Similarly, we believe that for datasets with a latent tree-like structure, using a hyperbolic latent space, which imbues the latent codes with a notion of hierarchy, is beneficial.",
"There has recently been a number of works which explicitly make use of properties of non-Euclidean geometry in order to perform machine learning tasks.",
"The use of hyperbolic spaces in particular has been shown to yield improved results on datasets which either present a hierarchical tree-like structure such as word ontologies (24) or feature some form of partial ordering (4) .",
"However, most of these approaches have solely considered deterministic hyperbolic embeddings.",
"In this work, we propose the Poincaré Wasserstein Autoencoder (PWA), a Wasserstein autoencoder (33) model which parametrizes a Gaussian distribution in the Poincaré ball model of the hyperbolic space H n .",
"By treating the latent space as a Riemannian manifold with constant negative curvature, we can use the norm ranking property of hyperbolic spaces to impose a notion of hierarchy on the latent space representation, which is better suited for applications where the dataset is hypothesized to possess a latent hierarchy.",
"We demonstrate this aspect on a synthetic dataset and evaluate it using a distortion measure for Euclidean and hyperbolic spaces.",
"We derive a closed form definition of a Gaussian distribution in hyperbolic space H n and sampling procedures for the prior and posterior distributions, which are matched using the Maximum Mean Discrepancy (MMD) objective.",
"We also compare the PWA to the Euclidean VAE visually on an MNIST digit generation task as well quantitatively on a semi-supervised link prediction task.",
"The rest of this paper is structured as follows: we review related work in Section 2, give an overview of the mathematical tools required to work with Riemannian manifolds as well as define the notion of probability distributions on Riemannian manifolds in Section 3.",
"Section 4 describes the model architecture as well as the intuition behind the Wasserstein autoencoder approach.",
"Furthermore, we derive a method to obtain samples from prior and posterior distributions in order to estimate the PWA objective.",
"We present the performed experiments in and discuss the observed results in Section 5 and a summary of our results in Section 6.",
"We have presented an algorithm to perform amortized variational inference on the Poincaré ball model of the hyperbolic space.",
"The underlying geometry of the hyperbolic space allows for an improved performance on tasks which exhibit a partially hierarchical structure.",
"We have discovered certain issues related to the use of the MMD metric in hyperbolic space.",
"Future work will aim to circumvent these issues as well as extend the current results.",
"In particular, we hope to demonstrate the capabilities of our model on more tasks hypothesized to have a latent hyperbolic manifold and explore this technique for mixed curvature settings.",
"A PRIOR REJECTION SAMPLING H (r|0, 1) Result: n samples from prior p(z) while i < n do sampleφ ∼ N (0, I d ); compute direction on the unit sphereφ =φ ||φ|| ; sample u ∼ U(0, 1); get uniform radius samples r i ∈ [0, r max ] via ratio of hyperspheres;",
"where erfc is the complementary error function."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.13333332538604736,
0.25,
0,
0.21052631735801697,
0.3333333134651184,
0.307692289352417,
0.23529411852359772,
0,
0,
0,
0,
0.060606058686971664,
0,
0,
0.09090908616781235,
0,
0,
0.20000000298023224,
0.1764705926179886,
0.20689654350280762,
0,
0.04878048598766327,
0.11764705181121826,
0.25806450843811035,
0.1818181723356247,
0.0833333283662796,
0.10810810327529907,
0,
0.04999999701976776,
0.10526315122842789,
0,
0,
0.1666666567325592,
0.1538461446762085,
0.1904761791229248,
0,
0.11764705926179886,
0,
0
] | BJgLpaEtDS | true | [
"Wasserstein Autoencoder with hyperbolic latent space"
] |
[
"In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference.",
"Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN.",
"Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions.",
"We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection.",
"Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.",
"Recent achievements of deep neural networks (DNNs) make them an attractive choice in many computer vision applications including image classification BID6 and object detection BID9 .",
"The memory and computations required for DNNs can be excessive for low-power deployments.",
"In this paper, we explore the task of minimizing the memory footprint of DNN feature maps during inference and, more specifically, finding a network architecture that uses minimal storage without introducing a considerable amount of additional computations or on-the-fly heuristic encoding-decoding schemes.",
"In general, the task of feature map compression is tightly connected to an input sparsity.",
"The input sparsity can determine several different usage scenarios.",
"This may lead to substantial decrease in memory requirements and overall inference complexity.",
"First, a pen sketches are spatially sparse and can be processed efficiently by recently introduced submanifold sparse CNNs BID4 .",
"Second, surveillance cameras with mostly static input contain temporal sparsity that can be addressed by Sigma-Delta networks BID15 .",
"A more general scenario presumes a dense input e.g. video frames from a high-resolution camera mounted on a moving autonomous car.",
"In this work, we address the latter scenario and concentrate on feature map compression in order to minimize memory footprint and bandwidth during DNN inference which might be prohibitive for high-resolution cameras.We propose a method to convert intermediate fixed-point feature map activations into vectors over the smallest finite field called the Galois field of two elements (GF(2)) or, simply, binary vectors followed by compression convolutional layers using a nonlinear dimensionality reduction (NDR) technique embedded into DNN architecture.",
"The compressed feature maps can then be projected back to a higher cardinality representation over a fixed-point (integer) field using decompression convolutional layers.",
"Using a layer fusion method, only the compressed feature maps need to be kept for inference while adding only computationally inexpensive bitwise operations.",
"Compression and decompression layers over GF(2) can be repeated within the proposed network architecture and trained in an end-to-end fashion.",
"In brief, the proposed method resembles autoencoder-type BID7 structures embedded into a base network that work over GF(2).",
"Binary conversion and compression-decompression layers are implemented in the Caffe BID12 framework and publicly available 1 .The",
"rest of the paper is organized as follows. Section",
"2 reviews related work. Section",
"3 gives notation for convolutional layers, describes conventional fusion and NDR methods, and explains the proposed method including details about network training and the derived architecture. Section",
"4 presents experimental results on ImageNet classification and PASCAL VOC object detection using SSD BID13 , memory requirements, and obtained compression rates.",
"We introduced a method to decrease memory storage and bandwidth requirements for DNNs.",
"Complementary to conventional approaches that use fused layer computation and quantization, we presented an end-to-end method for learning feature map representations over GF(2) within DNNs.",
"Such a binary representation allowed us to compress network feature maps in a higher-dimensional space using autoencoder-inspired layers embedded into a DNN along channel and spatial dimensions.",
"These compression-decompression layers can be implemented using conventional convolutional layers with bitwise operations.",
"To be more precise, the proposed representation traded cardinality of the finite field with the dimensionality of the vector space which makes possible to learn features at the binary level.",
"The evaluated compression strategy for inference can be adopted for GPUs, CPUs or custom accelerators.",
"Alternatively, existing binary networks can be extended to achieve higher accuracy for emerging applications such as object detection and others."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08888888359069824,
0.5306122303009033,
0.09302324801683426,
0,
0.04444443807005882,
0,
0,
0.10344827175140381,
0.11428570747375488,
0,
0,
0.10526315122842789,
0.10526315122842789,
0.04999999329447746,
0.3720930218696594,
0.0952380895614624,
0.0476190410554409,
0.051282044500112534,
0.2631579041481018,
0.0555555522441864,
0,
0,
0.045454539358615875,
0.04878048226237297,
0.12121211737394333,
0.13333332538604736,
0.2666666507720947,
0.0624999962747097,
0.08888888359069824,
0.05882352590560913,
0.04999999329447746
] | SJmAXkgCb | true | [
"Feature map compression method that converts quantized activations into binary vectors followed by nonlinear dimensionality reduction layers embedded into a DNN"
] |
[
"Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization. ",
"The expense of producing these examples during training often precludes adversarial training from use on complex image datasets. \n",
"In this study, we explore the mechanisms by which adversarial training improves classifier robustness, and show that these mechanisms can be effectively mimicked using simple regularization methods, including label smoothing and logit squeezing. \n",
"Remarkably, using these simple regularization methods in combination with Gaussian noise injection, we are able to achieve strong adversarial robustness -- often exceeding that of adversarial training -- using no adversarial examples.",
"Deep Neural Networks (DNNs) have enjoyed great success in many areas of computer vision, such as classification BID7 , object detection BID4 , and face recognition BID11 .",
"However, the existence of adversarial examples has raised concerns about the security of computer vision systems BID16 BID1 .",
"For example, an attacker may cause a system to mistake a stop sign for another object BID3 or mistake one person for another BID14 .",
"To address security concerns for high-stakes applications, researchers are searching for ways to make models more robust to attacks.Many defenses have been proposed to combat adversarial examples.",
"Approaches such as feature squeezing, denoising, and encoding BID19 BID13 BID15 BID10 have had some success at pre-processing images to remove adversarial perturbations.",
"Other approaches focus on hardening neural classifiers to reduce adversarial susceptibility.",
"This includes specialized non-linearities BID20 , modified training processes BID12 , and gradient obfuscation BID0 .Despite",
"all of these innovations, adversarial training BID5 , one of the earliest defenses, still remains among the most effective and popular strategies. In its",
"simplest form, adversarial training minimizes a loss function that measures performance of the model on both clean and adversarial data as follows DISPLAYFORM0 where L is a standard (cross entropy) loss function, (x i , y i ) is an input image/label pair, θ contains the classifier's trainable parameters, κ is a hyper-parameter, and x i,adv is an adversarial example for image x. BID9 pose",
"adversarial training as a game between two players that similarly requires computing adversarial examples on each iteration.A key drawback to adversarial training methods is their computational cost; after every mini-batch of training data is formed, a batch of adversarial examples must be produced. To train",
"a network that resists strong attacks, one needs to train with the strongest adversarial examples possible. For example",
", networks hardened against the inexpensive Fast Gradient Sign Method (FGSM, Goodfellow et al. (2014) ) can be broken by a simple two-stage attack BID17 . Current state-of-theart",
"adversarial training results on MNIST and CIFAR-10 use expensive iterative adversaries BID9 , such as the Projected Gradient Descent (PGD) method, or the closely related Basic Iterative Method (BIM) BID8 . Adversarial training using",
"strong attacks may be 10-100 times more time consuming than standard training methods. This prohibitive cost makes",
"it difficult to scale adversarial training to larger datasets and higher resolutions.In this study, we show that it is possible to achieve strong robustness -comparable to or greater than the robustness of adversarial training with a strong iterative attack -using fast optimization without adversarial examples. We achieve this using standard",
"regularization methods, such as label smoothing BID18 and the more recently proposed logit squeezing BID6 . While it has been known for some",
"time that these tricks can improve the robustness of models, we observe that an aggressive application of these inexpensive tricks, combined with random Gaussian noise, are enough to match or even surpass the performance of adversarial training on some datasets. For example, using only label smoothing",
"and augmentation with random Gaussian noise, we produce a CIFAR-10 classifier that achieves over 73% accuracy against black-box iterative attacks, compared to 64% for a state-of-the-art adversarially trained classifier BID9 . In the white-box case, classifiers trained",
"with logit squeezing and label smoothing get ≈ 50% accuracy on iterative attacks in comparison to ≈ 47% for adversarial training. Regularized networks without adversarial training",
"are also more robust against non-iterative attacks, and more accurate on non-adversarial examples.Our goal is not just to demonstrate these defenses, but also to dig deep into what adversarial training does, and how it compares to less expensive regularization-based defenses. We begin by dissecting adversarial training, and",
"examining ways in which it achieves robustness. We then discuss label smoothing and logit squeezing",
"regularizers, and how their effects compare to those of adversarial training. We then turn our attention to random Gaussian data",
"augmentation, and explore the importance of this technique for adversarial robustness. Finally, we combine the regularization methods with",
"random Gaussian augmentation, and experimentally compare the robustness achievable using these simple methods to that achievable using adversarial training.",
"We studied the robustness of adversarial training, label smoothing, and logit squeezing through a linear approximation L that relates the magnitude of adversarial perturbations to the logit gap and the difference between the adversarial directions for different labels.",
"Using this simple model, we observe how adversarial training achieves robustness and try to imitate this robustness using label smoothing and logit squeezing.",
"The resulting methods perform well on MNIST, and can get results on CIFAR-10 and CIFAR-100 that can excel over adversarial training in both robustness and accuracy on clean examples.",
"By demonstrating the effectiveness of these simple regularization methods, we hope this work can help make robust training easier and more accessible to practitioners."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.23529411852359772,
0.2857142686843872,
0.0952380895614624,
0.31578946113586426,
0,
0.1538461446762085,
0.06666666269302368,
0.17142856121063232,
0.12121211737394333,
0.2857142686843872,
0.07999999821186066,
0.1249999925494194,
0.09677419066429138,
0.21276594698429108,
0.2857142686843872,
0,
0.1463414579629898,
0.14814814925193787,
0.2800000011920929,
0,
0.19607843458652496,
0.04651162400841713,
0.3030303120613098,
0.19607843458652496,
0.07999999821186066,
0.20689654350280762,
0.14814814925193787,
0.29629629850387573,
0.1538461446762085,
0.2666666507720947,
0.29411762952804565,
0.11764705181121826
] | BJlr0j0ctX | true | [
"Achieving strong adversarial robustness comparable to adversarial training without training on adversarial examples"
] |
[
"A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training.",
"Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect.",
"In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. ",
"Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.",
"Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities.",
"We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques.",
"We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities.",
"It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"Supervised learning has proven extremely effective for many problems where large amounts of labeled training data are available.",
"There is a common hope that unsupervised learning will prove similarly powerful in situations where labels are expensive, impractical to collect, or where the prediction target is unknown during training.",
"Unsupervised learning however has yet to fulfill this promise.",
"One explanation for this failure is that unsupervised representation learning algorithms are typically mismatched to the target task.",
"Ideally, learned representations should linearly expose high level attributes of data (e.g. object identity) and perform well in semi-supervised settings.",
"Many current unsupervised objectives, however, optimize for objectives such as log-likelihood of a generative model or reconstruction error, producing useful representations only as a side effect.Unsupervised representation learning seems uniquely suited for meta-learning BID0 Schmidhuber, 1995) .",
"Unlike most tasks where meta-learning is applied, unsupervised learning does not define an explicit objective, which makes it impossible to phrase the task as a standard optimization problem.",
"It is possible, however, to directly express a meta-objective that captures the quality of representations produced by an unsupervised update rule by evaluating the usefulness of the representation for candidate tasks.",
"In this work, we propose to meta-learn an unsupervised update rule by meta-training on a meta-objective that directly optimizes the utility of the unsupervised representation.",
"Unlike hand-designed unsupervised learning rules, this meta-objective directly targets the usefulness of a representation generated from unlabeled data for later supervised tasks.By recasting unsupervised representation learning as meta-learning, we treat the creation of the unsupervised update rule as a transfer learning problem.",
"Instead of learning transferable features, we learn a transferable learning rule which does not require access to labels and generalizes across both data domains and neural network architectures.",
"Although we focus on the meta-objective of semi-supervised classification here, in principle a learning rule could be optimized to generate representations for any subsequent task.",
"In this work we meta-learn an unsupervised representation learning update rule.",
"We show performance that matches or exceeds existing unsupervised learning on held out tasks.",
"Additionally, the update rule can train models of varying widths, depths, and activation functions.",
"More broadly, we demonstrate an application of meta-learning for learning complex optimization tasks where no objective is explicitly defined.",
"Analogously to how increased data and compute have powered supervised learning, we believe this work is a proof of principle that the same can be done with algorithm design-replacing hand designed techniques with architectures designed for learning and learned from data via metalearning.Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei.",
"On the optimization of a synaptic learning rule.In Figure App.1: Schematic for meta-learning an unsupervised learning algorithm.",
"We show the hierarchical nature of both the meta-training procedure and update rule.",
"a) Meta-training, where the meta-parameters, θ, are updated via our meta-optimizer (SGD).",
"b) The gradients of the MetaObjective with respect to θ are computed by backpropagation through the unrolled application of the UnsupervisedUpdate.",
"c) UnsupervisedUpdate updates the base model parameters (φ) using a minibatch of unlabeled data.",
"d) Each application of UnsupervisedUpdate involves computing a forward and \"backward\" pass through the base model.",
"The base model itself is a fully connected network producing hidden states x l for each layer l.",
"The \"backward\" pass through the base model uses an error signal from the layer above, δ, which is generated by a meta-learned function.",
"e.",
") The weight updates ∆φ are computed using a convolutional network, using δ and x from the pre-and post-synaptic neurons, along with several other terms discussed in the text."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3333333432674408,
0.23728813230991364,
0.3214285671710968,
0.37735849618911743,
0.178571417927742,
0.3333333432674408,
0.23999999463558197,
0.15686273574829102,
0.08163265138864517,
0.16949151456356049,
0.14999999105930328,
0.20408162474632263,
0.11538460850715637,
0.1846153736114502,
0.20338982343673706,
0.27586206793785095,
0.29629629850387573,
0.28125,
0.25,
0.25,
0.2380952388048172,
0.31111109256744385,
0.08888888359069824,
0.19999998807907104,
0.2857142686843872,
0.2448979616165161,
0.1860465109348297,
0,
0.08163265138864517,
0.08888888359069824,
0.12765957415103912,
0.0416666604578495,
0.11320754140615463,
0.10344827175140381
] | HkNDsiC9KQ | true | [
"We learn an unsupervised learning algorithm that produces useful representations from a set of supervised tasks. At test-time, we apply this algorithm to new tasks without any supervision and show performance comparable to a VAE."
] |
[
"There is significant recent evidence in supervised learning that, in the over-parametrized setting, wider networks achieve better test error.",
"In other words, the bias-variance tradeoff is not directly observable when increasing network width arbitrarily.",
"We investigate whether a corresponding phenomenon is present in reinforcement learning.",
"We experiment on four OpenAI Gym environments, increasing the width of the value and policy networks beyond their prescribed values.",
"Our empirical results lend support to this hypothesis.",
"However, tuning the hyperparameters of each network width separately remains as important future work in environments/algorithms where the optimal hyperparameters vary noticably across widths, confounding the results when the same hyperparameters are used for all widths.",
"A longstanding notion in supervised learning is that, as model complexity increases, test error decreases initially and, eventually, increases again.",
"Intuitively, the idea is that as the size of your hypothesis class grows, the closer you can approximate the ground-truth function with some function in your hypothesis class.",
"At the same time, the larger amount of functions to choose from in your hypothesis class leads to higher estimation error (overfitting) from fitting the finite data sample too closely.",
"This is the essential bias-variance tradeoff in supervised learning.",
"We discuss these tradeoffs in more depth in Section 2.2.However, BID20 found that increasing the width of a single hidden layer neural network leads to decreasing test error on MNIST and CIFAR-10.",
"Since then, there has been a large amount of evidence that wider networks generalize better in a variety of different architectures and hyperparameter settings BID27 BID21 BID15 BID19 BID0 BID24 BID17 , once in the over-parametrized setting BID24 BID0 .",
"In other words, the biasvariance tradeoff is not observed in this over-parametrized setting, as network width grows BID19 .How",
"far can we inductively infer from this? Is",
"this phenomenon also present in deep reinforcement learning or do we eventually see a degradation in performance as we increase network width? In",
"this paper, we present preliminary evidence that this phenomenon is also present in reinforcement learning. For",
"example, using default hyperparameters, we can already see performance increase well past the default width that is commonly used (64) in FIG0 . We",
"test the hypothesis that wider networks (both policy and value) perform monotonically better than their smaller counterparts in policy gradients methods. Of",
"course, we will hit diminishing returns as the network width gets very large, but this is very different from the competing hypothesis that larger networks will overfit more.",
"The phenomenon in supervised learning that motivated this work is that, in the over-parametrized setting, increasing network width leads to monotonically lower test error (no U curve).",
"We find a fair amount of evidence of this phenomenon extending to reinforcement learning in our preliminary experiments (namely CartPole, Acrobot, and Pendulum).However",
", we also saw that performance did consistently degrade in the MountainCar experiments. We believe",
"this to be because that environment is more sensitive to hyperparameters; since the hyperparameters were chosen using width 64 and then used for all of the other widths, the hyperparameters are likely not optimal for the other widths like they are for width 64. The MountainCar",
"environment exaggerates this lack suboptimality more than the other 3 environments.The main next experiments we plan to run will use an automated tuning procedure that chooses the hyperparameters for each width individually. We believe this",
"protocol will yield MountainCar results that look much more like the CartPole and Acrobot results. We then plan to",
"replicate these findings across more learning algorithms and more environments."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.06666666269302368,
0.23076923191547394,
0.05882352590560913,
0.08695651590824127,
0.1304347813129425,
0.22857142984867096,
0.1111111044883728,
0.09756097197532654,
0.25,
0.12765957415103912,
0.04081632196903229,
0.1764705777168274,
0,
0.277777761220932,
0.20689654350280762,
0.10810810327529907,
0.0555555522441864,
0.09999999403953552,
0.24390242993831635,
0.21052631735801697,
0.06666666269302368,
0.07999999821186066,
0.08163265138864517,
0.060606054961681366,
0.0833333283662796
] | SkeMPEH32N | true | [
"Over-parametrization in width seems to help in deep reinforcement learning, just as it does in supervised learning."
] |
[
"Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. ",
"In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. ",
"To this end, we present a hierarchical model and a training method (CZ-GEM) that leverages some of the recent developments in likelihood-based and likelihood-free generative models. ",
"We show that by formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled.",
"This is achieved without compromising on the quality of the generated samples.",
"Our approach is simple, general, and can be applied both in supervised and unsupervised settings.",
"Consider the following scenario: a hunter-gatherer walking in the African Savannah some 50,000 years ago notices a lioness sprinting out of the bush towards her.",
"In a split second, billions of photons reaching her retinas carrying an enormous amount of information: the shade of the lioness' fur, the angle of its tail, the appearance of every bush in her field of view, the mountains in the background and the clouds in the sky.",
"Yet at this point there is a very small number of attributes which are of importance: the type of the charging animal, its approximate velocity and its location.",
"The rest are just details.",
"The significance of the concept that the world, despite its complexity, can be described by a few explanatory factors of variation, while ignoring the small details, cannot be overestimated.",
"In machine learning there is a large body of work aiming to extract low-dimensional, interpretable representations of complex, often visual, data.",
"Interestingly, many of the works in this area are associated with developing generative models.",
"The intuition is that if a model can generate a good approximation of the data then it must have learned something about its underlying representation.",
"This representation can then be extracted either by directly inverting the generative process (Srivastava et al., 2019b) or by extracting intermediate representations of the model itself (Kingma & Welling, 2014; Higgins et al., 2017) .",
"Clearly, just learning a representation, even if it is low-dimensional, is not enough.",
"The reason is that while there could be many ways to compress the information captured in the data, allowing good enough approximations, there is no reason to a priori assume that such a representation is interpretable and disentangled in the sense that by manipulating certain dimensions of the representation one can control attributes of choice, say the pose of a face, while keeping other attributes unchanged.",
"The large body of work on learning disentangled representations tackles this problem in several settings; fully supervised, weakly supervised and unsupervised, depending on the available data (Tran et al., 2018; Reed et al., 2014; Jha et al., 2018; Mathieu et al., 2016; Higgins et al., 2017; Kim & Mnih, 2018; Nguyen-Phuoc et al., 2019; Narayanaswamy et al., 2017) .",
"Ideally, we would like to come up with an unsupervised generative model that can generate samples which approximate the data to a high level of accuracy while also giving rise to a disentangled and interpretable representation.",
"In the last decade two main approaches have captured most of the attention; Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs).",
"In their original versions, both GANs (Goodfellow et al., 2014) and VAEs (Kingma & Welling, 2014) were trained in an unsupervised manner and",
"(a) Chair rotation generated by CGAN",
"(b) Chair rotation generated by CZ-GEM Figure 1 : Changing the azimuth of chairs in CGAN and CZ-GEM while holding Z constant.",
"Unlike CZ-GEM, C and Z are clearly entangled in CGAN as changing C also changes the type of chair even though Z is held constant.",
"gave rise to entangled representations.",
"Over the years, many methods to improve the quality of the generated data as well as the disentanglement of the representations have been suggested (Brock et al., 2018; Kingma & Dhariwal, 2018; Nguyen-Phuoc et al., 2019; Jeon et al., 2018) .",
"By and large, GANs are better than VAEs in the quality of the generated data while VAEs learn better disentangled representations, in particular in the unsupervised setting.",
"In this paper, we present a framework for disentangling a small number of control variables from the rest of the latent space which accounts for all the additional details, while maintaining a high quality of the generated data.",
"We do that by combining VAE and GAN approaches thus enjoying the best of both worlds.",
"The framework is general and works in both the supervised and unsupervised settings.",
"Let us start with the supervised case.",
"We are provided with paired examples (x, c) where x is the observation and c is a control variate.",
"Crucially, there exists a one-to-many map from c to the space of observations, and there are other unobserved attributes z (or noise) that together completely define x.",
"For instance, if x were an image of a single object, c controls the orientation of the object relative to the camera and z could represent object identity, texture or background.",
"Our goal is to learn a generative model p θ (x|c, z) that fulfills two criteria:",
"If we were learning models of images, we would like the generated images to look realistic and match the true conditional distribution p(x|c).",
"We present a simple yet effective method of learning representations in deep generative models in the setting where the observation is determined by control variate C and noise variate Z. Our method ensures that in the learned representation both C and Z are disentangled as well as the components of C themselves.",
"This is done without compromising the quality of the generated samples.",
"In future work, we would like to explore how this method can be applied to input with multiple objects."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3243243098258972,
0.17777776718139648,
0.3255814015865326,
0.1904761791229248,
0.3448275923728943,
0.0624999962747097,
0.14999999105930328,
0.15686273574829102,
0.1904761791229248,
0,
0.1860465109348297,
0.15789473056793213,
0.1875,
0.3333333432674408,
0.20408162474632263,
0.06666666269302368,
0.2153846174478531,
0.16129031777381897,
0.3921568691730499,
0.15789473056793213,
0.04999999329447746,
0,
0.1538461446762085,
0.1463414579629898,
0,
0.1666666567325592,
0.307692289352417,
0.2083333283662796,
0.29411762952804565,
0.13333332538604736,
0.07999999821186066,
0.1666666567325592,
0.22727271914482117,
0.17777776718139648,
0.23529411852359772,
0.1538461446762085,
0.28070175647735596,
0.3571428656578064,
0
] | r1e74a4twH | true | [
"Hierarchical generative model (hybrid of VAE and GAN) that learns a disentangled representation of data without compromising the generative quality."
] |
[
"Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER).",
"However, this typically requires large amounts of labeled data.",
"In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.",
"While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining.",
"To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short term memory (LSTM) tag decoder.",
"The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than best performing models.",
"We carry out incremental active learning, during the training process, and are able to nearly match state-of-the-art performance with just 25\\% of the original training data.",
"Over the past few years, papers applying deep neural networks (DNNs) to the task of named entity recognition (NER) have successively advanced the state-of-the-art BID7 BID17 BID24 BID6 BID48 .",
"However, under typical training procedures, the advantages of deep learning diminish when working with small datasets.",
"For instance, on the OntoNotes-5.0 English dataset, whose training set contains 1,088,503 words, a DNN model outperforms the best shallow model by 2.24% as measured by F1 score BID6 .",
"However, on the comparatively small CoNLL-2003 English dataset, whose training set contains 203,621 words, the best DNN model enjoys only a 0.4% advantage.",
"To make deep learning more broadly useful, it is crucial to reduce its training data requirements.Generally, the annotation budget for labeling is far less than the total number of available (unlabeled) samples.",
"For NER, getting unlabeled data is practically free, owing to the large amount of content that can be efficiently scraped off the web.",
"On the other hand, it is especially expensive to obtain annotated data for NER since it requires multi-stage pipelines with sufficiently well-trained annotators BID19 BID5 .",
"In such cases, active learning offers a promising approach to efficiently select the set of samples for labeling.",
"Unlike the supervised learning setting, in which examples are drawn and labeled at random, in the active learning setting, the algorithm can choose which examples to label.Active learning aims to select a more informative set of examples in contrast to supervised learning, which is trained on a set of randomly drawn examples.",
"A central challenge in active learning is to determine what constitutes more informative and how the active learner can recognize this based on what it already knows.",
"The most common approach is uncertainty sampling, in which the model preferentially selects examples for which it's current prediction is least confident.",
"Other approaches include representativeness-based sampling where the model selects a diverse set that represent the input space without adding too much redundancy.In this work, we investigate practical active learning algorithms on lightweight deep neural network architectures for the NER task.",
"Training with active learning proceeds in multiple rounds.",
"Traditional active learning schemes are expensive for deep learning since they require complete retraining of the classifier with newly annotated samples after each round.",
"In our experiments, for example, the model must be retrained 54 times.",
"Because retraining from scratch is not practical, we instead carry out incremental training with each batch of new labels: we mix newly annotated samples with the older ones, and update our neural network weights for a small number of epochs, before querying for labels in a new round.",
"This modification drastically reduces the computational requirements of active learning methods and makes it practical to deploy them.We further reduce the computational complexity by selecting a lightweight architecture for NER.",
"We propose a new CNN-CNN-LSTM architecture for NER consisting of a convolutional character-level encoder, convolutional word-level encoder, and long short term memory (LSTM) tag decoder.",
"This model handles out-of-vocabulary words gracefully and, owing to the greater reliance on convolutions (vs recurrent layers), trains much faster than other deep models while performing competitively.We introduce a simple uncertainty-based heuristic for active learning with sequence tagging.",
"Our model selects those sentences for which the length-normalized log probability of the current prediction is the lowest.",
"Our experiments with the Onto-Notes 5.0 English and Chinese datasets demonstrate results comparable to the Bayesian active learning by disagreement method .",
"Moreover our heuristic is faster to compute since it does not require multiple forward passes.",
"On the OntoNotes-5.0 English dataset, our approach matches 99% of the F1 score achieved by the best deep models trained in a standard, supervised fashion despite using only a 24.9% of the data.",
"On the OntoNotes-5.0 Chinese dataset, we match 99% performance with only 30.1% of the data.",
"Thus, we are able to achieve state of art performance with drastically lower number of samples."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21276594698429108,
0.10256409645080566,
0.2641509473323822,
0.09090908616781235,
0.2666666507720947,
0.1538461446762085,
0.7037037014961243,
0.24561403691768646,
0.17391303181648254,
0.10344827175140381,
0.11320754140615463,
0.2295081913471222,
0.19230768084526062,
0.2222222238779068,
0.25,
0.28125,
0.1818181723356247,
0.1599999964237213,
0.14492753148078918,
0.10526315122842789,
0.18867923319339752,
0.0952380895614624,
0.3055555522441864,
0.33898305892944336,
0.23076923191547394,
0.23188404738903046,
0.21739129722118378,
0.19607841968536377,
0.08888888359069824,
0.13333332538604736,
0.260869562625885,
0.2222222238779068
] | ry018WZAZ | true | [
"We introduce a lightweight architecture for named entity recognition and carry out incremental active learning, which is able to match state-of-the-art performance with just 25% of the original training data."
] |
[
"Network quantization is a model compression and acceleration technique that has become essential to neural network deployment.",
"Most quantization methods per- form fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network.",
"We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning.",
"Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings.",
"Com- pared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% Louizos et al. (2019) and a full precision reference accuracy of 69.76%.",
"We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy.",
"Our codebase and trained models are available on GitHub.",
"Neural network quantization is a technique to reduce the size of deep networks and to bypass computationally and energetically expensive floating-point arithmetic operations in favor of efficient integer arithmetic on quantized versions of model weights and activations.",
"Network quantization has been the focus of intensive research in recent years (Rastegari et al., 2016; Zhou et al., 2016; Jacob et al., 2018; Krishnamoorthi, 2018; Jung et al., 2018; Louizos et al., 2019; Nagel et al., 2019; Gong et al., 2019) , with most works belonging to one of two categories.",
"The first line of work quantizes parts of the network while leaving a portion of its operations, e.g. computations in the first and last network layers in floating point.",
"While such networks can be highly efficient, using bitwidths down to 5 or 4 bits with minimal loss in network accuracy (Zhang et al., 2018; Jung et al., 2018) , they may be difficult to deploy in certain practical settings, due to the complexity of extra floating point hardware needed to execute the non-quantized portions of the network.",
"Another line of work aims for ease of real world deployment by quantizing the entire network, including all weights and activations in all convolutional and fully connected layers; we term this scheme strict quantization.",
"Maintaining accuracy under strict quantization is considerably more challenging.",
"While nearly lossless 8-bit strictly quantized networks have been proposed (Jacob et al., 2018) , to date state-of-the-art 4 bit networks incur large losses in accuracy compared to full precision reference models.",
"For example, the strict 4-bit ResNet-18 model in Louizos et al. (2019) has 61.52% accuracy, compared to 69.76% for the full precision model, while the strict 4-bit MobileNet-v2 model in Krishnamoorthi (2018) has 62.00% accuracy, compared to 71.88% accuracy in full precision.",
"To understand the difficulty of training accurate low-bitwidth strictly quantized networks, consider a common training procedure which begins with a pre-trained network, quantizes the model, then applies fine-tuning using straight-through estimators (STE) for gradient updates until the model achieves sufficient quantized accuracy.",
"This process faces two problems.",
"First, as the pre-trained model was not initially trained with the task of being subsequently quantized in mind, it may not be \"quantization-friendly\".",
"That is, the fine-tuning process may need to make substantial changes to the initial model in order to transform it to an accurate quantized model.",
"Second, fine-tuning a model, especially at low bitwidths, is difficult due to the lack of accurate gradient information provided Figure 1 : Architecture of the proposed GQ-Net.",
"Input x 0 follows the top and bottom paths to produce the full precision and quantized outputs x L andx L , resp.",
"These are combined through loss functions L f and L q to form the overall loss L, which is optimized by backpropagation.",
"For more details please refer to Section 3.",
"by STE.",
"In particular, fine-tuning using STE is done by updating a model represented internally with floating point values using gradients computed at the nearest quantizations of the floating point values.",
"Thus for example, if we apply 4 bit quantization to floating point model parameters in the range [0, 1], a random parameter will incur an average round-off error of 1/32, which will be incorporated into the error in the STE gradient for this parameter, leading to possibly ineffective fine-tuning.",
"To address these problems, we propose GQ-Net, a guided quantization training algorithm.",
"The main goal of GQ-Net is to produce an accurate and quantization-friendly full precision model, i.e. a model whose quantized version, obtained by simply rounding each full precision value to its nearest quantized point, has nearly the same accuracy as itself.",
"To do this, we design a loss function for the model which includes two components, one to minimize error with respect to the training labels, and another component to minimize the distributional difference between the model's outputs and the outputs of the model's quantized version.",
"This loss function has the effect of guiding the optimization process towards a model which is both accurate, by virtue of minimizing the first loss component, and which is also similar enough to its quantized version due to minimization of the second component to ensure that the quantized model is also accurate.",
"In addition, because the first component of the loss function deals only with floating point values, it provides accurate gradient information during optimization, in contrast to STE-based optimization which uses biased gradients at rounded points, which further improves the accuracy of the quantized model.",
"Since GQ-Net directly produces a quantized model which does not require further fine-tuning, the number of epochs required to train GQ-Net is substantially less than the total number of epochs needed to train and fine-tune a model using the traditional quantization approach, leading to significantly reduced wall-clock training time.",
"We note that GQ-Net's technique is independent of and can be used in conjunction with other techniques for improving quantization accuracy, as we demonstrate in Section 4.3.",
"Finally, we believe that the guided training technique we propose can also be applied to other neural network structural optimization problems such as network pruning.",
"We implemented GQ-Net in PyTorch and our codebase and trained models are publicly available 1 .",
"We validated GQ-Net on the ImageNet classification task with the widely used ResNet-18 and compact MobileNet-v1/v2 models, and also performed a thorough set of ablation experiments to study different aspects of our technique.",
"In terms of quantization accuracy loss compared to reference floating point models, GQ-Net strictly quantized using 4-bit weights and activations surpasses existing state-of-the-art strict methods by up to 2.7×, and also improves upon these methods even when they use higher bitwidths.",
"In particular, 4-bit GQ-Net applied to ResNet-18 achieves 66.68% top-1 accuracy, compared to 61.52% accuracy in Louizos et al. (2019) and a reference floating point accuracy of 69.76%, while on MobileNet-v2 GQ-Net achieves 66.15% top-1 accuracy compared to 62.00% accuracy in Krishnamoorthi (2018) and a reference floating point accuracy of 71.88%.",
"Additionally, GQ-Net achieves these results using layer-wise quantization, as opposed to channel-wise quantization in Krishnamoorthi (2018) , which further enhances the efficiency and practicality of the technique.",
"In this paper we presented GQ-Net, a novel method for training accurate quantized neural networks.",
"GQ-Net uses a loss function balancing full precision accuracy as well as similarity between the full precision and quantized models to guide the optimization process.",
"By properly tuning the weights of these two factors, we obtained fully quantized networks whose accuracy significantly exceeds the state of the art.",
"We are currently studying additional ways to adjust GQ-Net components to further improve accuracy.",
"We are also interested in combining GQ-Net with complementary quantization techniques, and in applying similar methodologies to other neural network optimization problems."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.1621621549129486,
0.1860465109348297,
0.27272728085517883,
0.09090908616781235,
0.307692289352417,
0.21739129722118378,
0.06896550953388214,
0.23529411852359772,
0.0363636314868927,
0.13636362552642822,
0.1492537260055542,
0.11764705181121826,
0.06896550953388214,
0.19999998807907104,
0.18867923319339752,
0.24561403691768646,
0,
0.1463414579629898,
0.19999998807907104,
0.13333332538604736,
0.25641024112701416,
0.14999999105930328,
0,
0.1818181723356247,
0.09677419066429138,
0.0624999962747097,
0.3103448152542114,
0.3333333432674408,
0.31578946113586426,
0.23728813230991364,
0.24137930572032928,
0.08510638028383255,
0.04651162400841713,
0.11764705181121826,
0.1599999964237213,
0.16949151456356049,
0.10169491171836853,
0.1304347813129425,
0.22857142984867096,
0.4878048598766327,
0.25,
0.12121211737394333,
0.09756097197532654
] | Hkx3ElHYwS | true | [
"We train accurate fully quantized networks using a loss function maximizing full precision model accuracy and minimizing the difference between the full precision and quantized networks."
] |
[
"While much of the work in the design of convolutional networks over the last five years has revolved around the empirical investigation of the importance of depth, filter sizes, and number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a new promising dimension for significant improvements in performance.",
"To combat the complexity of design choices in multi-branch architectures, prior work has adopted simple strategies, such as a fixed branching factor, the same input being fed to all parallel branches, and an additive combination of the outputs produced by all branches at aggregation points. \n\n",
"In this work we remove these predefined choices and propose an algorithm to learn the connections between branches in the network.",
"Instead of being chosen a priori by the human designer, the multi-branch connectivity is learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task.",
"We demonstrate our approach on the problem of multi-class image classification using four different datasets where it yields consistently higher accuracy compared to the state-of-the-art ``ResNeXt'' multi-branch network given the same learning capacity.",
"Deep neural networks have emerged as one of the most prominent models for problems that require the learning of complex functions and that involve large amounts of training data.",
"While deep learning has recently enabled dramatic performance improvements in many application domains, the design of deep architectures is still a challenging and time-consuming endeavor.",
"The difficulty lies in the many architecture choices that impact-often significantly-the performance of the system.",
"In the specific domain of image categorization, which is the focus of this paper, significant research effort has been invested in the empirical study of how depth, filter sizes, number of feature maps, and choice of nonlinearities affect performance BID8 BID17 BID24 BID19 Zeiler & Fergus, 2014; .",
"Recently, several authors have proposed to simplify the architecture design by defining convolutional neural networks (CNNs) in terms of combinations of basic building blocks.",
"This strategy was arguably first popularized by the VGG networks BID25 which were built by stacking a series of convolutional layers having identical filter size (3 × 3).",
"The idea of modularized CNN design was made even more explicit in residual networks (ResNets) BID13 , which are constructed by combining residual blocks of fixed topology.",
"While in ResNets residual blocks are stacked one on top of each other to form very deep networks, the recently introduced ResNeXt models BID31 have shown that it is also beneficial to arrange these building blocks in parallel to build multi-branch convolutional networks.",
"The modular component of ResNeXt then consists of C parallel branches, corresponding to residual blocks with identical topology but distinct parameters.",
"Network built by stacking these multi-branch components have been shown to lead to better results than single-thread ResNets of the same capacity.While the principle of modularized design has greatly simplified the challenge of building effective architectures for image analysis, the choice of how to combine and aggregate the computations of these building blocks still rests on the shoulders of the human designer.",
"In order to avoid a combinatorial explosion of options, prior work has relied on simple, uniform rules of aggregation (1) DISPLAYFORM0 Submitted to 31st Conference on Neural Information Processing Systems (NIPS 2017).",
"Do not distribute.",
"In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks.",
"The problem is formulated as a single joint optimization over the weights and the branch connections of the model.",
"We tested our approach on challenging image categorization benchmarks where it led to significant accuracy improvements over the state-of-the-art ResNeXt model.",
"An added benefit of our approach is that it can automatically identify superfluous blocks, which can be pruned without impact on accuracy for more efficient testing and for reducing the number of parameters to store.While our experiments were focused on a particular multi-branch architecture (ResNeXt) and a specific form of building block (residual block), we expect the benefits of our approach to extend to other modules and network structures.",
"For example, it could be applied to learn the connectivity of skip-connections in DenseNets BID14 , which are currently based on predefined connectivity rules.",
"In this paper, our masks perform non-parametric additive aggregation of the branch outputs.",
"It would be interesting to experiment with learnable (parametric) aggregations of the outputs from the individual branches.",
"Our approach is limited to learning connectivity within a given, fixed architecture.",
"Future work will explore the use of learnable masks for architecture discovery.",
"Normalize the real-valued mask to sum up to 1:m DISPLAYFORM0 Set active binary mask based on drawn samples: DISPLAYFORM1 j of the mask, given branch activations y DISPLAYFORM2 and y DISPLAYFORM3 The CIFAR-10 dataset consists of color images of size 32x32.",
"The training set contains 50,000 images, the testing set 10,000 images.",
"Each image in CIFAR-10 is categorized into one of 10 possible classes.",
"In Table 3 , we report the performance of different models trained on CIFAR-10.",
"From these results we can observe that our models using learned connectivity achieve consistently better performance over the equivalent models trained with the fixed connectivity BID31 .",
"Table 3 : CIFAR-10 accuracies (single crop) achieved by different multi-branch architectures trained using the predefined connectivity of ResNeXt (Fixed-Full) versus the connectivity learned by our algorithm (Learned).",
"Each model was trained 4 times, using different random initializations.",
"For each model we report the best test performance as well as the mean test performance computed from the 4 runs."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11235954612493515,
0.15789473056793213,
0.29629629850387573,
0.19672130048274994,
0.4000000059604645,
0.16949151456356049,
0.13793103396892548,
0.1666666567325592,
0.1599999964237213,
0.14035087823867798,
0.09836065024137497,
0.1355932205915451,
0.3287671208381653,
0.1111111044883728,
0.1463414579629898,
0.1269841194152832,
0,
0.6122449040412903,
0.19607841968536377,
0.4000000059604645,
0.24719101190567017,
0.24561403691768646,
0.1702127605676651,
0.11999999731779099,
0.21739129722118378,
0.1304347813129425,
0.14492753148078918,
0.09090908616781235,
0.1304347813129425,
0.25,
0.28070175647735596,
0.16949151456356049,
0,
0.07999999821186066
] | Sy3fJXbA- | true | [
"In this paper we introduced an algorithm to learn the connectivity of deep multi-branch networks. The approach is evaluated on image categorization where it consistently yields accuracy gains over state-of-the-art models that use fixed connectivity."
] |
[
"Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret.",
"Especially, little is known about how they represent language in their intermediate layers.",
"In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns.",
"In order to quantitatively analyze such intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text.",
"We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.",
"Understanding and interpreting how deep neural networks process natural language is a crucial and challenging problem.",
"While deep neural networks have achieved state-of-the-art performances in neural machine translation (NMT) BID18 Cho et al., 2014; Kalchbrenner et al., 2016; BID21 , sentiment classification tasks BID23 Conneau et al., 2017) and many more, the sequence of non-linear transformations makes it difficult for users to make sense of any part of the whole model.",
"Because of their lack of interpretability, deep models are often regarded as hard to debug and unreliable for deployment, not to mention that they also prevent the user from learning about how to make better decisions based on the model's outputs.An important research direction toward interpretable deep networks is to understand what their hidden representations learn and how they encode informative factors when solving the target task.",
"Some studies including Bau et al. (2017); Fong & Vedaldi (2018) ; BID8 have researched on what information is captured by individual or multiple units in visual representations learned for image recognition tasks.",
"These studies showed that some of the individual units are selectively responsive to specific visual concepts, as opposed to getting activated in an uninterpretable manner.",
"By analyzing individual units of deep networks, not only were they able to obtain more fine-grained insights about the representations than analyzing representations as a whole, but they were also able to find meaningful connections to various problems such as generalization of network BID5 , generating explanations for the decision of the model BID25 BID9 BID26 and controlling the output of generative model (Bau et al., 2019) .Since",
"these studies of unit-level representations have mainly been conducted on models learned for computer vision-oriented tasks, little is known about the representation of models learned from natural language processing (NLP) tasks. Several",
"studies that have previously analyzed individual units of natural language representations assumed that they align a predefined set of specific concepts, such as sentiment present in the text BID12 ), text lengths, quotes and brackets (Karpathy et al., 2015) . They discovered",
"the emergence of certain units that selectively activate to those specific concepts. Building upon these",
"lines of research, we consider the following question: What natural language concepts are captured by each unit in the representations learned from NLP tasks? FIG11 : We discover",
"the most activated sentences and aligned concepts to the units in hidden representations of deep convolutional networks. Aligned concepts appear",
"frequently in most activated sentences, implying that those units respond selectively to specific natural language concepts.To answer this question, we newly propose a simple but highly effective concept alignment method that can discover which natural language concepts are aligned to each unit in the representation. Here we use the term unit",
"to refer to each channel in convolutional representation, and natural language concepts to refer to the grammatical units of natural language that preserve meanings; i.e. morphemes, words, and phrases. Our approach first identifies",
"the most activated sentences per unit and breaks those sentences into these natural language concepts. It then aligns specific concepts",
"to each unit by measuring activation value of replicated text that indicates how much each concept contributes to the unit activation. This method also allows us to systematically",
"analyze the concepts carried by units in diverse settings, including depth of layers, the form of supervision, and dataspecific or task-specific dependencies.The contributions of this work can be summarized as follows:• We show that the units of deep CNNs learned in NLP tasks could act as a natural language concept detector. Without any additional labeled data or re-training",
"process, we can discover, for each unit of the CNN, natural language concepts including morphemes, words and phrases that are present in the training data.• We systematically analyze what information is captured",
"by units in representation across multiple settings by varying network architectures, tasks, and datasets. We use VD-CNN (Conneau et al., 2017) for sentiment and topic",
"classification tasks on Yelp Reviews, AG News BID23 , and DBpedia ontology dataset BID4 and ByteNet (Kalchbrenner et al., 2016) for translation tasks on Europarl BID3 and News Commentary BID20 datasets.• We also analyze how aligned natural language concepts evolve",
"as they get represented in deeper layers. As part of our analysis, we show that our interpretation of learned",
"representations could be utilized at designing network architectures with fewer parameters but with comparable performance to baseline models.",
"We proposed a simple but highly effective concept alignment method for character-level CNNs to confirm that each unit of the hidden layers serves as detectors of natural language concepts.",
"Using this method, we analyzed the characteristics of units with multiple datasets on classification and translation tasks.",
"Consequently, we shed light on how deep representations capture the natural language, and how they vary with various conditions.An interesting future direction is to extend the concept coverage from natural language to more abstract forms such as sentence structure, nuance, and tone.",
"Another direction is to quantify the properties of individual units in other models widely used in NLP tasks.",
"In particular, combining our definition of concepts with the attention mechanism (e.g. Bahdanau et al. FORMULA1 ) could be a promising direction, because it can reveal how the representations are attended by the model to capture concepts, helping us better understand the decision-making process of popular deep models."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2380952388048172,
0.12903225421905518,
0.38461539149284363,
0.10256409645080566,
0.1904761791229248,
0.12121211737394333,
0.0923076868057251,
0.10666666179895401,
0.23529411852359772,
0.380952388048172,
0.11267605423927307,
0.21276594698429108,
0.24561403691768646,
0.3030303120613098,
0.4000000059604645,
0.277777761220932,
0.2950819730758667,
0.31111109256744385,
0.1666666567325592,
0.09756097197532654,
0.3235294222831726,
0.2800000011920929,
0.1428571343421936,
0.18518517911434174,
0.22857142984867096,
0.11428570747375488,
0.260869562625885,
0.11428570747375488,
0.1428571343421936,
0.34285715222358704,
0.1269841194152832
] | S1EERs09YQ | true | [
"We show that individual units in CNN representations learned in NLP tasks are selectively responsive to natural language concepts."
] |
[
"Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlation over long time horizons.",
"Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals.",
"We propose a novel HRL framework TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge.",
"We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach of regularizing the latent space by adding information-theoretic constraints.",
"Specifically, we maximize the mutual information between the latent variables and the state changes.\n",
"A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of the long action sequences.",
"The learned abstraction allows us to learn new tasks on higher level more efficiently.",
"We convey a significant speedup in convergence over benchmark learning problems.",
"These results demonstrate that learning temporal abstractions is an effective technique in increasing the convergence rate and sample efficiency of RL algorithms.",
"Reinforcement learning (RL) has been successfully applied to many different tasks (Mnih et al., 2015; Zhu et al., 2017) .",
"However, applying it to real-world tasks remains a challenging problem, mainly due to the large search space and sparse reward signals.",
"In order to solve this, many research efforts have been focused on the hierarchical reinforcement learning (HRL), which decomposes an RL problem into sub-goals.",
"By solving the sub-goals, low-level actions are composed into high-level temporal abstractions.",
"In this way, the size of the searching space is decreased exponentially.",
"However, the HRL often requires explicitly specifying task structures or sub-goals (Barto & Mahadevan, 2003; Arulkumaran et al., 2017) .",
"How to learn those task structures or temporal abstractions automatically is still an active studying area.",
"Many different strategies are proposed for automatically discovering the task hierarchy or learning the temporal abstraction.",
"Some early studies try to find sub-goals or critical states based on statistic methods (Hengst, 2002; Jonsson, 2006; Kheradmandian & Rahmati, 2009 ).",
"More recent work seeks to learn the temporal abstraction with deep learning (Florensa et al., 2017; Tessler et al., 2017; Haarnoja et al., 2018a) .",
"However, many of these methods still require a predefined hierarchical policy structure (e.g. the number of sub-policies), or need some degree of task-specific knowledge (e.g. hand-crafted reward function).",
"We present a general HRL framework TAIC (Temporal Abstraction with Information-theoretic Constraints), which allows an agent to learn the temporal abstraction from past experiences or expert demonstrations without task-specific knowledge.",
"Built upon the ideas of options framework (Sutton et al., 1999) and motor skills (Lin, 1993) , we formulate the temporal abstraction problem as learning a latent representation of action sequences.",
"In order to obtain good latent representations, we propose a novel approach to regularize the latent space by using information-theoretic constraints.",
"The learned abstract representations of action sequences (we called options) allow us to do RL at a higher level, and easily transfer the knowledge between different tasks.",
"Our contributions are: 1) We formulate the temporal abstraction problem as learning a latent representation of action sequences.",
"Motivated by works using Recurrent Variational AutoEncoders (RVAE) to model sequential data in neural language processing (NLP) and other areas (Bowman et al., 2015; Ha & Eck, 2017) , we employ RVAE to perform temporal abstraction in RL.",
"2) We propose a regularization approach on the option space.",
"It constrains the option to encode more information about its consequence (how the option changes the states).",
"We present both theoretical derivations and practical solutions.",
"3) We show in the experiments that our learned temporal abstraction conveys meaningful information and benefit the RL training.",
"In addition, the proposed framework provides an efficient tool for transferring knowledge between tasks.",
"This paper presented a general HRL framework TAIC for learning temporal abstraction from action sequences.",
"We formulate the temporal abstraction problem as learning latent representations (called options) over action sequences.",
"In order to learn a better representation, we derive theoretically on how to regularize the option space and give an applicable solution of adding constraints to option space.",
"In the experiments, we try to reveal the underlying structure of the option space by visualizing the correlation between options and state changes.",
"We showed qualitatively and quantitatively that our options encode meaningful information and benefit the RL training.",
"Furthermore, the TAIC framework provides an efficient tool to transfer the knowledge learned from one task to another.",
"Our framework can be applied together with all kinds of RL optimization algorithms, and can be applied to both discrete and continuous problems.",
"This work brings many new directions for future studies.",
"As we currently learn the RL task and the option separately, the option could not be improved with the improvement of the policy.",
"In theory, it is entirely feasible to jointly optimize the two parts, or at least train them alternately.",
"As mentioned above, the current sub-policy acts like an open-loop controller.",
"So learning a close-loop sub-policy beyond the RNN decoder will be one of the focus areas of our future studies.",
"We would also like to apply the TAIC framework to discrete problems and with other RL algorithms such as DQN and SAC.",
"This could bring more insights to further improve the framework."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.052631575614213943,
0.0952380895614624,
0.4285714328289032,
0.5777777433395386,
0.1764705777168274,
0.2631579041481018,
0.05714285373687744,
0.25,
0.23255813121795654,
0.051282044500112534,
0.09756097197532654,
0.17777776718139648,
0.12121211737394333,
0.1249999925494194,
0.09756097197532654,
0.05405404791235924,
0.2222222238779068,
0,
0.19512194395065308,
0.12765957415103912,
0.2745097875595093,
0.5199999809265137,
0.29999998211860657,
0.1666666567325592,
0.6666666865348816,
0.14035087823867798,
0.25806450843811035,
0.05714285373687744,
0.06896550953388214,
0.25641024112701416,
0.05714285373687744,
0.3333333432674408,
0.555555522441864,
0.17777776718139648,
0.1463414579629898,
0.1111111044883728,
0.05405404791235924,
0.04999999329447746,
0,
0.1538461446762085,
0.051282044500112534,
0.0624999962747097,
0.20512819290161133,
0.1463414579629898,
0.06451612710952759
] | HkeUDCNFPS | true | [
"We propose a novel HRL framework, in which we formulate the temporal abstraction problem as learning a latent representation of action sequence."
] |
[
"Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult.",
"To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM).",
"We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past.",
"We show that MIST RNNs",
"1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs;",
"2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and",
"3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.",
"Recurrent neural networks BID33 Werbos, 1988; BID35 ) are a powerful class of neural networks that are naturally suited to modeling sequential data.",
"For example, in recent years alone, RNNs have achieved state-of-the-art performance on tasks as diverse as machine translation , speech recognition BID29 , generative image modeling BID30 , and surgical activity recognition BID8 .These",
"successes, and the vast majority of other RNN successes, rely on a mechanism introduced by long short-term memory BID20 BID14 , which was designed to alleviate the so called vanishing gradient problem (Hochreiter, 1991; BID3 . The problem",
"is that gradient contributions from events at time t − τ to a loss at time t diminish exponentially fast with τ , thus making it extremely difficult to learn from distant events (see FIG0 . LSTM alleviates",
"the problem using nearly-additive connections between adjacent states, which help push the base of the exponential decay toward 1. However LSTM in",
"no way solves the problem, and in many cases still fails to learn long-term dependencies (see, e.g., BID0 ).",
"In this work we analyzed NARX RNNs and introduced a variant which we call MIST RNNs, which",
"1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs;",
"2) improve performance substantially over LSTM on tasks requiring very long-term dependencies; and",
"3) require even fewer parameters and computation than LSTM.",
"One obvious direction for future work is the exploration of other NARX RNN architectures with non-contiguous delays.",
"In addition, many recent techniques that have focused on LSTM are immediately transferable to NARX RNNs, such as variational dropout BID11 , layer normalization BID1 , and zoneout BID23 , and it will be interesting to see if such enhancements can improve MIST RNN performance further."
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14492753148078918,
0,
0.21875,
0.12765957415103912,
0.4000000059604645,
0.37931033968925476,
0.45614033937454224,
0.06451612710952759,
0.1666666567325592,
0.10526315122842789,
0.0833333283662796,
0.09677419066429138,
0.1269841194152832,
0.21052631735801697,
0.4000000059604645,
0.4363636374473572,
0.23529411852359772,
0.06779660284519196,
0.24096384644508362
] | r1pW0WZAW | true | [
"We introduce MIST RNNs, which a) exhibit superior vanishing-gradient properties in comparison to LSTM; b) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies; and c) are much more efficient than previously-proposed NARX RNNs, with even fewer parameters and operations than LSTM."
] |
[
"A well-trained model should classify objects with unanimous score for every category.",
"This requires the high-level semantic features should be alike among samples, despite a wide span in resolution, texture, deformation, etc.",
"Previous works focus on re-designing the loss function or proposing new regularization constraints on the loss.",
"In this paper, we address this problem via a new perspective.",
"For each category, it is assumed that there are two sets in the feature space: one with more reliable information and the other with less reliable source.",
"We argue that the reliable set could guide the feature learning of the less reliable set during training - in spirit of student mimicking teacher’s behavior and thus pushing towards a more compact class centroid in the high-dimensional space.",
"Such a scheme also benefits the reliable set since samples become more closer within the same category - implying that it is easilier for the classifier to identify.",
"We refer to this mutual learning process as feature intertwiner and embed the spirit into object detection.",
"It is well-known that objects of low resolution are more difficult to detect due to the loss of detailed information during network forward pass.",
"We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set.",
"Specifically, an intertwiner is achieved by minimizing the distribution divergence between two sets.",
"We design a historical buffer to represent all previous samples in the reliable set and utilize them to guide the feature learning of the less reliable set.",
"The design of obtaining an effective feature representation for the reliable set is further investigated, where we introduce the optimal transport (OT) algorithm into the framework.",
"Samples in the less reliable set are better aligned with the reliable set with aid of OT metric.",
"Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over previous state-of-the-arts on the COCO object detection benchmark.",
"Classifying complex data in the high-dimensional feature space is the core of most machine learning problems, especially with the emergence of deep learning for better feature embedding (Krizhevsky et al., 2012; BID3 BID10 .",
"Previous methods address the feature representation problem by the conventional cross-entropy loss, l 1 / l 2 loss, or a regularization constraint on the loss term to ensure small intra-class variation and large inter-class distance (Janocha & Czarneck, 2017; BID16 BID29 BID15 .",
"The goal of these works is to learn more compact representation for each class in the feature space.",
"In this paper, we also aim for such a goal and propose a new perspective to address the problem.Our observation is that samples can be grouped into two sets in the feature space.",
"One set is more reliable, while the other is less reliable.",
"For example, visual samples may be less reliable due to low resolution, occlusion, adverse lighting, noise, blur, etc.",
"The learned features for samples from the reliable set are easier to classify than those from the less reliable one.",
"Our hypothesis is that the reliable set can guide the feature learning of the less reliable set, in the spirit of a teacher supervising the student.",
"We refer to this mutual learning process as a feature intertwiner.In this paper, a plug-and-play module, namely, feature intertwiner, is applied for object detection, which is the task of classifying and localizing objects in the wild.",
"An object of lower resolution will inevitably lose detailed information during the forward pass in the network.",
"Therefore, it is well-known that the detection accuracy drops significantly as resolutions of objects decrease.",
"We can treat samples with high resolution (often corresponds to large objects or region proposals) as the reliable set and samples with low resolution (small instances) as the less reliable set 1 .",
"Equipped with these two 'prototypical' sets, we can apply the feature intertwiner where the reliable set is leveraged to help the feature learning of the less reliable set.",
"Without intertwiner in",
"(a), samples are more scattered and separated from each other.",
"Note there are several samples that are far from its own class and close to the samples in other categories (e.g., class person in blue), indicating a potential mistake in classification.",
"With the aid of feature intertwiner in",
"(b), there is barely outlier sample outside each cluster.",
"the features in the lower resolution set approach closer to the features in the higher resolution set -achieving the goal of compact centroids in the feature space.",
"Empirically, these two settings correspond to the baseline and intertwiner experiments (marked in gray) in TAB3 .",
"The overall mAP metric increases from 32.8% to 35.2%, with an evident improvement of 2.6% for small instances and a satisfying increase of 0.8% for large counterparts.",
"This suggests the proposed feature intertwiner could benefit both sets.Two important modifications are incorporated based on the preliminary intertwiner framework.",
"The first is the use of class-dependent historical representative stored in a buffer.",
"Since there might be no large sample for the same category in one mini-batch during training, the record of all previous features of a given category for large instances is recorded by a representative, of which value gets updated dynamically as training evolves.",
"The second is an inclusion of the optimal transport (OT) divergence as a deluxe regularization in the feature intertwiner.",
"OT metric maps the comparison of two distributions on high-dimensional feature space onto a lower dimension space so that it is more sensible to measure the similarity between two distributions.",
"For the feature intertwiner, OT is capable of enforcing the less reliable set to be better aligned with the reliable set.We name the detection system equipped with the feature intertwiner as InterNet.",
"Full code suite is available at https://github.com/hli2020/feature intertwiner.",
"For brevity, we put the descriptions of dividing two sets in the detection task, related work (partial), background knowledge on OT theory and additional experiments in the appendix.",
"In this paper, we propose a feature intertwiner module to leverage the features from a more reliable set to help guide the feature learning of another less reliable set.",
"This is a better solution for generating a more compact centroid representation in the high-dimensional space.",
"It is assumed that the high-level semantic features within the same category should resemble as much as possible among samples with different visual variations.",
"The mutual learning process helps two sets to have closer distance within the cluster in each class.",
"The intertwiner is applied on the object detection task, where a historical buffer is proposed to address the sample missing problem during one mini-batch and the optimal transport (OT) theory is introduced to enforce the similarity among the two sets.",
"Since the features in the reliable set serve as teacher in the feature learning, careful preparation of such features is required so that they would match the information in the small-object set.",
"This is why we design different options for the large set and finally choose OT as a solution.",
"With aid of the feature intertwiner, we improve the detection performance by a large margin compared to previous state-of-the-arts, especially for small instances.Feature intertwiner is positioned as a general alternative to feature learning.",
"As long as there exists proper division of one reliable set and the other less reliable set, one can apply the idea of utilizing the reliable set guide the feature learning of another, based on the hypothesis that these two sets share similar distribution in some feature space.",
"One direction in the future work would be applying feature intertwiner into other domains, e.g., data classification, if proper set division are available."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0624999962747097,
0.09999999403953552,
0.060606054961681366,
0,
0.22727271914482117,
0.26923075318336487,
0.17391303181648254,
0.2702702581882477,
0.1428571343421936,
0.29411762952804565,
0.12121211737394333,
0.380952388048172,
0.22727271914482117,
0.29411762952804565,
0.04999999329447746,
0.16326530277729034,
0.10344827175140381,
0.21052631735801697,
0.11538460850715637,
0.2666666507720947,
0.15789473056793213,
0.4324324131011963,
0.3499999940395355,
0.23076923191547394,
0.1111111044883728,
0.11428570747375488,
0.2222222238779068,
0.4761904776096344,
0.08695651590824127,
0.06666666269302368,
0.1249999925494194,
0.29629629850387573,
0,
0.3243243098258972,
0.17142856121063232,
0.1249999925494194,
0.1538461446762085,
0.12121211737394333,
0.1428571343421936,
0.21052631735801697,
0.17391303181648254,
0.35555556416511536,
0.06896550953388214,
0.08888888359069824,
0.6976743936538696,
0.05714285373687744,
0.0952380895614624,
0.1621621549129486,
0.15094339847564697,
0.27272728085517883,
0.10526315122842789,
0.23999999463558197,
0.28070175647735596,
0.17777776718139648
] | SyxZJn05YX | true | [
"(Camera-ready version) A feature intertwiner module to leverage features from one accurate set to help the learning of another less reliable set."
] |
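The record above describes aligning features of the less reliable (low-resolution) set with buffered class representatives of the reliable (high-resolution) set through an optimal transport (OT) divergence. A minimal sketch of such an OT regularizer is given below, assuming an entropic-regularized Sinkhorn formulation with a squared-Euclidean cost and uniform marginals; the tensors `feat_small` and `buffer_feats`, the weight `eps`, and the iteration count are illustrative assumptions, not the paper's InterNet implementation.

```python
import torch

def sinkhorn_ot_cost(feat_small, buffer_feats, eps=0.1, n_iters=50):
    """Entropic-regularized OT cost between two feature sets.

    feat_small:   (m, d) features from the less reliable (low-resolution) set
    buffer_feats: (n, d) buffered class representatives from the reliable set
    The squared-Euclidean cost and uniform marginals are assumptions.
    """
    cost = torch.cdist(feat_small, buffer_feats, p=2) ** 2   # pairwise cost matrix
    m, n = cost.shape
    a = torch.full((m,), 1.0 / m, device=cost.device)        # uniform marginal over rows
    b = torch.full((n,), 1.0 / n, device=cost.device)        # uniform marginal over columns
    K = torch.exp(-cost / eps)                                # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                                  # Sinkhorn fixed-point updates
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)                # approximate transport plan
    return (plan * cost).sum()                                # scalar OT cost used as a regularizer

# usage sketch: loss = detection_loss + lambda_ot * sinkhorn_ot_cost(feat_small, buffer_feats)
```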
[
"Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner.",
"Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario.",
"A key modelling decision is to what extent the architecture should be shared across tasks.",
"On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models.",
"On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur.",
"Ideally, the network should adaptively identify which parts of the network to share in a data driven way.",
"Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference.",
"Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.",
"Continual learning (CL), sometimes called lifelong or incremental learning, refers to an online framework where the knowledge acquired from learning tasks in the past is kept and accumulated so that it can be reused in the present and future.",
"Data belonging to different tasks could potentially be non i.i.d. (Schlimmer & Fisher, 1986; Sutton & Whitehead, 1993; Ring, 1997; Schmidhuber, 2013; Nguyen et al., 2018; Schmidhuber, 2018) .",
"A continual learner must be able to learn a new task, crucially, without forgetting previous tasks (Ring, 1995; Srivastava et al., 2013; Serra et al., 2018; Hu et al., 2019) .",
"In addition, CL frameworks should continually adapt to any domain shift occurring across tasks.",
"The learning updates must be incremental -i.e, the model is updated at each task only using the new data and the old model, without access to all previous data (from earlier tasks) -due to speed, security and privacy constraints.",
"A compromise must be found between adapting to new tasks and enforcing stability to preserve knowledge from previous tasks.",
"Excessive adaptation could lead to inadvertent forgetting of how to perform earlier tasks.",
"Indeed, catastrophic forgetting is one of the main pathologies in continual learning (McCloskey & Cohen, 1989; Ratcliff, 1990; Robins, 1993; French, 1999; Pape et al., 2011; Goodfellow et al., 2014a; Achille et al., 2018; Diaz-Rodriguez et al., 2018; Zeno et al., 2018; Ahn et al., 2019; Parisi et al., 2019; Pfulb & Gepperth, 2019; Rajasegaran et al., 2019) .",
"Many approaches to continual learning employ an architecture which is divided a priori into",
"(i) a slowly evolving, global part; and",
"(ii) a quickly evolving, task-specific, local part.",
"This is one way to enable multi-task transfer whilst mitigating catastrophic forgetting, which has proven to be effective (Rusu et al., 2016b; Fernando et al., 2017; Yoon et al., 2018) , albeit with limitations.",
"Specifying a priori the shared global, and task-specific local parts in the architecture restricts flexibility.",
"As more complex and heterogeneous tasks are considered, one would like a more flexible, data-driven approach to determine the appropriate amount of sharing across tasks.",
"Here, we aim at automating the architecture adaptation process so that each neuron of the network can either be kept intact, i.e. acting as global, or adapted to the new task locally.",
"Our proposed variational inference framework is flexible enough to learn the range within which the adaptation parameters can vary.",
"We introduce for each neuron one binary parameter controlling whether or not to adapt, and two parameters to control the magnitude of adaptation.",
"All parameters are learnt via variational inference.",
"We introduce our framework as an expansion of the variational continual learning algorithm (Nguyen et al., 2018) , whose variational and sequential Bayesian nature makes it convenient for our modelling and architecture adaptation procedure.",
"Our modelling ideas can also be applied to other continual learning frameworks, see the Appendix for a brief discussion.",
"We highlight the following contributions: (1) A modelling framework which flexibly automates the adaptation of local and global parts of the (multi-task) continual architecture.",
"This optimizes the tradeoff between mitigating catastrophic forgetting and improving task transfer.",
"(2) A probabilistic variational inference algorithm which supports incremental updates with adaptively learned parameters.",
"( 3)",
"The ability to combine our modelling and inference approaches without any significant augmentation of the architecture (no new neurons are needed).",
"(4) State-of-the-art results in six experiments on five datasets, which demonstrate the effectiveness of our framework in terms of overall accuracy and reducing catastrophic forgetting.",
"We introduced a continual learning framework which learns how to adapt its architecture from the tasks and data at hand, based on variational inference.",
"Rather than rigidly dividing the architecture into shared and task-specific parts, our approach adapts the contributions of each neuron.",
"We achieve The impact of learning previous tasks on a specific task (the last task) is inspected and used as a proxy for evaluating forward transfer.",
"This is performed by evaluating the relative performance achieved on a unique task after learning a varying number of previous tasks.",
"This means that the value at x-axis = 1 refers to the learning accuracy of the last task after having learnt solely one task (only itself), the value at 2 refers to the learning accuracy of the last task after having learnt two tasks (an additional previous task), etc.",
"Overall, CLAW achieves state-of-the-art results in 4 out of the 5 experiments (at par in the fifth) in terms of avoiding negative transfer.",
"Best viewed in colour.",
"that without having to expand the architecture with new layers or new neurons.",
"Results of six different experiments on five datasets demonstrate the strong empirical performance of the introduced framework, in terms of the average overall continual learning accuracy and forward transfer, and also in terms of effectively alleviating catastrophic forgetting."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21621620655059814,
0.17142856121063232,
0.1764705777168274,
0.0952380895614624,
0.07843136787414551,
0.17142856121063232,
0.24390242993831635,
0.13333332538604736,
0.11320754140615463,
0.04255318641662598,
0.17391303181648254,
0.12121211737394333,
0.07407406717538834,
0.1111111044883728,
0.06451612710952759,
0.06779660284519196,
0.3636363446712494,
0.07692307233810425,
0.07692307233810425,
0.0833333283662796,
0.12121211737394333,
0.0952380895614624,
0.07999999821186066,
0.3243243098258972,
0.04878048226237297,
0.1538461446762085,
0.23999999463558197,
0.21052631735801697,
0.25,
0,
0.3030303120613098,
0.14999999105930328,
0.1428571343421936,
0.6511628031730652,
0.05405404791235924,
0.13636362552642822,
0.1538461446762085,
0.07999999821186066,
0,
0,
0.12903225421905518,
0.12244897335767746
] | Hklso24Kwr | true | [
"A continual learning framework which learns to automatically adapt its architecture based on a proposed variational inference algorithm. "
] |
[
"High-dimensional data often lie in or close to low-dimensional subspaces.",
"Sparse subspace clustering methods with sparsity induced by L0-norm, such as L0-Sparse Subspace Clustering (L0-SSC), are demonstrated to be more effective than its L1 counterpart such as Sparse Subspace Clustering (SSC).",
"However, these L0-norm based subspace clustering methods are restricted to clean data that lie exactly in subspaces.",
"Real data often suffer from noise and they may lie close to subspaces.",
"We propose noisy L0-SSC to handle noisy data so as to improve the robustness.",
"We show that the optimal solution to the optimization problem of noisy L0-SSC achieves subspace detection property (SDP), a key element with which data from different subspaces are separated, under deterministic and randomized models.",
"Our results provide theoretical guarantee on the correctness of noisy L0-SSC in terms of SDP on noisy data.",
"We further propose Noisy-DR-L0-SSC which provably recovers the subspaces on dimensionality reduced data.",
"Noisy-DR-L0-SSC first projects the data onto a lower dimensional space by linear transformation, then performs noisy L0-SSC on the dimensionality reduced data so as to improve the efficiency.",
"The experimental results demonstrate the effectiveness of noisy L0-SSC and Noisy-DR-L0-SSC.",
"Clustering is an important unsupervised learning procedure for analyzing a broad class of scientific data in biology, medicine, psychology and chemistry.",
"On the other hand, high-dimensional data, such as facial images and gene expression data, often lie in low-dimensional subspaces in many cases, and clustering in accordance to the underlying subspace structure is particularly important.",
"For example, the well-known Principal Component Analysis (PCA) works perfectly if the data are distributed around a single subspace.",
"The subspace learning literature develops more general methods that recover multiple subspaces in the original data, and subspace clustering algorithms Vidal (2011) aim to partition the data such that data belonging to the same subspace are identified as one cluster.",
"Among various subspace clustering algorithms, the ones that employ sparsity prior, such as Sparse Subspace Clustering (SSC) Elhamifar & Vidal (2013) and 0 -Sparse Subspace Clustering ( 0 -SSC) Yang et al. (2016) , have been proven to be effective in separating the data in accordance with the subspaces that the data lie in under certain assumptions.",
"Sparse subspace clustering methods construct the sparse similarity matrix by sparse representation of the data.",
"Subspace detection property (SDP) defined in Section 4.1 ensures that the similarity between data from different subspaces vanishes in the sparse similarity matrix, and applying spectral clustering Ng et al. (2001) on such sparse similarity matrix leads to compelling clustering performance.",
"Elhamifar and Vidal Elhamifar & Vidal (2013) prove that when the subspaces are independent or disjoint, SDP can be satisfied by solving the canonical sparse linear representation problem using data as the dictionary, under certain conditions on the rank, or singular value of the data matrix and the principle angle between the subspaces.",
"SSC has been successfully applied to a novel deep neural network architecture, leading to the first deep sparse subspace clustering method Peng et al. (2016) .",
"Under the independence assumption on the subspaces, low rank representation Liu et al. (2010; is also proposed to recover the subspace structures.",
"Relaxing the assumptions on the subspaces to allowing overlapping subspaces, the Greedy Subspace Clustering Park et al. (2014) and the LowRank Sparse Subspace Clustering achieve subspace detection property with high probability.",
"The geometric analysis in Soltanolkotabi & Cands (2012) shows the theoretical results on subspace recovery by SSC.",
"In the following text, we use the term SSC or 1 -SSC exchangeably to indicate the Sparse Subspace Clustering method in Elhamifar & Vidal (2013) .",
"Real data often suffer from noise.",
"Noisy SSC proposed in handles noisy data that lie close to disjoint or overlapping subspaces.",
"While 0 -SSC Yang et al. (2016) has guaranteed clustering correctness via subspace detection property under much milder assumptions than previous subspace clustering methods including SSC, it assumes that the observed data lie in exactly in the subspaces and does not handle noisy data.",
"In this paper, we present noisy 0 -SSC, which enhances 0 -SSC by theoretical guarantee on the correctness of clustering on noisy data.",
"It should be emphasized that while 0 -SSC on clean data Yang et al. (2016) empirically adopts a form of optimization problem robust to noise, it lacks theoretical analysis on the correctness of 0 -SSC on noisy data.",
"In this paper, the correctness of noisy 0 -SSC on noisy data in terms of the subspace detection property is established.",
"Our analysis is under both deterministic model and randomized models, which is also the model employed in the geometric analysis of SSC Soltanolkotabi & Cands (2012) .",
"Our randomized analysis demonstrates potential advantage of noisy 0 -SSC over its 1 counterpart as more general assumption on data distribution can be adopted.",
"Moreover, we present Noisy Dimensionality Reduced 0 -Sparse Subspace Clustering (Noisy-DR-0 -SSC), an efficient version of noisy 0 -SSC which also enjoys robustness to noise.",
"Noisy-DR-0 -SSC first projects the data onto a lower dimensional space by random projection, then performs noisy 0 -SSC on the dimensionality reduced data.",
"Noisy-DR-0 -SSC provably recovers the underlying subspace structure in the original data from the dimensionality reduced data under deterministic model.",
"Experimental results demonstrate the effectiveness of both noisy 0 -SSC and Noisy-DR-0 -SSC.",
"We use bold letters for matrices and vectors, and regular lower letter for scalars throughout this paper.",
"The bold letter with superscript indicates the corresponding column of a matrix, e.g. A i is the i-th column of matrix A, and the bold letter with subscript indicates the corresponding element of a matrix or vector.",
"· F and · p denote the Frobenius norm and the vector p -norm or the matrix p-norm, and diag(·) indicates the diagonal elements of a matrix.",
"H T ⊆ R d indicates the subspace spanned by the columns of T, and A I denotes a submatrix of A whose columns correspond to the nonzero elements of I (or with indices in I without confusion).",
"σ t (·) denotes the t-th largest singular value of a matrix, and σ min (·) indicates the smallest singular value of a matrix.",
"supp(·) is the support of a vector, P S is an operator indicating projection onto the subspace S .",
"We present provable noisy 0 -SSC that recovers subspaces from noisy data through 0 -induced sparsity in a robust manner, with the theoretical guarantee on its correctness in terms of subspace detection property under both deterministic and randomized models.",
"Experimental results shows the superior performance of noisy 0 -SSC.",
"We also propose Noisy-DR-0 -SSC which performs noisy 0 -SSC on dimensionality reduced data and still provably recovers the subspaces in the original data.",
"Experiment results demonstrate the effectiveness of both noisy 0 -SSC and Noisy-DR-0 -SSC.",
"β = 0.",
"Perform the above analysis for all 1 ≤ i ≤ n, we can prove that the subspace detection property holds for all 1 ≤ i ≤ n."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.20000000298023224,
0.17391303181648254,
0.21621620655059814,
0.12121211737394333,
0.3125,
0.18867923319339752,
0.17142856121063232,
0.24242423474788666,
0.17777776718139648,
0.12903225421905518,
0.09756097197532654,
0.2448979616165161,
0.10526315122842789,
0.18867923319339752,
0.17910447716712952,
0.12121211737394333,
0.1428571343421936,
0.03278687968850136,
0.09302324801683426,
0.09999999403953552,
0.1304347813129425,
0.10810810327529907,
0.1395348757505417,
0.07692307233810425,
0.22857142984867096,
0.1355932205915451,
0.09999999403953552,
0.11538460850715637,
0.21052631735801697,
0.0476190410554409,
0.09090908616781235,
0.13636362552642822,
0.09756097197532654,
0.2702702581882477,
0.0624999962747097,
0.05714285373687744,
0,
0,
0.11999999731779099,
0,
0.05714285373687744,
0.178571417927742,
0.06666666269302368,
0.24390242993831635,
0.0624999962747097,
0,
0.051282044500112534
] | H1gjM1SFDr | true | [
"We propose Noisy-DR-L0-SSC (Noisy Dimension Reduction L0-Sparse Subspace Clustering) to efficiently partition noisy data in accordance to their underlying subspace structure."
] |
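The record above relies on the standard sparse subspace clustering pipeline: each point is expressed as a sparse combination of the remaining points, the coefficients are symmetrized into a similarity matrix, and spectral clustering is applied to that matrix. The sketch below only illustrates this generic pipeline, not the paper's noisy L0-SSC algorithm or its guarantees; the greedy OMP solver, the sparsity level `k`, and the cluster count are assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp
from sklearn.cluster import SpectralClustering

def ssc_style_clustering(X, n_clusters, k=5):
    """X: (d, n) data matrix with one sample per column; returns cluster labels."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        # sparse self-representation: column i coded over all other columns
        dictionary = np.delete(X, i, axis=1)
        coef = orthogonal_mp(dictionary, X[:, i], n_nonzero_coefs=k)
        C[np.arange(n) != i, i] = coef
    W = np.abs(C) + np.abs(C).T          # symmetric sparse similarity matrix
    return SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", assign_labels="discretize"
    ).fit_predict(W)
```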
[
"Mode connectivity provides novel geometric insights on analyzing loss landscapes and enables building high-accuracy pathways between well-trained neural networks.",
"In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness. ",
"Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.",
"When network models are tampered with backdoor or error-injection attacks, our results demonstrate that the path connection learned using limited amount of bonafide data can effectively mitigate adversarial effects while maintaining the original accuracy on clean data.",
"Therefore, mode connectivity provides users with the power to repair backdoored or error-injected models. ",
"We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks.",
"Experiments show that there exists a barrier in adversarial robustness loss on the path connecting regular and adversarially-trained models. ",
"A high correlation is observed between the adversarial robustness loss and the largest eigenvalue of the input Hessian matrix, for which theoretical justifications are provided. ",
"Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.",
"Recent studies on mode connectivity show that two independently trained deep neural network (DNN) models with the same architecture and loss function can be connected on their loss landscape using a high-accuracy/low-loss path characterized by a simple curve (Garipov et al., 2018; Gotmare et al., 2018; Draxler et al., 2018) .",
"This insight on the loss landscape geometry provides us with easy access to a large number of similar-performing models on the low-loss path between two given models, and Garipov et al. (2018) use this to devise a new model ensembling method.",
"Another line of recent research reveals interesting geometric properties relating to adversarial robustness of DNNs (Fawzi et al., 2017; 2018; Wang et al., 2018b; Yu et al., 2018 ).",
"An adversarial data or model is defined to be one that is close to a bonafide data or model in some space, but exhibits unwanted or malicious behavior.",
"Motivated by these geometric perspectives, in this study, we propose to employ mode connectivity to study and improve adversarial robustness of DNNs against different types of threats.",
"A DNN can be possibly tampered by an adversary during different phases in its life cycle.",
"For example, during the training phase, the training data can be corrupted with a designated trigger pattern associated with a target label to implant a backdoor for trojan attack on DNNs (Gu et al., 2019; Liu et al., 2018) .",
"During the inference phase when a trained model is deployed for task-solving, prediction-evasive attacks are plausible (Biggio & Roli, 2018; Goodfellow et al., 2015) , even when the model internal details are unknown to an attacker (Chen et al., 2017; Ilyas et al., 2018) .",
"In this research, we will demonstrate that by using mode connectivity in loss landscapes, we can repair backdoored or error-injected DNNs.",
"We also show that mode connectivity analysis reveals the existence of a robustness loss barrier on the path connecting regular and adversarially-trained models.",
"We motivate the novelty and benefit of using mode connectivity for mitigating training-phase adversarial threats through the following practical scenario: as training DNNs is both time-and resource-consuming, it has become a common trend for users to leverage pre-trained models released in the public domain 1 .",
"Users may then perform model fine-tuning or transfer learning with a small set of bonafide data that they have.",
"However, publicly available pre-trained models may carry an unknown but significant risk of tampering by an adversary.",
"It can also be challenging to detect this tampering, as in the case of a backdoor attack 2 , since a backdoored model will behave like a regular model in the absence of the embedded trigger.",
"Therefore, it is practically helpful to provide tools to users who wish to utilize pre-trained models while mitigating such adversarial threats.",
"We show that our proposed method using mode connectivity with limited amount of bonafide data can repair backdoored or error-injected DNNs, while greatly countering their adversarial effects.",
"Our main contributions are summarized as follows:",
"• For backdoor and error-injection attacks, we show that the path trained using limited bonafide data connecting two tampered models can be used to repair and redeem the attacked models, thereby resulting in high-accuracy and low-risk models.",
"The performance of mode connectivity is significantly better than several baselines including fine-tuning, training from scratch, pruning, and random weight perturbations.",
"We also provide technical explanations for the effectiveness of our path connection method based on model weight space exploration and similarity analysis of input gradients for clean and tampered data.",
"• For evasion attacks, we use mode connectivity to study standard and adversarial-robustness loss landscapes.",
"We find that between a regular and an adversarially-trained model, training a path with standard loss reveals no barrier, whereas the robustness loss on the same path reveals a barrier.",
"This insight provides a geometric interpretation of the \"no free lunch\" hypothesis in adversarial robustness (Tsipras et al., 2019; Dohmatob, 2018; Bubeck et al., 2019) .",
"We also provide technical explanations for the high correlation observed between the robustness loss and the largest eigenvalue of the input Hessian matrix on the path.",
"• Our experimental results on different DNN architectures (ResNet and VGG) and datasets (CIFAR-10 and SVHN) corroborate the effectiveness of using mode connectivity in loss landscapes to understand and improve adversarial robustness.",
"We also show that our path connection is resilient to the considered adaptive attacks that are aware of our defense.",
"To the best of our knowledge, this is the first work that proposes using mode connectivity approaches for adversarial robustness.",
"2 BACKGROUND AND RELATED WORK",
"This paper provides novel insights on adversarial robustness of deep neural networks through the lens of mode connectivity in loss landscapes.",
"Leveraging mode connectivity between model optima, we show that path connection trained by a limited number of clean data can successfully repair backdoored or error-injected models and significantly outperforms several baseline methods.",
"Moreover, we use mode connectivity to uncover the existence of robustness loss barrier on the path trained by standard loss against evasion attacks.",
"We also provide technical explanations for the effectiveness of our proposed approach and theoretically justify the empirically observed high correlation between robustness loss and the largest eigenvalue of input Hessian.",
"Our findings are consistent and validated on different network architectures and datasets.",
"The performance of regular path connection of untampered models on SVHN with ResNet is presented in Figure A1 .",
"Inference on training set Inference on test set Legend Figure A1 : Loss and error rate on the path connecting two untampered ResNet models trained on SVHN.",
"The path connection is trained using different settings as indicated by the curve colors.",
"The inference results on test set are evaluated using 5000 samples, which are separate from what are used for path connection."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2631579041481018,
0.43478259444236755,
0.1764705777168274,
0.18518517911434174,
0.29411762952804565,
0.37837836146354675,
0.307692289352417,
0.23255813121795654,
0.2702702581882477,
0.19354838132858276,
0.1428571343421936,
0.13636362552642822,
0.1463414579629898,
0.3181818127632141,
0.17142856121063232,
0.038461532443761826,
0.0363636314868927,
0.307692289352417,
0.2926829159259796,
0.26229506731033325,
0,
0.05714285373687744,
0.0833333283662796,
0.15789473056793213,
0.21739129722118378,
0,
0.26923075318336487,
0.14999999105930328,
0.08695651590824127,
0.3529411852359772,
0.1395348757505417,
0.1395348757505417,
0.1463414579629898,
0.4166666567325592,
0.05405404791235924,
0.2631579041481018,
0,
0.41025641560554504,
0.19607841968536377,
0.25,
0.17777776718139648,
0.06666666269302368,
0.1111111044883728,
0.09756097197532654,
0.060606054961681366,
0.052631575614213943
] | SJgwzCEKwH | true | [
"A novel approach using mode connectivity in loss landscapes to mitigate adversarial effects, repair tampered models and evaluate adversarial robustness"
] |
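The repair procedure in the record above trains a low-loss path between the weights of two tampered models using a small amount of bonafide data. In the mode-connectivity work cited there (Garipov et al., 2018), such a path is commonly parameterized as a quadratic Bezier curve with a single learned control point; the sketch below shows that generic parameterization over flattened weight vectors, without claiming it is the exact curve used in this paper.

```python
import torch

def bezier_weights(w1, w2, theta, t):
    """Quadratic Bezier point between flattened weight vectors w1 and w2,
    with a learned control point theta and curve parameter t in [0, 1]."""
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

# training sketch: sample t ~ U(0, 1) at each step, load bezier_weights(w1, w2, theta, t)
# into the network, and minimize its loss on the limited bonafide data so the whole
# path between the two endpoint models stays low-loss
```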
[
"Generative adversarial networks (GANs) learn to map samples from a noise distribution to a chosen data distribution.",
"Recent work has demonstrated that GANs are consequently sensitive to, and limited by, the shape of the noise distribution.",
"For example, a single generator struggles to map continuous noise (e.g. a uniform distribution) to discontinuous output (e.g. separate Gaussians) or complex output (e.g. intersecting parabolas).",
"We address this problem by learning to generate from multiple models such that the generator's output is actually the combination of several distinct networks.",
"We contribute a novel formulation of multi-generator models where we learn a prior over the generators conditioned on the noise, parameterized by a neural network.",
"Thus, this network not only learns the optimal rate to sample from each generator but also optimally shapes the noise received by each generator.",
"The resulting Noise Prior GAN (NPGAN) achieves expressivity and flexibility that surpasses both single generator models and previous multi-generator models.",
"Learning generative models of high-dimensional data is of perpetual interest, as its wide suite of applications include synthesizing conversations, creating artwork, or designing biological agents (Bollepalli et al., 2017; Tan et al., 2017; Blaschke et al., 2018) .",
"Deep models, especially generative adversarial networks (GANs), have significantly improved the state of the art at modeling these complex distributions, thus encouraging further research (Goodfellow et al., 2014) .",
"Whether implicitly or explicitly, works that use GANs make a crucial modeling decision known as the manifold assumption (Zhu et al., 2016; Schlegl et al., 2017; Reed et al., 2016) .",
"This is the assumption that high-dimensional data lies on a single low-dimensional manifold which smoothly varies and where local Euclidean distances in the low-dimensional space correspond to complex transformations in the high-dimensional space.",
"While generally true in many applications, this assumption does not always hold (Khayatkhoei et al., 2018) .",
"For example, recent work has emphasized situations where the data lies not on one single manifold, but on multiple, disconnected manifolds (Khayatkhoei et al., 2018; Gurumurthy et al., 2017; Hoang et al., 2018) .",
"In this case, GANs must attempt to learn a continuous cover of the multiple manifolds, which inevitably leads to the generation of off-manifold points which lie in between (Kelley, 2017) .",
"The generator tries to minimize the number of these off-manifold points, and thus they are generally just a small fraction of the total generated distribution.",
"As such, they barely affect the typical GAN evaluation measures (like Inception and FID scores for images), which measure the quality of the generated distribution as a whole.",
"Thus, this problem is usually ignored, as other aspects are prioritized.",
"However, in some applications, the presence of these bad outliers is more catastrophic than slight imperfections in modeling the most dense regions of the space.",
"For example, consider the goal of an artificial agent acting indistinguishably from a human: the famous Turing Test.",
"Incorrectly modeling sentence density by using a given sentence structure 60% of the time instead of 40% of the time is relatively harmless.",
"However, generating a single gibberish sentence will give away the identity of the artificial agent.",
"Moreover, there are serious concerns about the implications this has for proofs of GAN convergence (Mescheder et al., 2018) .",
"These works address the problem of disconnected manifolds by Figure 1 : The Noise-Prior GAN (NPGAN) architecture.",
"Unlike previous work, the NP network learns a prior over the generators conditioned on the noise distribution z.",
"This allows it to both control the sampling frequency of the generators and shape the input appropriate to each one, in an end-to-end differentiable framework.",
"simultaneously training multiple generators and using established regularizations to coax them into dividing up the space and learning separate manifolds.",
"Methods for getting multiple generators to generate disconnected manifolds can be divided into two categories:",
"(i) imposing information theoretic losses to encourage output from different generators to be distinguishable (Khayatkhoei et al., 2018; Hoang et al., 2018)",
"(ii) changing the initial noise distribution to be disconnected (Gurumurthy et al., 2017) .",
"Our approach falls into the second category.",
"Previous efforts to change the noise distribution to handle disconnectedness has exclusively taken the form of sampling from a mixture of Gaussians rather than the typical single Gaussian (with sampling fixed and uniform over the mixture).",
"Our approach differs significantly from those previously.",
"We use multiple generators as before, but instead of dividing up the noise space into factorized Gaussians and sending one to each generator, we let an additional neural network determine how best to divide up the noise space and dispatch it to each generator.",
"This network learns a prior over the generators, conditioned on the noise space.",
"Thus, we call our additional third network a noise-prior (NP) network.",
"Previous methods have modeled the data with noise z and generators",
"We instead propose a framework to incorporate a richer p(G i |z) into the generator.",
"This framework is entirely differentiable, allowing us to optimize the NP network along with the generators during training.",
"We note that with this strategy, we significantly increase the expressivity of each generator over the previous disconnected manifold models.",
"By dividing up the space into four slices s i and sending s 1 , s 3 to the first generator and s 2 , s 4 to the second generator, we can generate four disconnected manifolds with just two generators.",
"Previous work would have to devote precisely four generators to this task, with degradation in performance if fewer or more generators are chosen for the hyperparameter.",
"Here, the prior network learns to divide the noise space appropriately for whatever number of generators is chosen, and is thus more expressive as well as more robust than previous models.",
"Moreover, much existing work has exclusively framed the problem as, and tailored solutions for, the disconnected manifold problem.",
"Our approach is more generalized, addressing any misspecification between noise distribution and the target distribution.",
"This means that our approach does not become redundant or unnecessary in the case of single complex manifolds, for example.",
"Our contributions can be summarized as:",
"1. We introduce the first multi-generator ensemble to learn a prior over the noise space, using a novel soft, differentiable loss formulation.",
"2. We present a multi-generator method that can learn to sample generators in proportion to the relative density of multiple manifolds.",
"3. We show how our model not only improves performance on disconnected manifolds, but also on complex-but-connected manifolds, which are more likely to arise in real situations.",
"We introduced a novel formulation of multiple-generator models with a prior over the generators, conditioned on the noise input.",
"This results in improved expressivity and flexibility by shaping each generator's input specifically to best perform that generator's task.",
"In this section, we elaborate on the CIFAR experiment from the main text.",
"We use a more complicated architecture here with spectral normalization, self-attention, and ResNet connections, per the best achieving models to-date.",
"We experimented using two, three, four, and five generators in the NPGAN architecture.",
"Figure A .1 shows images generated by the NPGAN with each number of generators.",
"With just two generators, each one creates a wide diversity of images.",
"On the other hand, when increasing the number of generators, each one more homogeneous.",
"For example, in the two generator model, one of them creates dogs, cars, and frogs, while in the five-generator model each generator has specialized to just birds in the sky or just cars.",
"Qualitatively, the noise prior is obviously learning a sensible split of the data across generators and each generator is outputting quality images.",
"However, when comparing the two-generator, threegenerator, four-generator, and five-generator versions of NPGAN to the baseline one-generator of the same model, we do not observe any improvement in FID score.",
"This is unsurprising for the reasons mentioned in the main text.",
"The FID scores treat all points equally across a generated dataset, and thus will be most strongly influenced by where the most points are.",
"A relatively small number of outliers barely register by this metric.",
"Even current state-of-the-art image generation on CIFAR10 is no where close to perfectly modeling the data.",
"When GANs are able to perfectly model the dataset except for trailing outliers between modes, we expect the NPGAN's improvements to be visible in FID scores on this dataset.",
"Until then, the detection of a few bad outliers needs to be done with other evaluation techniques on this dataset.",
"With this caveat, we note that we could achieve an FID score of 26.4 with our NPGAN, compared to 25.8 with our code and one generator, which demonstrates that the NPGAN can scale to stateof-the-art architecture without suffering in quality.",
"The NPGAN is robust to a connected dataset while simultaneously being able to automatically solve the problems of a disconnected dataset.",
"Furthermore, this motivated the creation of our new outlier manifold distance metric, designed to be more sensitive to the creation of outliers than the FID score.",
"Using this metric, we see NPGAN outperform all other models.",
"Relation to Machine Teaching In (Zhu, 2013) , an analogous question is posed: if a teacher network knows the function its student network is supposed to learn, what are the optimal training points to teach it as efficiently as possible?",
"For students following a Bayesian learning approach, this is thought of as finding the best data points D to make the desired model θ * , or minimizing with respect to D: −log(p(θ * |D)) .",
"In our framework, the teacher network NP does not know the function its students should learn ahead-of-time, because this target is changing continually as the discriminator improves simultaneously.",
"Nevertheless, the NP network is still learning to form the optimal curriculum for each individual student such that the collection of students best models the target function given the current parameters of the discriminator.",
"Relation to knowledge distillation Our NP network also has links to the field of knowledge distillation (Kim & Rush, 2016; Chen et al., 2017; Furlanello et al., 2018; Wang et al., 2018) , where a teacher network is trying to compress or distill the knowledge it has about a particular distribution into one or several (Hinton et al., 2015) smaller models.",
"In the case of multiple smaller models, the teacher can be thought of as a generalist whose job it is to find the right specialist for a specific problem."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2666666507720947,
0.11764705181121826,
0.15789473056793213,
0.10256409645080566,
0.3684210479259491,
0.21621620655059814,
0.11764705181121826,
0,
0.045454539358615875,
0.09302324801683426,
0.1395348757505417,
0,
0.04444443807005882,
0.1904761791229248,
0.1538461446762085,
0.1428571343421936,
0,
0.05405404791235924,
0.1818181723356247,
0.11764705181121826,
0.13333332538604736,
0.1111111044883728,
0.12121211737394333,
0.375,
0.2631579041481018,
0.11428570747375488,
0.06451612710952759,
0.0555555522441864,
0.19999998807907104,
0.08695651590824127,
0.21739129722118378,
0,
0.23076923191547394,
0.4285714328289032,
0.23076923191547394,
0.2222222238779068,
0.2666666507720947,
0.3030303120613098,
0.17142856121063232,
0.12765957415103912,
0.14999999105930328,
0.23255813121795654,
0.0624999962747097,
0.13333332538604736,
0.0555555522441864,
0,
0.4444444477558136,
0.277777761220932,
0.04878048226237297,
0.42424240708351135,
0.11764705181121826,
0.0714285671710968,
0.1666666567325592,
0.06896550953388214,
0.19999998807907104,
0.0714285671710968,
0.06896550953388214,
0.09302324801683426,
0.2222222238779068,
0.0952380895614624,
0.07692307233810425,
0.10526315122842789,
0.07407406717538834,
0.1249999925494194,
0.0952380895614624,
0.2222222238779068,
0.15094339847564697,
0.1764705777168274,
0.10810810327529907,
0,
0.20000000298023224,
0.1666666567325592,
0.1428571343421936,
0.13636362552642822,
0.13114753365516663,
0.1463414579629898
] | HJlISCEKvB | true | [
"A multi-generator GAN framework with an additional network to learn a prior over the input noise."
] |
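The record above describes a noise-prior (NP) network that learns p(G_i | z) and dispatches noise to several generators within an end-to-end differentiable framework. The sketch below shows one plausible way to make such a selection differentiable, using a Gumbel-softmax relaxation over generator logits; the relaxation, the layer sizes, and the soft mixture of generator outputs are assumptions rather than the paper's stated formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisePriorEnsemble(nn.Module):
    """A prior network scores each generator given the noise z, and the sample is a
    prior-weighted combination of the generators' outputs (illustrative only)."""

    def __init__(self, generators, noise_dim, hidden=64):
        super().__init__()
        self.generators = nn.ModuleList(generators)
        self.prior = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(generators)),
        )

    def forward(self, z, tau=0.5):
        logits = self.prior(z)                                     # unnormalized p(G_i | z)
        weights = F.gumbel_softmax(logits, tau=tau, hard=False)    # soft, differentiable choice
        outs = torch.stack([g(z) for g in self.generators], dim=1)  # (B, K, ...)
        w = weights.view(z.size(0), -1, *([1] * (outs.dim() - 2)))
        return (w * outs).sum(dim=1)                               # mixed sample fed to the critic
```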
[
"Recent advances in Generative Adversarial Networks (GANs) – in architectural design, training strategies, and empirical tricks – have led nearly photorealistic samples on large-scale datasets such as ImageNet. ",
"In fact, for one model in particular, BigGAN, metrics such as Inception Score or Frechet Inception Distance nearly match those of the dataset, suggesting that these models are close to match-ing the distribution of the training set. ",
"Given the quality of these models, it is worth understanding to what extent these samples can be used for data augmentation, a task expressed as a long-term goal of the GAN research project. ",
"To that end, we train ResNet-50 classifiers using either purely BigGAN images or mixtures of ImageNet and BigGAN images, and test on the ImageNet validation set.",
"Our preliminary results suggest both a measured view of state-of-the-art GAN quality and highlight limitations of current metrics.",
"Using only BigGAN images, we find that Top-1 and Top-5 error increased by 120% and 384%, respectively, and furthermore, adding more BigGAN data to the ImageNet training set at best only marginally improves classifier performance.",
"Finally, we find that neither Inception Score, nor FID, nor combinations thereof are predictive of classification accuracy. ",
"These results suggest that as GANs are beginning to be deployed in downstream tasks, we should create metrics that better measure downstream task performance. ",
"We propose classification performance as one such metric that, in addition to assessing per-class sample quality, is more suited to such downstream tasks.",
"Recent years have witnessed a marked improvement in sample quality in Deep Generative Models.",
"One model class in particular, Generative Adversarial Networks BID7 , has begun to generate nearly photorealistic images.",
"While applications of adversarial training have found their way into image translation BID16 and style transfer BID5 , a typically discussed goal for such models, and in particular conditional ones, is data augmentation.",
"Such models have enjoyed limited success in these tasks thus far for large-scale datasets such as ImageNet, likely because existing models did not generate sufficiently high-quality samples.",
"Recently, however, BigGANs BID4 have generated photorealistic images of ImageNet data up to 512×512 resolution, and moreover, achieve Inception Scores and Frechet Inception Distances similar to the dataset on which they were trained.",
"Such results suggest, though do not prove, that BigGANs are indeed capturing the data distribution.",
"If this were true, then it seems plausible that these samples can be used in downstream tasks, especially in situations in which limited labelled data are available.In this work, we test the rather simple hypothesis that BigGANs are indeed useful for data augmentation, or more drastically, data replacement of the original data distribution.",
"To that end, we use BigGANs for two simple experiments.",
"First, we train ImageNet classifiers, replacing the original training set with one produced by BigGAN.",
"Second, we augment the original ImageNet training set with samples from BigGAN.",
"Our working hypothesis is that if BigGANs were indeed capturing the data distribution, then we could use those samples, instead of or in addition to the original training set, to improve performance on classification.",
"That it does not -on replacement, Top-5 classification Though a negative result, a more positive byproduct of the work is the introduction of a new metric that can better identify issues with GAN and other generative models.",
"In particular, training a classifier allows us to identify, for conditional generative models, which classes are particularly poor, either due to low quality samples or underrepresentation of dataset diversity.",
"In this work, we investigated to what extent BigGAN, the state-of-the-art GAN on ImageNet, captures the data distribution, and to what extent those samples can be used for data augmentation.",
"Our results demonstrate that despite excellent scores on traditional GAN metrics such as Inception Score and Frechet Inception Distance, current state-of-the-art GAN models do not capture the distribution for large-scale datasets such as ImageNet.",
"Moreover, we found only a modest improvement in classifier performance when the training set was augmented with BigGAN samples.",
"Finally, through classifier metrics outlined in the work, we can identify on which classes BigGAN performed well, and on which ones researchers should focus their future efforts.An open question in this work is how to create metrics predictive of performance on downstream tasks.",
"Even for the classifier metric, results on data replacement did not necessarily correlate with those on data augmentation.",
"Better evaluation metrics will help us understand to what extent GANs, or any other Deep Generative Models, can be used for downstream tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0.12244897335767746,
0.13333332538604736,
0.15789473056793213,
0.0624999962747097,
0.21739129722118378,
0.0624999962747097,
0.052631575614213943,
0,
0,
0,
0.1702127605676651,
0.09756097197532654,
0.2222222238779068,
0.4000000059604645,
0.1666666567325592,
0.1599999964237213,
0.13333332538604736,
0.14814814925193787,
0.12765957415103912,
0.1249999925494194,
0.09302324801683426,
0.25,
0.31111109256744385,
0.11764705181121826,
0.07407406717538834,
0.32258063554763794,
0.052631575614213943
] | rJMw747l_4 | true | [
"BigGANs do not capture the ImageNet data distributions and are only modestly successful for data augmentation."
] |
[
"Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.",
"This wealth of data can help to learn models that can improve the user experience on each device.",
"However, the scale and heterogeneity of federated data presents new challenges in research areas such as federated learning, meta-learning, and multi-task learning.",
"As the machine learning community begins to tackle these challenges, we are at a critical time to ensure that developments made in these areas are grounded with realistic benchmarks.",
"To this end, we propose Leaf, a modular benchmarking framework for learning in federated settings.",
"Leaf includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.",
"With data increasingly being generated on federated networks of remote devices, there is growing interest in empowering on-device applications with models that make use of such data BID25 .",
"Learning on data generated in federated networks, however, introduces several new obstacles:Statistical: Data is generated on each device in a heterogeneous manner, with each device associated with a different (though perhaps related) underlying data generating distribution.",
"Moreover, the number of data points typically varies significantly across devices.",
"We present LEAF, a modular benchmarking framework for learning in federated settings, or ecosystems marked by massively distributed networks of devices.",
"Learning paradigms applicable in such settings include federated learning, metalearning, multi-task learning, and on-device learning.",
"LEAF allows researchers and practitioners in these domains to reason about new proposed solutions under more realistic assumptions than previous benchmarks.",
"We intend to keep LEAF up to date with new datasets, metrics and opensource solutions in order to foster informed and grounded progress in this field TAB1"
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1395348757505417,
0.05128204822540283,
0.4285714328289032,
0.2083333283662796,
0.4864864945411682,
0.1304347813129425,
0.2083333283662796,
0.1599999964237213,
0,
0.4651162624359131,
0.4444444477558136,
0.1395348757505417,
0.2222222238779068
] | BJf7N5Ho2N | true | [
"We present Leaf, a modular benchmarking framework for learning in federated data, with applications to learning paradigms such as federated learning, meta-learning, and multi-task learning."
] |
[
"Understanding object motion is one of the core problems in computer vision.",
"It requires segmenting and tracking objects over time.",
"Significant progress has been made in instance segmentation, but such models cannot track objects, and more crucially, they are unable to reason in both 3D space and time.\n",
"We propose a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation.",
"Our model includes a temporal network that learns to model temporal context and motion, which is essential to produce smooth embeddings over time.",
"Further, our model also estimates monocular depth, with a self-supervised loss, as the relative distance to an object effectively constrains where it can be next, ensuring a time-consistent embedding.",
"Finally, we show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset.",
"Explicitly predicting the motion of actors in a dynamic scene is a critical component of intelligent systems.",
"Humans can seamlessly track moving objects in their environment by using cues such as appearance, relative distance, and temporal consistency.",
"The world is rarely experienced in a static way: motion (or its absence) provides essential information to understand a scene.",
"Similarly, incorporating past context through a temporal model is essential to segment and track objects consistently over time and through occlusions.",
"From a computer vision perspective, understanding object motion involves segmenting instances, estimating depth, and tracking instances over time.",
"Instance segmentation, which requires segmenting individual objects at the pixel level, has gained traction with challenging datasets such as COCO (Lin et al., 2014) , Cityscapes (Cordts et al., 2016) and Mapillary Vistas (Neuhold et al., 2017) .",
"Such datasets, which only contain single-frame annotations, do not allow the training of video models with temporally consistent instance segmentation, nor does it allow self-supervised monocular depth estimation, that necessitates consecutive frames.",
"Yet, navigating in the real-world involves a three-dimensional understanding of the other agents with consistent instance segmentation and depth over time.",
"More recently, a new dataset containing video instance segmentation annotations was released, the KITTI Multi-Object and Tracking Dataset (Voigtlaender et al., 2019) .",
"This dataset contains pixel-level instance segmentation on more than 8,000 video frames which effectively enables the training of video instance segmentation models.",
"In this work, we propose a new spatio-temporal embedding loss that learns to map video-pixels to a high-dimensional space 1 .",
"This space encourages video-pixels of the same instance to be close together and distinct from other instances.",
"We show that this spatio-temporal embedding loss, jointly with a deep temporal convolutional neural network and self-supervised depth loss, produces consistent instance segmentations over time.",
"The embedding accumulates temporal context thanks to the temporal model, as otherwise, the loss would only be based on appearance.",
"The temporal model is a causal 3D convolutional network, which is only conditioned on past frames to predict the current embedding and is capable of real-time operation.",
"Finally, we show that predicting depth improves the quality of the embedding as knowing the distance to an instance constrains its future location given that objects move smoothly in space.",
"To summarise our novel contributions, we:",
"• introduce a new spatio-temporal embedding loss for video instance segmentation, • show that having a temporal model improves embedding consistency over time,",
"• improve how the embedding disambiguates objects with a self-supervised monocular depth loss, • handle occlusions, contrary to previous IoU based instance correspondence.",
"We demonstrate the efficacy of our method by advancing the state-of-the-art on the KITTI MultiObject and Tracking Dataset (Voigtlaender et al., 2019 ).",
"An example of our model's output is given by Section 1.",
"We proposed a new spatio-temporal embedding loss that generates consistent instance segmentation over time.",
"The temporal network models the past temporal context and the depth network constrains the embedding to aid disambiguation between objects.",
"We demonstrated that our model could effectively track occluded instances or instances with missed detections, by leveraging the temporal context.",
"Our method advanced the state-of-the-art at video instance segmentation on the KITTI Multi-Object and Tracking Dataset.",
"Encoder.",
"The encoder is a ResNet-18 convolutional layer (He et al., 2016) , with 128 output channels.",
"Temporal model.",
"The temporal model contains 12 residual 3D convolutional blocks, with only the first and last block convolving over time.",
"Each residual block is the succession of: projection layer of kernel size 1×1×1 to halve the number of channels, 3D causal convolutional layer t×3×3, projection layer 1 × 1 × 1 to double the number of channels.",
"We set the temporal kernel size to t = 2, and the number of output channels to 128.",
"Decoders.",
"The decoders for instance embedding and depth estimation are identical and consist of 7 convolutional layers with channels [64, 64, 32, 32, 16, 16 ] and 3 upsampling layers.",
"The final convolutional layer contains p channels for instance embedding and 1 channel for depth.",
"Depth Masking.",
"During training, we remove from the photometric reprojection loss the pixels that violate the rigid scene assumption, i.e. the pixels whose appearance do not change between adjacents frames.",
"We set the mask M to only include pixels where the reprojection error is lower with the warped imageÎ s→t than the unwarped source image I s : M = min s e(I t ,Î s→t ) < min s e(I t , I s )",
"Pose Network.",
"The pose network is the succession of a ResNet-18 model followed by 4 convolutions with [256, 256, 256, 6 ] channels.",
"The last feature map is averaged to output a single 6-DoF transformation matrix.",
"Mask Network.",
"The mask network is trained separately to mask the background and is the succession of the Encoder and Decoder described above."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.05714285373687744,
0.1111111044883728,
0.6511628031730652,
0.21276594698429108,
0.1090909019112587,
0.307692289352417,
0.0476190447807312,
0.1702127605676651,
0.04347825422883034,
0.21739129722118378,
0.08888888359069824,
0.09999999403953552,
0.24137930572032928,
0.21276594698429108,
0.19999998807907104,
0.1304347813129425,
0.2666666507720947,
0.09090908616781235,
0.3921568691730499,
0.2222222238779068,
0.19230768084526062,
0.1111111044883728,
0,
0.4680851101875305,
0.16326530277729034,
0.12244897335767746,
0,
0.4878048598766327,
0.1860465109348297,
0.30434781312942505,
0.1904761791229248,
0.09302324801683426,
0.1304347813129425,
0,
0.1395348757505417,
0.1538461446762085,
0.1463414579629898,
0.07692307233810425,
0.06666666269302368,
0.08510638028383255,
0.04999999701976776,
0.04651162400841713
] | HyxTJxrtvr | true | [
"We introduce a new spatio-temporal embedding loss on videos that generates temporally consistent video instance segmentation, even with occlusions and missed detections, using appearance, geometry, and temporal context."
] |
[
"As reinforcement learning continues to drive machine intelligence beyond its conventional boundary, unsubstantial practices in sparse reward environment severely limit further applications in a broader range of advanced fields.",
"Motivated by the demand for an effective deep reinforcement learning algorithm that accommodates sparse reward environment, this paper presents Hindsight Trust Region Policy Optimization (HTRPO), a method that efficiently utilizes interactions in sparse reward conditions to optimize policies within trust region and, in the meantime, maintains learning stability.",
"Firstly, we theoretically adapt the TRPO objective function, in the form of the expected return of the policy, to the distribution of hindsight data generated from the alternative goals.",
"Then, we apply Monte Carlo with importance sampling to estimate KL-divergence between two policies, taking the hindsight data as input.",
"Under the condition that the distributions are sufficiently close, the KL-divergence is approximated by another f-divergence.",
"Such approximation results in the decrease of variance and alleviates the instability during policy update. ",
"Experimental results on both discrete and continuous benchmark tasks demonstrate that HTRPO converges significantly faster than previous policy gradient methods.",
"It achieves effective performances and high data-efficiency for training policies in sparse reward environments.",
"Reinforcement Learning has been a heuristic approach confronting a great many real-world problems from playing complex strategic games (Mnih et al., 2015; Silver et al., 2016; Justesen et al., 2019) to the precise control of robots (Levine et al., 2016; Mahler & Goldberg, 2017; Quillen et al., 2018) , in which policy gradient methods play very important roles (Sutton et al., 2000; Deisenroth et al., 2013) .",
"Among them, the ones based on trust region including Trust Region Policy Optimization (Schulman et al., 2015a) and Proximal Policy Optimization (Schulman et al., 2017) have achieved stable and effective performances on several benchmark tasks.",
"Later on, they have been verified in a variety of applications including skill learning (Nagabandi et al., 2018) , multi-agent control (Gupta et al., 2017) , imitation learning (Ho et al., 2016) , and have been investigated further to be combined with more advanced techniques (Nachum et al., 2017; Houthooft et al., 2016; Heess et al., 2017) .",
"One unresolved core issue in reinforcement learning is efficiently training the agent in sparse reward environments, in which the agent is given a distinctively high feedback only upon reaching the desired final goal state.",
"On one hand, generalizing reinforcement learning methods to sparse reward scenarios obviates designing delicate reward mechanism, which is known as reward shaping (Ng et al., 1999) ; on the other hand, receiving rewards only when precisely reaching the final goal states also guarantees that the agent can focus on the intended task itself without any deviation.",
"Despite the extensive use of policy gradient methods, they tend to be vulnerable when dealing with sparse reward scenarios.",
"Admittedly, policy gradient may work in simple and sufficiently rewarding environments through massive random exploration.",
"However, since it relies heavily on the expected return, the chances in complex and sparsely rewarding scenarios become rather slim, which often makes it unfeasible to converge to a policy by exploring randomly.",
"Recently, several works have been devoted to solving the problem of sparse reward, mainly applying either hierarchical reinforcement learning (Kulkarni et al., 2016; Vezhnevets et al., 2017; Le et al., 2018; Marino et al., 2019) or a hindsight methodology, including Hindsight Experience Replay (Andrychowicz et al., 2017) , Hindsight Policy Gradient (Rauber et al., 2019) and their extensions (Fang et al., 2019; Levy et al., 2019) .",
"The idea of Hindsight Experience Replay(HER) is to regard the ending states obtained through the interaction under current policy as alternative goals, and therefore generate more effective training data comparing to that with only real goals.",
"Such augmentation overcomes the defects of random exploration and allows the agent to progressively move towards intended goals.",
"It is proven to be promising when dealing with sparse reward reinforcement learning problems.",
"For Hindsight Policy Gradient(HPG), it introduces hindsight to policy gradient approach and improves sample efficiency in sparse reward environments.",
"Yet, its learning curve for policy update still oscillates considerably.",
"Because it inherits the intrinsic high variance of policy gradient methods which has been widely studied in Schulman et al. (2015b) , Gu et al. (2016) and Wu et al. (2018) .",
"Furthermore, introducing hindsight to policy gradient methods would lead to greater variance (Rauber et al., 2019) .",
"Consequently, such exacerbation would cause obstructive instability during the optimization process.",
"To design an advanced and efficient on-policy reinforcement learning algorithm with hindsight experience, the main problem is the contradiction between on-policy data needed by the training process and the severely off-policy hindsight experience we can get.",
"Moreover, for TRPO, one of the most significant property is the approximated monotonic converging process.",
"Therefore, how these advantages can be preserved when the agent is trained with hindsight data also remains unsolved.",
"In this paper, we propose a methodology called Hindsight Trust Region Policy Optimization (HTRPO).",
"Starting from TRPO, a hindsight form of policy optimization problem within trust region is theoretically derived, which can be approximately solved with the Monte Carlo estimator using severely off-policy hindsight experience data.",
"HTRPO extends the effective and monotonically iterative policy optimization procedure within trust region to accommodate sparse reward environments.",
"In HTRPO, both the objective function and the expectation of KL divergence between policies are estimated using generated hindsight data instead of on-policy data.",
"To overcome the high variance and instability in KL divergence estimation, another f -divergence is applied to approximate KL divergence, and both theoretically and practically, it is proved to be more efficient and stable.",
"We demonstrate that on several benchmark tasks, HTRPO can significantly improve the performance and sample efficiency in sparse reward scenarios while maintains the learning stability.",
"From the experiments, we illustrate that HTRPO can be neatly applied to not only simple discrete tasks but continuous environments as well.",
"Besides, it is verified that HTRPO can be generalized to different hyperparameter settings with little impact on performance level.",
"We have extended the monotonically converging on-policy algorithm TRPO to accommodate sparse reward environments by adopting the hindsight methodology.",
"The optimization problem in TRPO is scrupulously derived into hindsight formulation and, when the KL-divergence in the constraint function is small enough, it can be tactfully approximated by another f -divergence in order to reduce estimation variance and improve learning stability.",
"Experimental results on a variety of environments demonstrate effective performances of HTRPO, and validate its sample efficiency and stable policy update quality in both discrete and continuous scenarios.",
"Therefore, this work reveals HTRPO's vast potential in solving sparse reward reinforcement learning problems."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.22727271914482117,
0.27586206793785095,
0.052631575614213943,
0.1111111044883728,
0,
0.06451612710952759,
0.0555555522441864,
0.19999998807907104,
0.029411761090159416,
0,
0.10526315122842789,
0.1818181723356247,
0.12121211737394333,
0.22857142984867096,
0.06451612710952759,
0.043478257954120636,
0.1230769157409668,
0.07999999821186066,
0,
0.3333333134651184,
0.22857142984867096,
0.23076923191547394,
0.04651162400841713,
0.1249999925494194,
0.07407406717538834,
0.30434781312942505,
0.06666666269302368,
0.11764705181121826,
0,
0.21276594698429108,
0.23529411852359772,
0.05405404791235924,
0,
0.14999999105930328,
0,
0.05714285373687744,
0.1764705777168274,
0.11320754140615463,
0.04878048226237297,
0.2666666507720947
] | rylCP6NFDB | true | [
"This paper proposes an advanced policy optimization method with hindsight experience for sparse reward reinforcement learning."
] |
[
"Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.",
"Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction.",
"In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams.",
"These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance.",
"We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text.",
"Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question.",
"We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer.",
"By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.",
"The recently released AI2 Reasoning Challenge (ARC) and accompanying ARC Corpus is an ambitious test for AI systems that perform open-domain question answering (QA).",
"This dataset consists of 2590 multiple choice questions authored for grade-school science exams; the questions are partitioned into an Easy set and a Challenge set.",
"The Challenge set comprises questions that cannot be answered correctly by either a Pointwise Mutual Information (PMI-based) solver, or by an Information Retrieval (IR-based) solver.",
"also note that the simple information retrieval (IR) methodology (Elasticsearch) that they use is a key weakness of current systems, and conjecture that 95% of the questions can be answered using ARC corpus sentences.ARC has proved to be a difficult dataset to perform well on, particularly its Challenge partition: existing systems like KG 2 achieve 31.70% accuracy on the test partition.",
"Older models such as DecompAttn BID27 and BiDAF that have shown good performance on other datasets -e.g. SQuAD BID29 ] -perform only 1-2% above random chance.",
"1 The seeming intractability of the ARC Challenge dataset has only very recently shown signs of yielding, with the newest techniques attaining an accuracy of 42.32% on the Challenge set BID35 .",
"2 An important avenue of attack on ARC was identified in Boratko et al. [2018a,b] , which examined the knowledge and reasoning requirements for answering questions in the ARC dataset.",
"The authors note that \"simple reformulations to the query can greatly increase the quality of the retrieved sentences\".",
"They quantitatively measure the effectiveness of such an approach by demonstrating a 42% increase in score on ARC-Easy using a pre-trained version of the DrQA model BID7 .",
"Another recent tack that many top-performing systems for ARC have taken is the use of natural language inference (NLI) models to answer the questions .",
"The NLI task -also sometimes known as recognizing textual entailment -is to determine whether a given natural language hypothesis h can be inferred from a natural language premise p.",
"The NLI problem is often cast as a classification problem: given a hypothesis and premise, classify their relationship as either entailment, contradiction, or neutral.",
"NLI models have improved state of the art performance on a number of important NLP tasks BID27 and have gained recent popularity due to the release of large datasets BID4 BID46 BID43 .",
"In addition to the NLI models, other techniques applied to ARC include using pre-trained graph embeddings to capture commonsense relations between concepts BID51 ; as well as the current state-of-theart approach that recasts multiple choice question answering as a reading comprehension problem that can also be used to fine-tune a pre-trained language model BID35 .ARC",
"Challenge represents a unique obstacle in the open domain QA world, as the questions are specifically selected to not be answerable by merely using basic techniques augmented with a high quality corpus. Our",
"approach combines current best practices: it retrieves highly salient evidence, and then judges this evidence using a general NLI model. While",
"other recent systems for ARC have taken a similar approach BID26 BID25 , our extensive analysis of both the rewriter module as well as our decision rules sheds new light on this unique dataset.In order to overcome some of the limitations of existing retrieval-based systems on ARC and other similar corpora, we present an approach that uses the original question to produce a set of reformulations. These",
"reformulations are then used to retrieve additional supporting text which can then be used to arrive at the correct answer. We couple",
"this with a textual entailment model and a robust decision rule to achieve good performance on the ARC dataset. We discuss",
"important lessons learned in the construction of this system, and key issues that need to be addressed in order to move forward on the ARC dataset.",
"Of the systems above ours on the leaderboard, only BID26 report their accuracy on both the dev set (43.29%) and the test set (36.36%).",
"We suffer a similar loss in performance from 36.37% to 33.20%, demonstrating the risk of overfitting to a (relatively small) development set in the multiplechoice setting even when a model has few learnable parameters.",
"As in this paper, BID26 pursue the approach suggested by Boratko et al. [2018a,b] in learning how to transform a naturallanguage question into a query for which an IR system can return a higher-quality selection of results.",
"Both of these systems use entailment models similar to our match-LSTM BID41 model, but also incorporate additional co-attention between questions, candidate answers, and the retrieved evidence.",
"BID35 present an an encouraging result for combating the IR bottleneck in opendomain QA.",
"By concatenating the top-50 results of a single (joint) query and feeding the result into a neural reader optimized by several lightly-supervised 'reading strategies', they achieve an accuracy of 37.4% on the test set even without optimizing for single-answer selection.",
"Integrating this approach with our query rewriting module is left for future work.",
"In this paper, we present a system that answers science exam questions by retrieving supporting evidence from a large, noisy corpus on the basis of keywords extracted from the original query.",
"By combining query rewriting, background knowledge, and textual entailment, our system is able to outperform several strong baselines on the ARC dataset.",
"Our rewriter is able to incorporate background knowledge from ConceptNet and -in tandem with a generic entailment model trained on SciTail -achieves near state of the art performance on the end-to-end QA task despite only being trained to identify essential terms in the original source question.There are a number of key takeaways from our work: first, researchers should be aware of the impact that Elasticsearch (or a similar tool) can have on the performance of their models.",
"Answer candidates should not be discarded based on the relevance score of their top result; while (correct) answers are likely critical to retrieving relevant results, the original AI2 Rule is too aggressive in pruning candidates.",
"Using an entailment model that is capable of leveraging background knowledge in a more principled way would likely help in filtering unproductive search results.",
"Second, our results corroborate those of BID26 and show that tuning to the dev partition of the Challenge set (299 questions) is extremely sensitive.",
"Though we are unable to speculate on whether this is an artifact of the dataset or a more fundamental concern in multiple-choice QA, it is an important consideration for generating significant and reproducible improvements on the ARC dataset."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.04444443807005882,
0,
0.09302324801683426,
0.08888888359069824,
0.1463414579629898,
0.0923076868057251,
0.1111111044883728,
0.0952380895614624,
0.045454539358615875,
0.09302324801683426,
0.04651162400841713,
0.0810810774564743,
0,
0.04255318641662598,
0.12765957415103912,
0.1111111044883728,
0.045454539358615875,
0.04651162400841713,
0.04347825422883034,
0,
0,
0.0923076868057251,
0.11764705181121826,
0.09756097197532654,
0,
0.20512819290161133,
0.09999999403953552,
0,
0,
0.07843136787414551,
0.1111111044883728,
0.04347825422883034,
0,
0.035087715834379196,
0.12121211737394333,
0.2083333283662796,
0.0952380895614624,
0.09638553857803345,
0,
0.1395348757505417,
0,
0.037735845893621445
] | HJxYZ-5paX | true | [
"We explore how using background knowledge with query reformulation can help retrieve better supporting evidence when answering multiple-choice science questions."
] |
[
"Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret.",
"Machine learning theory implies that such networks are highly over-parameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this.",
"In this paper, we take a further step in this direction by proposing a filter-sharing approach to compressing deep CNNs that reduces their memory footprint by repeatedly applying a single convolutional mapping of learned filters to simulate a CNN pipeline.",
"We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected.",
"At a broader level, our approach also indicates how the scale-space regularities found in visual signals can be leveraged to build neural architectures that are more parsimonious and interpretable.",
"Deep CNNs have achieved state-of-the-art results on a wide range of tasks, from image understanding (Redmon & Farhadi, 2017; Jetley et al., 2017; Kim et al., 2018; Oktay et al., 2018) to natural language processing (Oord et al., 2016; Massiceti et al., 2018) .",
"However, these network architectures are often highly overparameterised (Zhang et al., 2016) , and thus require the supervision of a large number of input-output mappings and significant training time to adapt their parameters to any given task.",
"Recent studies have discovered several different redundancies in these network architectures (Garipov et al., 2016; Hubara* et al., 2018; Wu et al., 2018; Frankle & Carbin, 2019; Yang et al., 2019a; b) and certain simplicities (Pérez et al., 2018; Jetley et al., 2018) in the functions that they implement.",
"For instance, Frankle & Carbin (2019) showed that a large classification network can be distilled down to a small sub-network that, owing to its lucky initialisation, is trainable in isolation without compromising the original classification accuracy.",
"Jetley et al. (2018) observed that deep classification networks learn simplistic non-linearities for class identification, a fact that might well underlie their adversarial vulnerability, whilst challenging the need for complex architectures.",
"Attempts at knowledge distillation have regularly demonstrated that it is possible to train small student architectures to mimic larger teacher networks by using ancillary information extracted from the latter, such as their attention patterns (Zagoruyko & Komodakis, 2017) , predicted soft-target distributions (Hinton et al., 2014) or other kinds of meta-data (Lopes et al., 2017) .",
"These works and others continue to expose the high level of parameter redundancy in deep CNNs, and comprise a foundational body of work towards studying and simplifying networks for safe and practical use.",
"Our paper experiments with yet another scheme for simplifying CNNs, in the hope that it will not only shrink the effective footprint of these networks, but also open up new pathways for network understanding and redesign.",
"In particular, we propose the use of a common set of convolutional filters at different levels of a convolutional hierarchy to achieve class disentanglement.",
"Mathematically, we formulate a classification CNN as an iterative function in which a small set of learned convolutional mappings are applied repeatedly as different layers of a CNN pipeline (see Figure 1) .",
"In doing so, we are able to reduce the parameter count of the network by a factor proportional to its depth, whilst leaving its accuracy largely unaffected.",
"We also investigate the introduction of non-shared linear widths n of the shared convolutional layer, compared to the baseline VGGNet (Simonyan & Zisserman, 2015) , for CIFAR-10 (a) and .",
"The compression factor is plotted on a logarithmic scale.",
"layers before certain shared convolutional layers to enhance the flexibility of the model by allowing it to linearly combine shared filter maps for the disentanglement task.",
"Earlier, Fig. 2 showed the accuracy vs. compression trade-off for S-VGGNet, relative to the original VGGNet (Simonyan & Zisserman, 2015) , for different widths n of the shared convolutional layer.",
"Here, Fig. 3 illustrates the improvements in accuracy due to the learned linear layers (i.e. the blend- The compression factor C is plotted on a log scale. Simonyan & Zisserman, 2015) and (for CIFAR-10) variants of LegoNet (Yang et al., 2019b) , another state-of-the-art compression method.",
"Baseline models marked with a * were retrained for this study.",
"ing layers) on CIFAR-10, CIFAR-100 and Tiny ImageNet.",
"Observably, the use of the linear layers provides greater benefit for datasets that involve discriminating between a larger number of classes, such as CIFAR-100 and Tiny ImageNet.",
"For CIFAR-10, CIFAR-100 and Tiny ImageNet we compare the accuracies of the best-performing 'SL' variants of VGGNet with those of the baseline architecture (and competing compression methods for these datasets, where available) in Table 1 .",
"For CIFAR-10 (see Table 1b ), we are able to achieve comparable classification accuracy to the VGGNet baseline using only n = 256 channels for our shared convolutional layer, which yields a compression factor of ≈ 17×.",
"For CIFAR-100 (Table 1c) , which has 10× more classes, we had to use n = 512 channels to achieve comparable accuracy, but this still yields a significant compression factor of 4.3.",
"Higher compression factors can be achieved by reducing the number of channels, in exchange for some loss in accuracy.",
"Evaluating our shared architecture on Tiny ImageNet (in Table 1d ) evidences a similar trend in the results, with SL2-VGGNet (n = 512 channels) achieving an accuracy comparable to the non-shared baseline, whilst using only 23% of its parameters.",
"Detailed accuracy and memory usage numbers for E-VGGNet, S-VGGNet and SL-VGGNet, for CIFAR-10, are in Table 1a , while the results for CIFAR-100 and Tiny Imagenet can be found in the appendix (see Table 6 in §A.5)",
"We also evaluate our shared ResNet architecture (SL-ResNet) on Tiny ImageNet and ImageNet, with the results shown in Table 2 (the corresponding results for CIFAR-10 and CIFAR-100 can be found in the appendix, see Table 7 in §A.5).",
"For Tiny ImageNet, our SL-ResNet34 (n = 512) variant is able to achieve a compression rate of 8.4 with only a negligible loss in accuracy.",
"For ImageNet, the same variant similarly achieves a compression rate of 8.",
"(Boulch, 2018) , LegoNet (Yang et al., 2019b) , FSNet (Yang et al., 2019a) and Shared Wide ResNet (SWRN) (Savarese & Maire, 2019) .",
"Baseline models marked with a * were retrained for this study.",
"Figure 4: A visual depiction of the linear layers used to blend the input channels in our approach.",
"We show the layers for the two variants in the order (left to right) in which they appear in the networks.",
"For each layer, the input channels are ordered along the x-axis, and the output channels along the y-axis.",
"For each output channel (row), we highlight the lowest 32 weights (in terms of absolute value) in blue, and the highest 32 in red.",
"an accuracy trade-off, we achieve a greater compression rate than competing methods that achieve similar accuracies.",
"Note that SWRN is able to achieve state-of-the-art levels of accuracy, but does not provide savings in the number of parameters.",
"In this paper, we leverage the regularities in visual signals across different scale levels to successfully extend the filter-sharing paradigm to an entire convolutional pipeline for feature extraction.",
"In particular, we instantiate a single convolutional layer and apply it iteratively to simulate conventional VGGNet-like and ResNet-like architectures.",
"We evaluate our shared architectures on four standard benchmarks -CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet -and achieve compression rates that are higher than existing sharing-based methods that have equivalent performance.",
"We further show that even higher compression rates, with little additional loss in performance, can be achieved by combining our method with the magnitude-based weight pruning approach of Han et al. (2015) .",
"Study of our complementarity to more structured pruning techniques targeting complete filters and channels is reserved for future work.",
"We conclude with two final observations.",
"Firstly, our use of blending layers and a parameter to tune the width of the shared convolutional layer n makes it easy to adjust the architecture so as to achieve a desired trade-off between compression rate C and accuracy.",
"Secondly, there are interesting connections between our work and the idea of energy-based pruning explored in (Yang et al., 2017) , where the authors note that a significant fraction of the energy demands of deep network processing come from transferring weights to and from the file system.",
"Our approach bypasses this bottleneck by using the same compact set of weights in an iterative manner.",
"We aim to further investigate this aspect of our method in subsequent work.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0882352888584137,
0.0634920597076416,
0.29032257199287415,
0.4117647111415863,
0.10526315122842789,
0.09677419066429138,
0.09677419066429138,
0.0307692252099514,
0.09836065024137497,
0.14035087823867798,
0.07594936341047287,
0.17543859779834747,
0.032258059829473495,
0.1249999925494194,
0.1818181723356247,
0.42307692766189575,
0.1111111044883728,
0.10810810327529907,
0.12244897335767746,
0.1090909019112587,
0.1111111044883728,
0.05128204822540283,
0,
0.037735845893621445,
0.06779660284519196,
0.1249999925494194,
0.09999999403953552,
0.1304347813129425,
0.1515151411294937,
0.03448275476694107,
0.06557376682758331,
0.11320754140615463,
0.04999999701976776,
0,
0.05128204822540283,
0.08888888359069824,
0.13636362552642822,
0,
0.04081632196903229,
0.1395348757505417,
0.0833333283662796,
0.14814814925193787,
0.21739129722118378,
0.0357142798602581,
0.10169491171836853,
0.04255318641662598,
0.05882352590560913,
0.1666666567325592,
0.11764705181121826,
0.17777776718139648,
0.1463414579629898
] | S1xRxgSFvH | true | [
"We compress deep CNNs by reusing a single convolutional layer in an iterative manner, thereby reducing their parameter counts by a factor proportional to their depth, whilst leaving their accuracies largely unaffected"
] |
[
"We extend the recent results of (Arora et al., 2019) by a spectral analysis of representations corresponding to kernel and neural embeddings.",
"They showed that in a simple single layer network, the alignment of the labels to the eigenvectors of the corresponding Gram matrix determines both the convergence of the optimization during training as well as the generalization properties.",
"We generalize their result to kernel and neural representations and show that these extensions improve both optimization and generalization of the basic setup studied in (Arora et al., 2019).",
"The well-known work of BID8 highlighted intriguing experimental phenomena about deep net trainingspecifically, optimization and generalization -and called for a rethinking of generalization in statistical learning theory.",
"In particular, two fundamental questions that need understanding are: Optimization.",
"Why do true labels give faster convergence rate than random labels for gradient descent?",
"Generalization.",
"What property of properly labeled data controls generalization?",
"BID0 have recently tried to answer this question in a simple model by conducting a spectral analysis of the associated Gram matrix.",
"They show that both training and generalization are better if the label vector aligns with the top eigenvectors.However, their analysis applies only to a simple two layer network.",
"How could their insights be extended to deeper networks?A",
"widely held intuitive view is that deep layers generate expressive representations of the raw input data. Adopting",
"this view, one may consider a model where a representation generated by successive neural network layers is viewed as a kernel embedding which is then fed into the two-layer model of BID0 . The connection",
"between neural networks and kernel machines has long been studied; BID2 ) introduced kernels that mimic deep networks and BID6 showed kernels equivalent to certain feed-forward neural networks. Recently, BID1",
") also make the case that progress on understanding deep learning is unlikely to move forward until similar phenomena in classical kernel machines are recognized and understood. Very recently,",
"BID4 showed that the evolution of a neural network during training can be related to a new kernel, the Neural Tangent Kernel (NTK) which is central to describe the generalization properties of the network.Here we pursue this approach by studying the effect of incorporating embeddings in the simple two layer model and we perform a spectral analysis of these embeddings along the lines of BID0 . We can obtain",
"embeddings in several ways: i. We can use an",
"unbiased kernel such as Gaussian kernel. This choice is",
"consistent with the maximum entropy principle and makes no prior assumption about the data. Or use a kernel",
"which mimics or approximates deep networks ii. We could use data",
"driven embeddings explicitly produced by the hidden layers in neural networks: either use a subset of the same training data to compute such an embedding, or transfer the inferred embedding from a different (but similar) domain.While a general transformation g(x) of the input data may have arbitrary effects, one would expect kernel and neural representations to improve performance. The interplay of",
"kernels and data labellings has been addressed before, for example in the work of kernel-target alignment BID3 ).We do indeed observe",
"a significant beneficial effect: Optimization. Using kernel methods",
"such as random Fourier features (RFF) to approximate the Gaussian kernel embedding BID5 and neural embeddings, we obtain substantially better convergence in training. Generalization. We also",
"achieve significantly",
"lower test error and we confirm that the data dependent spectral measure introduced in BID0 significantly improves with kernel and neural embeddings.Thus this work shows empirically that kernel and neural embeddings improve the alignment of target labels to the eigenvectors of the Gram matrix and thus help training and generalization. This suggests a way to extend",
"the insights of BID0 ) to deeper networks, and possible theoretical results in this direction.",
"We extended the recent results of BID0 by a spectral analysis of the representations corresponding to kernel and neural embeddings and showed that such representations benefit both optimization and generalization.",
"By combining recent results connecting kernel embeddings to neural networks such as BID6 BID4 , one may be able to extend the fine-grained theoretical results of BID0 for two layer networks to deeper networks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.1764705777168274,
0.09999999403953552,
0.25,
0.21621620655059814,
0.09090908616781235,
0.07999999821186066,
0,
0.060606054961681366,
0.14999999105930328,
0,
0.06896550953388214,
0,
0.05405404791235924,
0.09756097197532654,
0.125,
0.0952380895614624,
0,
0.06896550953388214,
0,
0.125,
0.11764705181121826,
0,
0.052631575614213943,
0.1071428507566452,
0.07407406717538834,
0.2702702581882477,
0.04878048226237297
] | rkgcikhcT4 | true | [
"Spectral analysis for understanding how different representations can improve optimization and generalization."
] |
[
"Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks.",
"In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution.",
"Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm.",
"We also show concentration, implying that the uncertainty estimates converge to zero as we get more data.",
"Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines.",
"We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.",
"Deep learning has achieved huge success in many applications.",
"In particular, increasingly often, it is used as a component in decision-making systems.",
"In order to have confidence in decisions made by such systems, it is necessary to obtain good uncertainty estimates, which quantify how certain the network is about a given output.",
"In particular, if the cost of failure is large, for example where the automated system has the capability to accidentally hurt humans, the availability and quality of uncertainty estimates can determine whether the system is safe to deploy at all (Carvalho, 2016; Leibig et al., 2017; Michelmore et al., 2018) .",
"Moreover, when decisions are made sequentially, good uncertainty estimates are crucial for achieving good performance quickly (Bellemare et al., 2016; Houthooft et al., 2016; Ostrovski et al., 2017; Burda et al., 2018) .",
"Because any non-Bayesian inference process is potentially sub-optimal (De Finetti, 1937) , these uncertainty estimates should ideally be relatable to Bayesian inference with a useful prior.",
"Deep ensembles (Lakshminarayanan et al., 2017) , one of the most popular methods available for uncertainty estimation in deep networks today, struggle with this requirement.",
"While deep ensembles can be related (Rubin, 1981) to Bayesian inference in settings where the individual models are trained on subsets of the data, this is not how they are used in practice.",
"In order to improve data efficiency, all ensembles are typically trained using the same data (Lakshminarayanan et al., 2017) , resulting in a method which does not have a theoretical justification.",
"Moreover, deep ensembles can give overconfident uncertainty estimates in practice.",
"On the other hand, Monte-Carlo dropout can be viewed (Gal & Ghahramani, 2016) as a certain form of Bayesian inference.",
"However, doing so requires requires either a limit to be taken or a generalization of variational inference to a quasi-KL divergence .",
"In practice, MC dropout can give arbitrarily overconfident estimates (Foong et al., 2019) .",
"More broadly, a category of approaches, known as Bayesian Neural Networks (Blundell et al., 2015; Welling & Teh, 2011; Neal, 1996) , maintains a distribution over the weights of the neural network.",
"These methods have a sound Bayesian justification, but training them is both difficult and carries an accuracy penalty, particularly for networks with convolutional architectures (Osawa et al., 2019) .",
"Moreover, tuning BNNs is hard and achieving a good approximation to the posterior is difficult (Brosse et al., 2018) .",
"We use another way of obtaining uncertainties for deep networks, based on fitting random priors (Osband et al., 2018; 2019) .",
"Random priors are easy to train and were found to work very well in practice (Burda et al., 2018) .",
"To obtain the uncertainty estimates, we first train a predictor network to fit a prior.",
"Two examples of prior-predictor pairs are shown in the top two plots of Figure 1 .",
"On top, two predictors (green) were trained to fit two randomlygenerated priors (red).",
"On the bottom, we obtain uncertainties from the difference between predictors and priors.",
"Dots correspond to training points x i .",
"Faced with a novel input point, we obtain an uncertainty ( Figure 1 , bottom plot) by measuring the error of the predictor network against this pattern.",
"Intuitively, these errors will be small close to the training points, but large far from them.",
"The patterns themselves are drawn from randomly initialized (and therefore untrained) neural networks.",
"While this way of estimating uncertainties was known before (Osband et al., 2019) , it did not have a theoretical justification beyond Bayesian linear regression, which is too limiting for modern applications.",
"Contributions We provide a sound theoretical framework for obtaining uncertainty estimates by fitting random priors, a method previously lacking a principled justification.",
"Specifically, we justify estimates in the uncertainty of the output of neural networks with any architecture.",
"In particular, we show in Lemma 1 and Proposition 1 that these uncertainty estimates are conservative, meaning they are never more certain than a Bayesian algorithm would be.",
"Moreover, in Proposition 2 we show concentration, i.e. that the uncertainties become zero with infinite data.",
"Empirically, we evaluate the calibration and out-of-distribution performance of our uncertainty estimates on typical computer vision tasks, showing a practical benefit over deep ensembles and MC dropout.",
"We now re-visit the algorithm we defined in Section 3, with the aim of using the theory above to obtain practical improvements in the quality of the uncertainty estimates.",
"Architecture and Choosing the Number of Bootstraps Our conservatism guarantee in Proposition 1 holds for any architecture for the predictor h Xf .",
"In theory, the predictor could be completely arbitrary and does not even have to be a deep network.",
"In particular, there is no formal requirement for the predictor architecture to be the same as the prior.",
"On the other hand, to show concentration in Proposition 2, we had to ensure that the prior networks are representable by the predictor.",
"In practice, we use the architecture shown in Figure 2 , where the predictor mirrors the prior, but has additional layers, giving it more representational power.",
"Moreover, the architecture requires choosing the number of bootstraps B. Our experiments in Section 7 show that even using B = 1, i.e. one bootstrap, produces uncertainty estimates of high quality in practice.",
"Modeling Epistemic and Aleatoric Uncertainty Proposition 1 and Proposition 2 hold for any Gaussian Process prior.",
"By choosing the process appropriately, we can model both epistemic and aleatoric uncertainty.",
"Denote by {n(x)} a stochastic process obtained by randomly initializing neural networks and denote by { (x)σ 2 A } the noise term, modeling the aleatoric (observation) noise, where samples are obtained from (x) ∼ N (0, 1) at each x independently (see Appendix D for more background on aleatoric noise).",
"We can now choose the prior process as a sum {f (x)} = {n(x) + (x)σ 2 A } of epistemic component {n(x)} and the noise term.",
"The amount of aleatoric uncertainty can be adjusted by choosing σ 2 A .",
"Prior Choice, Weight Copying and Conservatism One question that can be asked about our architecture (Figure 2) is whether it is possible for the predictor to exactly copy the prior weights, giving zero uncertainty everywhere.",
"A useful edge case to consider here is when we are solving a one-dimensional regression problem, σ 2 A = 0 and the both the priors and predictors are linear functions.",
"In this case, after training on two points, the predictors will agree with the priors everywhere and uncertainty estimates will be zero.",
"However, this is still consistent with our conservatism guarantee The reason for this is once we assume such a linear prior, we are comparing to a GP with a linear kernel.",
"But a GP with that kernel will also have zero uncertainty after seeing two samples.",
"In practice, this means that we have to choose the architecture of the prior networks be expressive enough, which is no different from choosing a reasonable prior for Bayesian inference.",
"Empirically, the tested network architecture did not show weight copying.",
"We provided a theoretical justification for the use of random priors for obtaining uncertainty estimates in the context of deep learning.",
"We have shown that the obtained uncertainties are conservative and that they concentrate for any neural network architecture.",
"We performed an extensive empirical comparison, showing that random priors perform similarly to deep ensembles in a typical supervised training setting, while outperforming them in a regime where we are able to accomplish near-zero training loss for the predictors.",
"For the 1D regression experiment on synthetic data (Fig 1) , we used feed-forward neural networks with 2 layers of 128 units each and a 1-dimensional output layer.",
"We used an ensemble size of 5.",
"The network was trained on 20 points sampled from the negative domain of a sigmoid function and tested on 20 points sampled from the positive domain."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.29629629850387573,
0.06451612710952759,
0.2222222238779068,
0.25806450843811035,
0.4000000059604645,
0.2631579041481018,
0.08695651590824127,
0,
0.0952380895614624,
0.145454540848732,
0.15789473056793213,
0.1538461446762085,
0.1538461446762085,
0.09090908616781235,
0.09302324801683426,
0.25,
0,
0.06451612710952759,
0.0714285671710968,
0,
0.04651162400841713,
0.060606054961681366,
0.34285715222358704,
0.12121211737394333,
0.1428571343421936,
0,
0.1538461446762085,
0.07692307233810425,
0.0952380895614624,
0.04999999701976776,
0.06666666269302368,
0,
0.08695651590824127,
0.47058823704719543,
0.1428571343421936,
0.09999999403953552,
0,
0.14999999105930328,
0.21621620655059814,
0.05882352590560913,
0.12903225421905518,
0.13333332538604736,
0.05882352590560913,
0,
0.09090908616781235,
0.0714285671710968,
0.07407406717538834,
0.06666666269302368,
0.04999999701976776,
0.07407406717538834,
0.12765957415103912,
0.09756097197532654,
0.1764705777168274,
0.10526315122842789,
0.06896550953388214,
0.0952380895614624,
0,
0.5625,
0.19354838132858276,
0.2448979616165161,
0,
0.0952380895614624,
0
] | BJlahxHYDS | true | [
"We provide theoretical support to uncertainty estimates for deep learning obtained fitting random priors."
] |
[
"In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows.",
"The proposed model is capable of mapping any partial data to a multi-modal latent variational distribution.",
"Sampling from such a distribution leads to stochastic imputation.",
"Preliminary evaluation on MNIST dataset shows promising stochastic imputation conditioned on partial images as input.",
"Neural network based algorithms have been shown effective and promising for various downstream tasks including classification (Deng et al., 2009; Damianou and Lawrence, 2013) , retrieval (Carvalho et al., 2018) , prediction (He et al., 2018) , and more.",
"In order to correctly learn how to perform these tasks, they usually rely strictly on access to fully-observed data.",
"However, acquiring this type of data in real life requires tremendous human effort, limiting the applicability of this family of models.",
"Having a framework designed to perform inference on partially-observed data will not only alleviate the aforementioned constraint, but also open possibilities to perform data imputation, in which the missing data is inferred.",
"Data imputation, also referred to conditional generation, has been an active research area (Little and Rubin, 1986; Song et al., 2018; Zadeh et al., 2019) .",
"The probabilistic nature of this task makes it difficult to adopt off-the-shelf deterministic models widely studied.",
"In other words, conditioned on the same partially-observed data as input, multiple plausible fully-observed data should be able to be imputed.",
"Variational autoencoders (VAEs) (Kingma and Welling, 2013) , as a popular probabilistic modelling approach, have been applied to the data imputation task recently.",
"A variational autoencoder defines a generative process that jointly models the distribution p θ (x, z) of the observed variable x and latent variable z, governed by parameters θ.",
"Instead of performing local inference, VAEs include an inference network parameterized by φ to output an approximate posterior distribution q φ (z|x).",
"Both the generative model and the inference model are optimized with a unified evidence lower bound (ELBO) on marginal data likelihood:",
". Recent literature on utilizing VAEbased models mainly focus on the effectiveness of combination of various obversed parts (Ma et al., 2019; Ivanov et al., 2018) . Different from the related works described above, we propose to enrich the latent space of variational autoencoders to enable multi-modal posterior inference, and therefore probabilistic imputation. Specifically, we use a two-stage model, with first-stage focusing on learning a representation space based on fully-observed data, and second-stage focusing on aligning the representation space embedded from partially-observed data to the one in stage-one. Using flow-based transformations for constructing a rich latent distribution, the proposed model is capable of inferring multi-modal variational latent distributions."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8387096524238586,
0.13333332538604736,
0.08695651590824127,
0.0714285671710968,
0.045454539358615875,
0.06451612710952759,
0.0624999962747097,
0.09756097197532654,
0.10526315122842789,
0,
0.060606054961681366,
0.21621620655059814,
0.09999999403953552,
0.05882352590560913,
0.12121211737394333,
0.1304347813129425
] | r1eP5khVKB | true | [
"We propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows"
] |
[
"This paper studies \\emph{model inversion attacks}, in which the access to a model is abused to infer information about the training data.",
"Since its first introduction by~\\citet{fredrikson2014privacy}, such attacks have raised serious concerns given that training data usually contain sensitive information.",
"Thus far, successful model inversion attacks have only been demonstrated on simple models, such as linear regression and logistic regression.",
"Previous attempts to invert neural networks, even the ones with simple architectures, have failed to produce convincing results.",
"We present a novel attack method, termed the \\emph{generative model inversion attack}, which can invert deep neural networks with high success rates.",
"Rather than reconstructing private training data from scratch, we leverage partial public information, which can be very generic, to learn a distributional prior via generative adversarial networks (GANs) and use it to guide the inversion process.",
"Moreover, we theoretically prove that a model's predictive power and its vulnerability to inversion attacks are indeed two sides of the same coin---highly predictive models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount the attacks.\n",
"Our experiments demonstrate that the proposed attack improves identification accuracy over the existing work by about $75\\%$ for reconstructing face images from a state-of-the-art face recognition classifier.",
"We also show that differential privacy, in its canonical form, is of little avail to protect against our attacks.",
"Deep neural networks (DNNs) have been adopted in a wide range of applications, including computer vision, speech recognition, healthcare, among others.",
"The fact that many compelling applications of DNNs involve processing sensitive and proprietary datasets raised great concerns about privacy.",
"In particular, when machine learning (ML) algorithms are applied to private training data, the resulting models may unintentionally leak information about training data through their output (i.e., black-box attack) or their parameters (i.e., white-box attack).",
"A concrete example of privacy attacks is model inversion (MI) attacks, which aim to reconstruct sensitive features of training data by taking advantage of their correlation with the model output.",
"Algorithmically, MI attacks are implemented as an optimization problem seeking for the sensitive feature value that achieves the maximum likelihood under the target model.",
"The first MI attack was proposed in the context of genomic privacy (Fredrikson et al., 2014) , where the authors showed that adversarial access to a linear regression model for personalized medicine can be abused to infer private genomic attributes about individuals in the training dataset.",
"Recent work (Fredrikson et al., 2015) extended MI attacks to other settings, e.g., recovering an image of a person from a face recognition model given just their name, and other target models, e.g., logistic regression and decision trees.",
"Thus far, effective MI attacks have only been demonstrated on the aforementioned simple models.",
"It remains an open question whether it is possible to launch the attacks against a DNN and reconstruct its private training data.",
"The challenges of inverting DNNs arise from the intractability and ill-posedness of the underlying attack optimization problem.",
"For neural networks, even the ones with one hidden layer, the corresponding attack optimization becomes a non-convex problem; solving it via gradient descent methods may easily stuck in local minima, which leads to poor attack performance.",
"Moreover, in the attack scenarios where the target model is a DNN (e.g., attacking face recognition models), the sensitive features (face images) to be recovered often lie in a high-dimensional, continuous data space.",
"Directly optimizing over the high-dimensional space without any constraints may generate unrealistic features lacking semantic information (See Figure 1) .",
"Figure 1 : Reconstruction of the individual on the left by attacking three face recognition models (logistic regression, one-hidden-layer and twohidden-layer neural network) using the existing attack algorithm in (Fredrikson et al., 2015) In this paper, we focus on image data and propose a simple yet effective attack method, termed the generative model inversion (GMI) attack, which can invert DNNs and synthesize private training data with high fidelity.",
"The key observation supporting our approach is that it is arguably easy to obtain information about the general data distribution, especially for the image case.",
"For example, against a face recognition classifier, the adversary could randomly crawl facial images from the Internet without knowing the private training data.",
"We find these datasets, although may not contain the target individuals, still provide rich knowledge about how a face image might be structured; extraction and proper formulation of such prior knowledge will help regularize the originally ill-posed inversion problem.",
"We also move beyond specific attack algorithms and explore the fundamental reasons for a model's susceptibility to inversion attacks.",
"We show that the vulnerability is unavoidable for highly predictive models, since these models are able to establish a strong correlation between features and labels, which coincides exactly with what an adversary exploits to mount MI attacks.",
"Our contributions can be summarized as follows: (1) We propose to use generative models to learn an informative prior from public datasets so as to regularize the ill-posed inversion problem.",
"(2) We propose an end-to-end GMI attack algorithm based on GANs, which can reveal private training data of DNNs with high fidelity.",
"(3) We present a theoretical result that uncovers the fundamental connection between a model's predictive power and its susceptibility to general MI attacks and empirically validate it.",
"(4) We conduct extensive experiments to demonstrate the performance of the proposed attack.",
"Experiment code is publicly available at https://tinyurl.com/yxbnjk4s.",
"Related Work Privacy attacks against ML models consist of methods that aim to reveal some aspects of training data.",
"Of particular interest are membership attacks and MI attacks.",
"Membership attacks aim to determine whether a given individual's data is used in training the model (Shokri et al., 2017) .",
"MI attacks, on the other hand, aim to reconstruct the features corresponding to specific target labels.",
"In parallel to the emergence of various privacy attack methods, there is a line work that formalizes the privacy notion and develops defenses with formal and provable privacy guarantees.",
"One dominate definition of privacy is differential privacy (DP), which carefully randomizes an algorithm so that its output does not to depend too much on any individuals' data (Dwork et al., 2014) .",
"In the context of ML algorithms, DP guarantees protect against attempts to infer whether a data record is included in the training set from the trained model (Abadi et al., 2016) .",
"By definition, DP limits the success rate of membership attacks.",
"However, it does not explicitly protect attribute privacy, which is the target of MI attacks (Fredrikson et al., 2014) .",
"The first MI attack was demonstrated in (Fredrikson et al., 2014) , where the authors presented an algorithm to recover genetic markers given the linear regression that uses them as input features, the response of the model, as well as other non-sensitive features of the input.",
"Hidano et al. (2017) proposed a algorithm that allows MI attacks to be carried out without the knowledge of non-sensitive features by poisoning training data properly.",
"Despite the generality of the algorithmic frameworks proposed in the above two papers, the evaluation of the attacks is only limited to linear models.",
"Fredrikson et al. (2015) discussed the application of MI attacks to more complex models including some shallow neural networks in the context of face recognition.",
"Although the attack can reconstruct face images with identification rates much higher than random guessing, the recovered faces are indeed blurry and hardly recognizable.",
"Moreover, the quality of reconstruction tends to degrade for more complex architectures.",
"Yang et al. (2019b) proposed to train a separate network that swaps the input and output of the target network to perform MI attacks.",
"The inversion model can be trained with black-box accesses to the target model.",
"However, their approach cannot directly be benefited from the white-box setting.",
"Moreover, several recent papers started to formalize MI attacks and study the factors that affect a model's vulnerability from a theoretical viewpoint.",
"For instance, Wu et al. (2016) characterized model invertibility for Boolean functions using the concept of influence from Boolean analysis; Yeom et al. (2018) formalized the risk that the model poses specifically to individuals in the training data and shows that the risk increases with the degree of overfitting of the model.",
"However, their theory assumed that the adversary has access to the join distribution of private feature and label, which is overly strong for many attack scenarios.",
"Our theory does not rely on this assumption and better supports the experimental findings.",
"In this paper, we present a generative approach to MI attacks, which can achieve the-state-of-the-art success rates for attacking the DNNs with high-dimensional input data.",
"The idea of our approach is to extract generic knowledge from public datasets via GAN and use it to regularize the inversion problem.",
"Our experimental results show that our proposed attack is highly performant even when the public datasets (1) do not include the identities that the adversary aims to recover, (2) are unlabeled, (3) have small sizes, (4) come from a different distribution from the private data.",
"We also provide theoretical analysis showing the fundamental connection between a model's predictive power and its vulnerability to inversion attacks.",
"For future work, we are interested in extending the attack to the black-box setting and studying effective defenses against MI attacks.",
"A PROOF OF THEOREM 1 Theorem 2.",
"Let f 1 and f 2 are two models such that for any fixed label y ∈ Y, U f1 (x ns , y) ≥ U f2 (x ns , y).",
"Then, S KL (p(X s |y, x ns )||p f1 (X s |y, x ns )) ≥ S KL (p(X s |y, x ns )||p f2 (X s |y, x ns )).",
"Proof.",
"We can expand the KL divergence D KL (p(X s |y, x ns )||p f1 (X s |y, x ns ) as follows.",
"Thus,",
"B EXPERIMENTAL DETAILS B.1",
"NETWORK ARCHITECTURE",
"The detailed architectures for the two encoders, the decoder of the generator, the local discriminator, and the global discriminator are presented in Table 6, Table 7, Table 8 , Table 9 , and Table 10 , respectively.",
"(1) LeNet adapted from (Lecun et al., 1998) , which has three convolutional layers, two max pooling layers and one FC layer; (2) SimpleCNN, which has five convolutional layers, each followed by a batch normalization layer and a leaky ReLU layer; (3) SoftmaxNet, which has only one FC layer.",
"We split the MNIST dataset into the private set used for training target networks with digits 0 ∼ 4 and the public set used for distilling prior knowledge with digits 5 ∼ 9.",
"The target network is implemented as a Multilayer Perceptron with 2 hidden layers, which have 512 and 256 neurons, respectively.",
"The evaluation classifier is a convulutional neural network with three convolution layers, followed by two fully-connected layers.",
"It is trained on the entire MNIST training set and can achieve 99.2% accuracy on the MNIST test set.",
"Differential privacy of target networks is guaranteed by adding Gaussian noise to each stochastic gradient descent step.",
"We use the moment accounting technique to keep track of the privacy budget spent during training (Abadi et al., 2016) .",
"During the training of the target networks, we set the batch size to be 256.",
"We fix the number of epochs to be 40 and clip the L2 norm of per-sample gradient to be bounded by 1.5.",
"We set the ratio between the noise scale and the gradient clipping threshold to be 0, 0.694, 0.92, 3, 28, respectively, to obtain the target networks with ε = ∞, 9.89, 4.94, 0.98, 0.10 when δ = 10 −5 .",
"For model with ε = 0.1, we use the SGD with a small learning rate 0.01 to ensure stable convergence; otherwise, we set the learning rate to be 0.1.",
"The architecture of the generator in Section B.1 is tailored to the MNIST dataset.",
"We reduce the number of input channels, change the size of kernels, and modify the layers of discriminators to be compatible with the shape of the MNIST data.",
"To train the GAN in the first stage of our GMI attack, we set the batch size to be 64 and use the Adam optimizer with the learning rate 0.004, β 1 = 0.5, and β 2 = 0.999 (Kingma and Ba, 2014) .",
"For the second stage, we set the batch size to be 64 and use the SGD with the Nesterov momentum that has the learning rate 0.01 and momentum 0.9.",
"The optimization is performed for 1500 iterations.",
"The center mask depicted in the main text is used to block the central part of digits.",
"We report the attack accuracy averaged across 640 randomly sampled images from the private set and 5 random initializations of the latent vector for each sampled image."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.15789473056793213,
0.21621620655059814,
0,
0.05714285373687744,
0.29999998211860657,
0.18867924809455872,
0.16949151456356049,
0.23255813121795654,
0.21621620655059814,
0.10256409645080566,
0.21621620655059814,
0.11764705181121826,
0.2666666507720947,
0.14999999105930328,
0.23728813230991364,
0.1111111044883728,
0.0624999962747097,
0.19999998807907104,
0.24242423474788666,
0.11538460850715637,
0.20408162474632263,
0.05405404791235924,
0.15189872682094574,
0.1463414579629898,
0.20512819290161133,
0.145454540848732,
0.21621620655059814,
0.14814814925193787,
0.17777776718139648,
0.25,
0.23255813121795654,
0.2666666507720947,
0,
0.1666666567325592,
0,
0.1538461446762085,
0.0624999962747097,
0.2790697515010834,
0.23999999463558197,
0.2083333283662796,
0.1428571343421936,
0.10526315122842789,
0.2142857164144516,
0.22727271914482117,
0.10810810327529907,
0.09756097197532654,
0.1463414579629898,
0.13333332538604736,
0.307692289352417,
0.13333332538604736,
0.13793103396892548,
0.20512819290161133,
0.1818181723356247,
0.1860465109348297,
0.0624999962747097,
0.23255813121795654,
0.14999999105930328,
0.20689654350280762,
0.21052631735801697,
0.10526315122842789,
0,
0.04651162400841713,
0,
0.1666666567325592,
0,
0.09090908616781235,
0.072727270424366,
0.09302324801683426,
0.052631575614213943,
0.05714285373687744,
0.11764705181121826,
0.11428570747375488,
0.21052631735801697,
0.12903225421905518,
0.1621621549129486,
0.07407406717538834,
0.0952380895614624,
0.1249999925494194,
0.25641024112701416,
0.07407406717538834,
0.0952380895614624,
0,
0.11764705181121826,
0.2380952388048172
] | ByevJerKwS | true | [
"We develop a privacy attack that can recover the sensitive input data of a deep net from its output"
] |
[
"Latent space based GAN methods and attention based encoder-decoder architectures have achieved impressive results in text generation and Unsupervised NMT respectively.",
"Leveraging the two domains, we propose an adversarial latent space based architecture capable of generating parallel sentences in two languages concurrently and translating bidirectionally.",
"The bilingual generation goal is achieved by sampling from the latent space that is adversarially constrained to be shared between both languages.",
"First an NMT model is trained, with back-translation and an adversarial setup, to enforce a latent state between the two languages.",
"The encoder and decoder are shared for the two translation directions.",
"Next, a GAN is trained to generate ‘synthetic’ code mimicking the languages’ shared latent space.",
"This code is then fed into the decoder to generate text in either language.",
"We perform our experiments on Europarl and Multi30k datasets, on the English-French language pair, and document our performance using both Supervised and Unsupervised NMT.",
"Neural machine translation (NMT) and neural text generation (NTG) are among the pool of successful NLP tasks handled by neural approaches.",
"For example, NMT has acheived close to human-level performance using sequence to sequence models, which tries to solve the translation problem endto-end.",
"NTG techniques can be categorized into three classes: Maximum Likelihood Estimation based, GAN-based and reinforcement learning (RL)-based.",
"Recently, researchers have extensively used GANs BID8 as a potentially powerful generative model for text BID32 , because of their great success in the field of image generation.Inspired by human bilingualism, this work proposes a Bilingual-GAN agent, capable of deriving a shared latent space between two languages, and then leveraging that shared space in translation and text generation in both languages.",
"Currently, in the literature, neural text generation (NTG) and NMT are treated as two independent problems; however, we believe that they are two sides of the same coin and could be studied jointly.",
"Emerging latent variable-based techniques can facilitate unifying NTG and NMT and the proposed Bilingual-GAN will be a pioneering attempt in this direction.Learning latent space manifold via adversarial training has gained a lot of attention recently BID21 ; text generation and unsupervised NMT BID15 are among these examples where autoencoder (AE) latent space manifolds are learned adversarially.",
"For NTG, in Adversarially Regularized Autoencoders (ARAE) work , a critic-generator-autoencoder combo is proposed to tackle the non-differentiability problem rising due to the discrete nature of text.",
"The ARAE approach is to learn the continuous manifold of the autoencoder latent space and generate samples from it instead of direct synthesis of discrete (text) outputs.",
"Output text is then reconstructed by the decoder from the generated latent samples, similarly to the autoencoding process.Adversarial learning of autoencoders' latent manifold has also been used for unsupervised NMT BID15 BID17 BID30 BID1 .",
"In BID15 , a single denoising autoencoder is trained to derive a shared latent space between two languages using different loss functions.",
"One of their objectives adversarially enforces the latent space generated by the encoders of the different languages to become shared and difficult to tell apart.",
"Other objectives are autoencoder reconstruction measures and a cross-domain cost closely related to backtranslation BID24 terms.The contribution of this paper is to propose a latent space based architecture as a bilingual agent handling text generation and machine translation simultaneously.",
"We demonstrate that our method even works when using complex multi-dimensional latent representations with attention based decoders, which weren't used in 2 RELATED WORK 2.1 LATENT SPACE BASED UNMT Neural Machine Translation BID10 BID26 BID27 constitutes the state-of-the-art in translation tasks for the majority of language pairs.",
"On the unsupervised side, a few works BID15 ; BID0 ; BID16 have emerged recently to deal with neural machine translation without using parallel corpora, i.e sentences in one language have no matching translation in the other language.",
"They all have a similar approach to unsupervised neural machine translation (UNMT) that uses an encoder-decoder pair sequence-to-sequence model that is shared between the languages while trying to find a latent space common to both languages.",
"They all make use of back-translation BID24 needed for the unsupervised part of the training.",
"BID15 use a word by word translation dictionary learned in an unsupervised way BID5 as part of their back-translation along with an adversarial loss to enforce language Independence in the latent code space.",
"They later improve their model BID16 by removing these two elements and instead using a BPE sub-word tokenization BID23 with embeddings learned using FastText BID3 so that the sentences are embedded in a common space.",
"BID0 have a similar flavour but uses some crosslingual embeddings to embed sentences in a shared space.",
"They also decouple the decoder so that one is used per language.",
"Our work proposed a novel method combining neural machine translation with word-based adversarial language generation to generate bilingual, aligned sentences.",
"This work demonstrates the deep common grounds between language (text) generation and translation, which have not been studied before.",
"We also explored learning a large code space comprising of the hidden states of an RNN over the entire sequence length.",
"The results are promising and motivate a few improvements such as improving the quality of the generated sentences and eliminating language specific performance degradation.",
"Finally, various generation methods including reinforcement learning-based, codebased, text-based and mixed methods can be incorporated into the proposed framework to improve the performance of bilingual text generation.",
"Since during language generation our learned code space favors English sentences over French sentences, we need to remove language specific biases or explore disentangling the code space into language specific and language agnostic subspaces."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.05714285373687744,
0.25641024112701416,
0.05405404791235924,
0.1666666567325592,
0.14814814925193787,
0.06451612710952759,
0.06666666269302368,
0.0555555522441864,
0,
0,
0,
0.1492537260055542,
0.08888888359069824,
0.0615384578704834,
0.09756097197532654,
0,
0.0416666604578495,
0.1621621549129486,
0.05405404791235924,
0.038461532443761826,
0.13114753365516663,
0.16326530277729034,
0.08510638028383255,
0.06896550953388214,
0.08695651590824127,
0.16326530277729034,
0.1875,
0,
0.2222222238779068,
0,
0.11428570747375488,
0.10526315122842789,
0,
0.045454539358615875
] | BJgAfh09tm | true | [
"We present a novel method for Bilingual Text Generation producing parallel concurrent sentences in two languages."
] |
[
"As Artificial Intelligence (AI) becomes an integral part of our life, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. ",
"For a robotic teammate, the ability to generate explanations to explain its behavior is one of the key requirements of an explainable agency.",
"Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior.",
"These approaches, however, fail to consider the mental workload needed to understand the received explanation.",
"In other words, the human teammate is expected to understand any explanation provided, often before the task execution, no matter how much information is presented in the explanation.\n",
"In this work, we argue that an explanation, especially complex ones, should be made in an online fashion during the execution, which helps spread out the information to be explained and thus reducing the mental workload of humans.",
"However, a challenge here is that the different parts of an explanation are dependent on each other, which must be taken into account when generating online explanations.",
"To this end, a general formulation of online explanation generation is presented along with three different implementations satisfying different online properties.",
"We base our explanation generation method on a model reconciliation setting introduced in our prior work.",
"Our approaches are evaluated both with human subjects in a standard planning competition (IPC) domain, using NASA Task Load Index (TLX), as well as in simulation with ten different problems across two IPC domains.\n",
"As intelligent robots become more prevalent in our lives, the interaction of these AI agents with humans becomes more frequent and essential.",
"One of the most important aspects of human-AI interaction is for the AI agent to provide explanations to convey the reasoning behind the robot's decision-making BID0 .",
"An explanation provides justifications for the agent's intent, which helps the human maintain trust of the robotic peer as well as a shared situation awareness BID1 , BID2 .",
"Prior work on explanation generation often focuses on supporting the motivation for the agent's decision while ignoring the underlying requirements of the recipient to understand the explanation BID3 , BID4 , BID5 .",
"However, a good explanation should be generated in a lucid fashion from the recipient's perspective BID6 .To",
"address this challenge, the agent should consider the discrepancies between the human and its own model while generating explanations. In",
"our prior work BID6 , we encapsulate such inconsistencies as model differences. An",
"explanation then becomes a request to the human to adjust The model reconciliation setting BID6 . M",
"R represents the robot's model and M H represents the human's model of expectation. Using",
"M H , the human generates π M H , which captures the human's expectation of the robot. Whenever",
"the two plans are different, the robot should explain by generating an explanation to reconcile the two models. the model",
"differences in his mind so that the robot's behavior would make sense in the updated model, which is used to produce the human's expectation of the robot. The general",
"decision-making process of an agent in the presence of such model differences is termed model reconciliation BID6 , BID7 .One remaining",
"issue, however, is the ignorance of the mental workload required of the human for understanding an explanation. In most earlier",
"work on explanation generation, the human is expected to understand any explanation provided regardless of how much information is present and no discussion has been provided on the process for presenting the information. In this work, we",
"argue that explanations, especially complex ones, should be provided in an online fashion, which intertwines the communication of explanations with plan execution. In such a manner",
", an online explanation requires less mental workload at any specific point of time. One of the main",
"challenges here, however, is that the different parts of an explanation could be dependent on each other, which must be taken into account when generating online explanations. The online explanation",
"generation process spreads out the information to be communicated while ensuring that they do not introduce cognitive dissonance so that the different parts of the information are perceived in a smooth fashion.",
"In this paper, we introduced a novel approach for explanation generation to reduce the mental workload needed for the human to interpret the explanations, throughout a humanrobot interaction scheme.",
"The key idea here is to break down a complex explanation into smaller parts and convey them in an online fashion, while intertwined with the plan execution.",
"We take a step further from our prior work by considering not only providing the correct explanations, but also the explanations that are easily understandable.",
"We provided three different approaches each of which focuses on one aspect of explanation generation weaved in plan execution.",
"This is an important step toward achieving explainable AI.",
"We evaluated our approaches using both simulation and human subjects.",
"Results showed that our approaches achieved better task performance while reducing the mental workload."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09756097197532654,
0.1666666567325592,
0.1428571343421936,
0.27586206793785095,
0.19512194395065308,
0.1599999964237213,
0.1860465109348297,
0.17142856121063232,
0.12903225421905518,
0.0416666604578495,
0.10810810327529907,
0.2702702581882477,
0.24390242993831635,
0.24390242993831635,
0.1875,
0.23529411852359772,
0,
0.25806450843811035,
0.1428571343421936,
0.19999998807907104,
0.25,
0.1463414579629898,
0.17142856121063232,
0.3636363446712494,
0.260869562625885,
0.1428571343421936,
0.24242423474788666,
0.1860465109348297,
0.22727271914482117,
0.25,
0.1860465109348297,
0.14999999105930328,
0.1764705777168274,
0,
0.1538461446762085,
0.06666666269302368
] | Byg6QT3QqV | true | [
"We introduce online explanation to consider the cognitive requirement of the human for understanding the generated explanation by the agent."
] |
[
"Deep neural networks use deeper and broader structures to achieve better performance and consequently, use increasingly more GPU memory as well.",
"However, limited GPU memory restricts many potential designs of neural networks.",
"In this paper, we propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost, without sacrificing the accuracy of models.",
"Variable swapping can transfer variables between CPU and GPU memory to reduce variables stored in GPU memory.",
"Recomputation can trade time for space by removing some feature maps during forward propagation.",
"Forward functions are executed once again to get the feature maps before reuse.",
"However, how to automatically decide which variables to be swapped or recomputed remains a challenging problem.",
"To address this issue, we propose to use a deep Q-network(DQN) to make plans.",
"By combining variable swapping and recomputation, our results outperform several well-known benchmarks.",
"Limited GPU memory restricts model performance due to two different reasons.",
"Firstly, there is a trend that deep neural networks (DNNs) use deeper and more GPU memory-intensive structures (Wang et al., 2018) , and have continuously made improvement in various computer vision areas such as image classification, object detection, and semantic segmentation (He et al., 2016a; Simonyan & Zisserman, 2014; Krizhevsky et al., 2012; Ronneberger et al., 2015; Goodfellow et al., 2016; Szegedy et al., 2015) .",
"Likewise, empirical results show that deeper networks can achieve higher accuracy (He et al., 2016b; Urban et al., 2016) .",
"Deeper network means higher consumption of GPU memory.",
"Secondly, He et al. (2019) shows that bigger input batch size can speed up the training process and achieve higher accuracy.",
"However, a bigger input batch size requires more GPU memory to store intermediate variables.",
"We want more GPU memory to get better performance.",
"The rationale to utilize CPU memory by offloading, and later prefetching variables from it is twofold.",
"Firstly, the size of the CPU memory is usually bigger than that of GPU memory.",
"If we do not use variable swapping, all the tensors will stay in GPU memory.",
"Figure 1 shows the details of variable swapping.",
"Secondly, due to the availability of the GPU direct memory access (DMA) engines, which can overlap data transfers with kernel execution.",
"More specifically, a GPU engine is an independent unit which can operate or be scheduled in parallel with other engines.",
"DMA engines control data transfers, and kernel engines can execute different layer functions of DNNs.",
"Hence, in the ideal case, we can completely overlap DNNs training with variable swapping.",
"Therefore, variable swapping is efficient.",
"Regarding recomputation, some feature maps are not stored in GPU memory in forward propagation, but the feature maps are gotten by running forward functions in backpropagation, as shown in Figure 2 .",
"Why do we combine swapping with recomputation?",
"Because recomputation uses GPU computing engines to reduce memory usage, and variable swapping uses DMA engines to save memory.",
"Different engines can run parallelly.",
"If we execute recomputation during data transfers, we will not waste computing engines or DMA engines.",
"It is hard to decide which variables should be swapped or recomputed.",
"Different DNNs have different structures.",
"Networks have thousands of variables during training, so it is intractable to enumerate the search space exhaustively.",
"Some existing works use heuristic search algorithms for recompu- Figure 1 : The upper graph shows GPU operations in a standard neural network in a time sequence.",
"The lower one shows how to add variable swapping operations.",
"The nodes in the same column represent they occur at the same time.",
"We copy X 0 into CPU memory while reading X 0 .",
"After data transfer and reading, we f ree X 0 from GPU memory.",
"Before using X 0 again, we allocate space for X 0 and transfer it back to GPU memory.",
"Figure 2: If we do not store X 1 in the memory in the forward propagation, we need to execute the layer 0 forward function again to get X 1 for the layer 1 backward function.",
"tation or swapping with limited information from computational graphs.",
"For example, they do not consider the time cost of recomputing different layers or swapping different variables.",
"Additionally, they do not make plans for recomputation during swapping in order to increase GPU utilization.",
"Our work utilizes more information from computational graphs than theirs and makes plans automatically for users.",
"The contribution of our paper is that we propose a DQN algorithm to make plans for swapping and recomputation to reduce memory usage of DNNs.",
"Users only need to set memory usage limits and do not require background knowledge on DNNs.",
"Additionally, the variable swapping and recomputation will not decrease the accuracy of networks.",
"In this paper, we propose a DQN to devise plans for variable swapping and recomputation to reduce memory usage.",
"Our work can work well with different memory limits.",
"Our method provides plans automatically for users.",
"They only need to set a memory limit and do not require background knowledge on DNN or machine learning algorithm.",
"Our method can work well for different network structures such as ResNet, VGG, K-means, SD ResNet, and LSTM.",
"Besides, the variable swapping and recomputation do not decrease the accuracy of networks."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.07407406717538834,
0.699999988079071,
0.3333333134651184,
0,
0.13793103396892548,
0.12903225421905518,
0.20689654350280762,
0.2142857164144516,
0.14814814925193787,
0.057971011847257614,
0,
0.0833333283662796,
0.10810810327529907,
0.19999998807907104,
0.23999999463558197,
0.1875,
0.1428571343421936,
0.19354838132858276,
0.25,
0.1666666567325592,
0.0555555522441864,
0.06666666269302368,
0.19999998807907104,
0.190476194024086,
0.09999999403953552,
0.08695651590824127,
0.4516128897666931,
0,
0.06666666269302368,
0.0714285671710968,
0,
0.12121211737394333,
0.04878048226237297,
0.23076923191547394,
0.07407406717538834,
0.1599999964237213,
0.13793103396892548,
0.1875,
0.14999999105930328,
0.07999999821186066,
0.1875,
0.1875,
0.0624999962747097,
0.4615384638309479,
0.1875,
0.3571428656578064,
0.5294117331504822,
0.0833333283662796,
0,
0.3333333432674408,
0.060606054961681366,
0.3571428656578064
] | BJxg7eHYvB | true | [
"We propose a reinforcement learning based variable swapping and recomputation algorithm to reduce the memory cost."
] |
[
"In vanilla backpropagation (VBP), activation function matters considerably in terms of non-linearity and differentiability.\n",
"Vanishing gradient has been an important problem related to the bad choice of activation function in deep learning (DL).\n",
"This work shows that a differentiable activation function is not necessary any more for error backpropagation. \n",
"The derivative of the activation function can be replaced by an iterative temporal differencing (ITD) using fixed random feedback weight alignment (FBA).\n",
"Using FBA with ITD, we can transform the VBP into a more biologically plausible approach for learning deep neural network architectures.\n",
"We don't claim that ITD works completely the same as the spike-time dependent plasticity (STDP) in our brain but this work can be a step toward the integration of STDP-based error backpropagation in deep learning.",
"VBP was proposed around 1987 BID10 .",
"Almost at the same time, biologicallyinspired convolutional networks was also introduced as well using VBP BID5 .",
"Deep learning (DL) was introduced as an approach to learn deep neural network architecture using VBP BID5 ; BID4 .",
"Extremely deep networks learning reached 152 layers of representation with residual and highway networks BID3 ; BID13 .",
"Deep reinforcement learning was successfully implemented and applied which was mimicking the dopamine effect in our brain for self-supervised and unsupervised learning BID11 BID9 BID8 .",
"Hierarchical convolutional neural network have been biologically inspired by our visual cortex Hubel & Wiesel (1959) ; BID1 BID0 BID14 .",
"The discovery of fixed random synaptic feedback weights alignments (FBA) in error backpropagation for deep learning started a new quest of finding the biological version of VBP since it solves the symmetrical synaptic weights problem in backprop.",
"Recently, spiketime dependent plasticity was the important issue with backprop.",
"One of the works in this direction, highly inspired from Hinton's recirculation idea Hinton & McClelland (1988) , is deep learning using segregated dendrites BID2 .",
"Apical dendrites as the segregated synaptic feedback are claimed to be capable of modeling STDP into the backprop successfully BID2 .",
"In this paper, we took one more step toward a more biologically plausible backpropagation for deep learning.",
"After hierarchical convolutional neural network and fixed random synaptic feedback alignment, we believe iterative temporal differencing is a way toward integrating STDP learning process in the brain.",
"We believe the next steps should be to investigate more into the STDP processes details in learning, dopamine-based unsupervised learning, and generating Poisson-based spikes."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.15789473056793213,
0.11428570747375488,
0.2926829159259796,
0.19999998807907104,
0.2800000011920929,
0,
0,
0.10810810327529907,
0.1764705777168274,
0.14999999105930328,
0,
0.3265306055545807,
0.2142857164144516,
0.1395348757505417,
0.05405404791235924,
0.23529411852359772,
0.31111109256744385,
0.04999999329447746
] | BkpXqwUTZ | true | [
"Iterative temporal differencing with fixed random feedback alignment support spike-time dependent plasticity in vanilla backpropagation for deep learning."
] |
[
"Inspired by the recent successes of deep generative models for Text-To-Speech (TTS) such as WaveNet (van den Oord et al., 2016) and Tacotron (Wang et al., 2017), this article proposes the use of a deep generative model tailored for Automatic Speech Recognition (ASR) as the primary acoustic model (AM) for an overall recognition system with a separate language model (LM).",
"Two dimensions of depth are considered: (1) the use of mixture density networks, both autoregressive and non-autoregressive, to generate density functions capable of modeling acoustic input sequences with much more powerful conditioning than the first-generation generative models for ASR, Gaussian Mixture Models / Hidden Markov Models (GMM/HMMs), and (2) the use of standard LSTMs, in the spirit of the original tandem approach, to produce discriminative feature vectors for generative modeling.",
"Combining mixture density networks and deep discriminative features leads to a novel dual-stack LSTM architecture directly related to the RNN Transducer (Graves, 2012), but with the explicit functional form of a density, and combining naturally with a separate language model, using Bayes rule.",
"The generative models discussed here are compared experimentally in terms of log-likelihoods and frame accuracies."
] | [
1,
0,
0,
0
] | [
0.3055555522441864,
0.1794871687889099,
0.2950819730758667,
0.10256409645080566
] | S1fbqB0noQ | false | [
"This paper proposes the use of a deep generative acoustic model for automatic speech recognition, combining naturally with other deep sequence-to-sequence modules using Bayes' rule."
] |
[
"Recent studies show that widely used Deep neural networks (DNNs) are vulnerable to the carefully crafted adversarial examples.\n",
"Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations.\n",
"Different defense methods have also been explored to defend against such adversarial attacks. \n",
"While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works.\n",
"Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.",
"This potentially provides a new direction in adversarial example generation and the design of corresponding defenses.\n",
"We visualize the spatial transformation based perturbation for different examples and show that our technique\n",
"can produce realistic adversarial examples with smooth image deformation.\n",
"Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.",
"Deep neural networks (DNNs) have demonstrated their outstanding performance in different domains, ranging from image processing BID18 BID10 ), text analysis BID3 to speech recognition .",
"Though deep networks have exhibited high performance for these tasks, recently they have been shown to be particularly vulnerable to adversarial perturbations added to the input images BID34 BID7 .",
"These perturbed instances are called adversarial examples, which can lead to undesirable consequences in many practical applications based on DNNs.",
"For example, adversarial examples can be used to subvert malware detection, fraud detection, or even potentially mislead autonomous navigation systems BID30 BID5 BID8 and therefore pose security risks when applied to security-related applications.",
"A comprehensive study about adversarial examples is required to motivate effective defenses.",
"Different methods have been proposed to generate adversarial examples such as fast gradient sign methods (FGSM) BID7 , which can produce adversarial instances rapidly, and optimization-based methods (C&W) BID1 , which search for adversarial examples with smaller magnitude of perturbation.One important criterion for adversarial examples is that the perturbed images should \"look like\" the original instances.",
"The traditional attack strategies adopt L 2 (or other L p ) norm distance as a perceptual similarity metric to evaluate the distortion BID9 .",
"However, this is not an ideal metric BID16 BID14 , as L 2 similarity is sensitive to lighting and viewpoint change of a pictured object.",
"For instance, an image can be shifted by one pixel, which will lead to large L 2 distance, while the translated image actually appear \"the same\" to human perception.",
"Motivated by this example, in this paper we aim to look for other types of adversarial examples and propose to create perceptually realistic examples by changing the positions of pixels instead of directly manipulating existing pixel values.",
"This has been shown to better preserve the identity and structure of the original image BID44 .",
"Thus, the proposed spatially transformed adversarial example optimization method (stAdv) can keep adversarial examples less distinguishable from real instances (such examples can be found in Figure 3 ).Various",
"defense methods have also been proposed to defend against adversarial examples. Adversarial",
"training based methods have so far achieved the most promising results BID7 BID38 BID28 . They have demonstrated",
"the robustness of improved deep networks under certain constraints. However, the spatially",
"transformed adversarial examples are generated through a rather different principle, whereby what is being minimized is the local geometric distortion rather than the L p pixel error between the adversarial and original instances. Thus, the previous adversarial",
"training based defense method may appear less effective against this new attack given the fact that these examples generated by stAdv have never been seen before. This opens a new challenge about",
"how to defend against such attacks, as well as other attacks that are not based on direct pixel value manipulation.We visualize the spatial deformation generated by stAdv; it is seen to be locally smooth and virtually imperceptible to the human eye. In addition, to better understand",
"the properties of deep neural networks on different adversarial examples, we provide visualizations of the attention of the DNN given adversarial examples generated by different attack algorithms. We find that the spatial transformation",
"based attack is more resilient across different defense models, including adversarially trained robust models.Our contributions are summarized as follows:• We propose to generate adversarial examples based on spatial transformation instead of direct manipulation of the pixel values, and we show realistic and effective adversarial examples on the MNIST, CIFAR-10, and ImageNet datasets.• We provide visualizations of optimized",
"transformations and show that such geometric changes are small and locally smooth, leading to high perceptual quality.• We empirically show that, compared to other",
"attacks, adversarial examples generated by stAdv are more difficult to detect with current defense systems.• Finally, we visualize the attention maps of",
"deep networks on different adversarial examples and demonstrate that adversarial examples based on stAdv can more consistently mislead the adversarial trained robust deep networks compared to other existing attack methods.",
"Different from the previous works that generate adversarial examples by directly manipulating pixel values, in this work we propose a new type of perturbation based on spatial transformation, which aims to preserve high perceptual quality for adversarial examples.",
"We have shown that adversarial examples generated by stAdv are more difficult for humans to distinguish from original instances.",
"We also analyze the attack success rate of these examples under existing defense methods and demonstrate they are harder to defend against, which opens new directions for developing more robust defense algorithms.",
"Finally, we visualize the attention regions of DNNs on our adversarial examples to better understand this new attack.",
"A MODEL ARCHITECTURES Here we evaluated adversarial examples generated by stAdv against the 3 × 3 average pooling restoration mechanism suggested in .",
"TAB5 shows the classification accuracy of recovered images after performing 3 × 3 average pooling on different models.",
"ImageNet-compatible.",
"We use benign images from the DEV set from the NIPS 2017 targeted adversarial attack competition.",
"4 This competition provided a dataset compatible with ImageNet and containing target labels for a targeted attack.",
"We generate targeted adversarial examples for the target inception_v3 model.",
"In Figure 10 below, we show the original images on the left with the correct label, and we show adversarial examples generated by stAdv on the right with the target label."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19512194395065308,
0.24390242993831635,
0.2222222238779068,
0.19672130048274994,
0.24137930572032928,
0.20512819290161133,
0.2702702581882477,
0.25,
0.1428571343421936,
0.04255318641662598,
0.1249999925494194,
0.2380952388048172,
0.11320754140615463,
0.1764705777168274,
0.1492537260055542,
0.08888888359069824,
0.08695651590824127,
0.08163265138864517,
0.30188679695129395,
0.05405404791235924,
0.0833333283662796,
0.1764705777168274,
0.052631575614213943,
0,
0.11538460850715637,
0.15094339847564697,
0.1875,
0.2083333283662796,
0.2535211145877838,
0.13636362552642822,
0.13333332538604736,
0.30434781312942505,
0.41379308700561523,
0.24390242993831635,
0.2641509473323822,
0.25,
0.09302324801683426,
0.05128204822540283,
0.1111111044883728,
0.10526315122842789,
0.25,
0.13333332538604736
] | HyydRMZC- | true | [
"We propose a new approach for generating adversarial examples based on spatial transformation, which produces perceptually realistic examples compared to existing attacks. "
] |
[
"Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either\n",
"discrete or continuous space.",
"Motivated by the project of design Game AI for King of Glory\n",
"(KOG), one the world’s most popular mobile game, we consider the scenario with the discrete-continuous\n",
"hybrid action space.",
"To directly apply existing DLR frameworks, existing approaches\n",
"either approximate the hybrid space by a discrete set or relaxing it into a continuous set, which is\n",
"usually less efficient and robust.",
"In this paper, we propose a parametrized deep Q-network (P-DQN)\n",
"for the hybrid action space without approximation or relaxation.",
"Our algorithm combines DQN and\n",
"DDPG and can be viewed as an extension of the DQN to hybrid actions.",
"The empirical study on the\n",
"game KOG validates the efficiency and effectiveness of our method.",
"In recent years, the exciting field of deep reinforcement learning (DRL) have witnessed striking empirical achievements in complicated sequential decision making problems that are once believed unsolvable.",
"One active area of the application of DRL methods is to design artificial intelligence (AI) for games.",
"The success of DRL in the game of Go provides a promising methodology for game AI.",
"In addition to the game of Go, DRL has been widely used in other games such as Atari BID19 , Robot Soccer BID8 BID17 , and Torcs ) to achieve super-human performances.However, most existing DRL methods only handle the environments with actions chosen from a set which is either finite and discrete (e.g., Go and Atari) or continuous (e.g. MuJoCo and Torcs) For example, the algorithms for discrete action space include deep Q-network (DQN) BID18 , Double DQN (Hasselt et al., 2016) , A3C BID20 ; the algorithms for continuous action space include deterministic policy gradients (DPG) BID29 and its deep version DDPG .Motivated",
"by the applications in Real Time Strategic (RTS) games, we consider the reinforcement learning problem with a discrete-continuous hybrid action space. Different",
"from completely discrete or continuous actions that are widely studied in the existing literature, in our setting, the action is defined by the following hierarchical structure. We first",
"choose a high level action k from a discrete set {1, 2, · · · , K}; upon choosing k, we further choose a low level parameter x k ∈ X k which is associated with the k-th high level action. Here X k",
"is a continuous set for all k ∈ {1, . . . , K}.1 Therefore",
", we focus on a discrete-continuous hybrid action space A = (k, x k ) x k ∈ X k for all 1 ≤ k ≤ K .To apply",
"existing DRL approaches on this hybrid action space, two straightforward ideas include:• Approximate A by an finite discrete set. We could",
"approximate each X k by a discrete subset, which, however, might lose the natural structure of X k . Moreover",
", when X k is a region in the Euclidean space, establishing a good approximation usually requires a huge number discrete actions.• Relax",
"A into a continuous set. To apply",
"existing DRL framework with continuous action spaces, BID8 define the following approximate space DISPLAYFORM0 where F k ⊆ R. Here f 1 , f 2 , . . . , f K is used to select the discrete action either deterministically (by picking arg max i f i ) or randomly (with probability softmax(f )). Compared",
"with the original action space A, A might significantly increases the complexity of the action space. Furthermore",
", continuous relaxation can also lead to unnecessary confusion by over-parametrization. For example",
", (1, 0, · · · , 0, x 1 , x 2 , x 3 , · · · , x K ) ∈ A and (1, 0, · · · , 0, x 1 , x 2 , x 3 , · · · , x K ) ∈ A indeed represent the same action (1, x 1 ) in the original space A.In this paper, we propose a novel DRL framework, namely parametrized deep Q-network learning (P-DQN), which directly work on the discrete-continuous hybrid action space without approximation or relaxation. Our method",
"can be viewed as an extension of the famous DQN algorithm to hybrid action spaces. Similar to",
"deterministic policy gradient methods, to handle the continuous parameters within actions, we first define a deterministic function which maps the state and each discrete action to its corresponding continuous parameter. Then we define",
"a action-value function which maps the state and finite hybrid actions to real values, where the continuous parameters are obtained from the deterministic function in the first step. With the merits",
"of both DQN and DDPG, we expect our algorithm to find the optimal discrete action as well as avoid exhaustive search over continuous action parameters. To evaluate the",
"empirical performances, we apply our algorithm to King of Glory (KOG), which is one of the most popular online games worldwide, with over 200 million active users per month. KOG is a multi-agent",
"online battle arena (MOBA) game on mobile devices, which requires players to take hybrid actions to interact with other players in real-time. Empirical study indicates",
"that P-DQN is more efficient and robust than BID8 's method that relaxes A into a continuous set and applies DDPG.",
"Previous deep reinforcement learning algorithms mostly can work with either discrete or continuous action space.",
"In this work, we consider the scenario with discrete-continuous hybrid action space.",
"In contrast of existing approaches of approximating the hybrid space by a discrete set or relaxing it into a continuous set, we propose the parameterized deep Q-network (P-DQN), which extends the classical DQN with deterministic policy for the continuous part of actions.",
"Empirical experiments of training AI for King of Glory, one of the most popular games, demonstrate the efficiency and effectiveness of P-DQN."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.0714285671710968,
0.10526315122842789,
0.07692307233810425,
0.2142857164144516,
0.3333333432674408,
0,
0.25,
0.09999999403953552,
0,
0.3333333432674408,
0.29999998211860657,
0.41379308700561523,
0.09999999403953552,
0.1599999964237213,
0.0476190447807312,
0.19354838132858276,
0.06896550953388214,
0.17999999225139618,
0.3333333432674408,
0.14999999105930328,
0.17391303181648254,
0.06896550953388214,
0.25641024112701416,
0.1666666567325592,
0.0624999962747097,
0.10810810327529907,
0.09090908616781235,
0.19672130048274994,
0.3571428656578064,
0.0714285671710968,
0.2222222238779068,
0.3870967626571655,
0.1904761791229248,
0.19512194395065308,
0.29999998211860657,
0.21739129722118378,
0.15789473056793213,
0.23529411852359772,
0.19999998807907104,
0.4444444477558136,
0.20000000298023224,
0.12121211737394333
] | Sy_MK3lAZ | true | [
"A DQN and DDPG hybrid algorithm is proposed to deal with the discrete-continuous hybrid action space."
] |
[
"Over the past decade, knowledge graphs became popular for capturing structured domain knowledge. \n",
"Relational learning models enable the prediction of missing links inside knowledge graphs.",
"More specifically, latent distance approaches model the relationships among entities via a distance between latent representations.\n",
"Translating embedding models (e.g., TransE) are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns. \n",
"However, they are mostly inefficient in capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero.",
"They also lose information when learning relations with reflexive patterns since they become symmetric and transitive.\n",
"We propose the Multiple Distance Embedding model (MDE) that addresses these limitations and a framework which enables collaborative combinations of latent distance-based terms (MDE).\n",
"Our solution is based on two principles:",
"1) using limit-based loss instead of margin ranking loss and",
"2) by learning independent embedding vectors for each of terms we can collectively train and predict using contradicting distance terms.\n",
"We further demonstrate that MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns.",
"We propose MDE as a neural network model which allows us to map non-linear relations between the embedding vectors and the expected output of the score function.\n",
"Our empirical results show that MDE outperforms the state-of-the-art embedding models on several benchmark datasets.",
"While machine learning methods conventionally model functions given sample inputs and outputs, a subset of Statistical Relational Learning (SRL) (De Raedt, 2008; Nickel et al., 2015) approaches specifically aim to model \"things\" (entities) and relations between them.",
"These methods usually model human knowledge which is structured in the form of multi-relational Knowledge Graphs (KG).",
"KGs allow semantically rich queries and are used in search engines, natural language processing (NLP) and dialog systems.",
"However, they usually miss many of the true relations (West et al., 2014) , therefore, the prediction of missing links/relations in KGs is a crucial challenge for SRL approaches.",
"A KG usually consists of a set of facts.",
"A fact is a triple (head, relation, tail) where heads and tails are called entities.",
"Among the SRL models, distance-based KG embeddings are popular because of their simplicity, their low number of parameters, and their efficiency on large scale datasets.",
"Specifically, their simplicity allows integrating them into many models.",
"Previous studies have integrated them with logical rule embeddings (Guo et al., 2016) , have adopted them to encode temporal information (Jiang et al., 2016) and have applied them to find equivalent entities between multi-language datasets (Muhao et al., 2017) .",
"Soon after the introduction of the first multi-relational distance-based method TransE (Bordes et al., 2013) it was acknowledged that it is inefficient in learning of symmetric relations, since the norm of the representation vector for all the symmetric relations in the KG becomes close to zero.",
"This means the model cannot distinguish well between different symmetric relations in a KG.",
"To extend this model many variations are studied afterwards, e.g., TransH (Wang et al., 2014b) , TransR (Lin et al., 2015b) , TransD (Ji et al., 2015) , and STransE (Dat et al., 2016) .",
"Even though they solved the issue of symmetric relations, they introduced a new problem: these models were no longer efficient in learning the inversion and composition relation patterns that originally TransE could handle.",
"Besides, as noted in (Kazemi & Poole, 2018; Sun et al., 2019) , within the family of distancebased embeddings, usually reflexive relations are forced to become symmetric and transitive.",
"In this study, we take advantage of independent vector representations of vectors that enable us to view the same relations from different aspects and put forward a translation-based model that addresses these limitations and allows the learning of all three relation patterns.",
"In addition, we address the issue of the limit-based loss function in finding an optimal limit and suggest an updating limit loss function to be used complementary to the current limit-based loss function which has fixed limits.",
"Moreover, we frame our model into a neural network structure that allows it to learn non-linear patterns between embedding vectors and the expected output which substantially improves the generalization power of the model in link prediction tasks.",
"The model performs well in the empirical evaluations, improving upon the state-of-the-art results in link prediction benchmarks.",
"Since our approach involves several elements that model the relations between entities as the geometric distance of vectors from different views, we dubbed it multipledistance embeddings (MDE).",
"In this study, we showed how MDE relieves the expressiveness restrictions of the distance-based embedding models and proposed a general method to override these limitations for the older models.",
"Beside MDE and RotatE, most of the existing KG embedding approaches are unable to allow modeling of all the three relation patterns.",
"We framed MDE into a Neural Network structure and validated our contributions via both theoretical proofs and empirical results.",
"We demonstrated that with multiple views to translation embeddings and using independent vectors (that previously were suggested to cause poor performance (Trouillon et al., 2017; Kazemi & Poole, 2018) ) a model can outperform the existing state-of-the-art models for link prediction.",
"Our experimental results confirm the competitive performance of MDE and particularly MDE N N that achieves state-of-the-art MR and Hit@10 performance on all the benchmark datasets."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.07692307233810425,
0,
0,
0,
0.06451612710952759,
0.15789473056793213,
0.1904761791229248,
0.17391303181648254,
0.11764705181121826,
0.0714285671710968,
0.09999999403953552,
0.06896550953388214,
0.07999999821186066,
0.19354838132858276,
0.06451612710952759,
0.04878048226237297,
0.1818181723356247,
0.13793103396892548,
0.1666666567325592,
0,
0.045454539358615875,
0.07843136787414551,
0,
0.04878048226237297,
0.08888888359069824,
0.09302324801683426,
0.07843136787414551,
0.09756097197532654,
0.0833333283662796,
0,
0.04999999701976776,
0.14999999105930328,
0.11764705181121826,
0.1249999925494194,
0.0363636314868927,
0.17142856121063232
] | B1gOe6NKPB | true | [
"A novel method of modelling Knowledge Graphs based on Distance Embeddings and Neural Networks"
] |
[
"Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning.",
"However, it suffers from the problem of unstable training.",
"In this paper, we found that the instability is mostly due to the vanishing gradients on the generator.",
"To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN.",
"The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.",
"Generative adversarial networks (GANs) BID3 have been recently studied intensively and achieved great success in deep learning domain BID14 BID9 BID15 .",
"A typical GAN simulates a two-player minimax game, where one aims to fool the other and the overall system is finally able to achieve equilibrium.Specifically speaking, we have a generator G to generate fake data G(z) from a random variable z whose distribution density is p(z), and also we have a discriminator D(x) to discriminate the real x from the generated data G(z), where x ∼ p r (x) and p r is the distribution density of real data.",
"We optimize the two players G(z) and D(x) by solving the following minimax problem: DISPLAYFORM0 This method is so called as the original GAN BID3 .",
"After this, many different types of GANs have been proposed, e.g., least-squared GAN BID9 , cat-GAN BID15 , W-GAN , Improved GAN BID14 , so on and so forth, focusing on improving the performance of GANs and extending the GAN idea to other application scenarios.For instance, the original GAN is trained in a completely unsupervised learning way BID3 , along with many variants, such as LS-GAN and cat-GAN.",
"It was later extended to semi-supervised learning.",
"In BID14 , Salimans et al. proposed the Improved GAN to enable generation and classification of data simultaneously.",
"In BID7 , Li et al. extended this method to consider conditional data generation.Another issue regarding the unsupervised learning of GANs is the lack of training stability in the original GANs, mostly because of dimension mismatch .",
"A lot of efforts have been dedicated to solve this issue.",
"For instance, in , the authors theoretically found that the instability problem and dimension mismatch of the unsupervised learning GAN was due to the maxing out of Jensen-Shannon divergence between the true and fake distribution and therefore proposed using the Wasserstein distance to train GAN.",
"However, to calculate the Wasserstein distance, the network functions are required to be 1-Lipschitz, which was simply implemented by clipping the weights of the networks in .",
"Later, Gulrajani et.",
"al. improved it by using gradient penalty BID4 .",
"Besides them, the same issue was also addressed from different perspectives.",
"In BID13 , Roth et al. used gradient norm-based regularization to smooth the f-divergence objective function so as to reduce dimension mismatch.",
"However, the method could not directly work on f-divergence, which was intractable to solve, but they instead optimized its variational lower bound.",
"Its converging rate is still an open question and its computational complexity may be high.",
"On the other hand, there were also some efforts to solve the issue of mode collapse, so as to try to stabilize the training of GANs from another perspective, including the unrolled method in BID10 , mode regularization with VAEGAN (Che et al., 2016) , and variance regularization with bi-modal Gaussian distributions BID5 .",
"However, all these methods were investigated in the context of unsupervised learning.",
"Instability issue for semi-supervised GAN is still open.In this work, we focus on investigating the training stability issue for semi-supervised GAN.",
"To the authors' best knowledge, it is the first work to investigate the training instability for semi-supervised GANs, though some were done for unsupervised GANs as aforementioned.",
"The instability issue of the semi-supervised GAN BID14 is first identified and analyzed from a theoretical perspective.",
"We prove that this issue is in fact caused by the vanishing gradients theorem on the generator.",
"We thus propose to solve this issue by using collaborative training to improve its training stability.",
"We theoretically show that the proposed method does not have vanishing gradients on the generator, such that its training stability is improved.",
"Besides the theoretical contribution, we also show by experiments that the proposed method can indeed improve the training stability of the Improved GAN, and at the same time achieve comparable classification accuracy.It is also worth to note that BID7 proposed the Triple GAN that also possessed two discriminators.",
"However, its purpose is focused on using conditional probability training (the original GAN uses unconditional probability) based on data labels to improve the training of GAN, but not on solving the instability issue.",
"Therefore, the question of instability for the Triple GAN is still unclear.",
"More importantly, the method, collaborative training, proposed for exploring the data labels with only unconditional probability in this paper , can also be applied to the Triple GAN to improve its training stability, in the case of conditional probability case.The rest of the paper is organized as follows: in Section 2, we present the generator vanishing gradient theorem of the Improved GAN.",
"In Section 3, we propose a new method, collaborative training Wasserstein GAN (CTW-GAN) and prove its nonvanishing gradient theorem.",
"In Section 4, we present our experimental results and finally give our conclusion in Section 5.",
"In the paper, we study the training instability issue of semi-supervised improved GAN.",
"We have found that the training instability is mainly due to the vanishing gradients on the generator of the Improved GAN.",
"In order to make the training of the Improved GAN more stable, we propose a collaborative training method to combine Wasserstein GAN with the semi-supervised improved GAN.",
"Both theoretical analysis and experimental results on MNIST and CIFAR-10 have shown the effectiveness of the proposed method to improve training stability of the Improved GAN.",
"In addition, it also achieves the classification accuracy comparable to the original Improved GAN."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.06896550953388214,
0.10526315122842789,
0,
0.1249999925494194,
0,
0.06451612710952759,
0.03124999813735485,
0,
0.0634920597076416,
0,
0.0714285671710968,
0.04651162400841713,
0.0952380895614624,
0.04444444179534912,
0.0624999962747097,
0,
0,
0,
0,
0,
0,
0.07547169178724289,
0.09090908616781235,
0,
0,
0.07407406717538834,
0,
0,
0,
0.04081632196903229,
0.05128204822540283,
0.0952380895614624,
0.06896551698446274,
0,
0,
0.09090908616781235,
0.0714285671710968,
0.12903225421905518,
0.0624999962747097,
0
] | ry4SNTe0- | true | [
"Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training"
] |
[
"It has long been known that a single-layer fully-connected neural network with an i.i.d.",
"prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. ",
"This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP.",
"Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework.",
"As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.\n\n",
"In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function.",
"We further develop a computationally efficient pipeline to compute this covariance function.",
"We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. ",
"We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error.",
"We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks.",
"Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks.",
"Deep neural networks have emerged in recent years as flexible parametric models which can fit complex patterns in data.",
"As a contrasting approach, Gaussian processes have long served as a traditional nonparametric tool for modeling.",
"An equivalence between these two approaches was derived in BID17 , for the case of one layer networks in the limit of infinite width.",
"Neal (1994a) further suggested that a similar correspondence might hold for deeper networks.Consider a deep fully-connected neural network with i.i.d. random parameters.",
"Each scalar output of the network, an affine transformation of the final hidden layer, will be a sum of i.i.d. terms.",
"As we will discuss in detail below, in the limit of infinite width the Central Limit Theorem 1 implies that the function computed by the neural network (NN) is a function drawn from a Gaussian process (GP).",
"In the case of single hidden-layer networks, the form of the kernel of this GP is well known BID17 BID25 ).This",
"correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. prior over weights and biases can be replaced with a corresponding GP prior over functions. As noted",
"by BID25 , this substitution enables exact Bayesian inference for regression using neural networks. The computation",
"requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations.In light of the resurgence in popularity of neural networks, it is timely to revisit this line of work. We delineate the",
"correspondence between deep and wide neural networks and GPs and utilize it for Bayesian training of neural networks on regression tasks.",
"By harnessing the limit of infinite width, we have specified a correspondence between priors on deep neural networks and Gaussian processes whose kernel function is constructed in a compositional, but fully deterministic and differentiable, manner.",
"Use of a GP prior on functions enables exact Bayesian inference for regression from matrix computations, and hence we are able to obtain predictions and uncertainty estimates from deep neural networks without stochastic gradient-based training.",
"The performance is competitive with the best neural networks (within specified class of fully-connected models) trained on the same regression task under similar hyperparameter settings.",
"While we were able to run experiments for somewhat large datasets (sizes of 50k), we intend to look into scalability for larger learning tasks, possibly harnessing recent progress in scalable GPs BID20 ; BID12 ).",
"In our experiments, we observed the performance of the optimized neural network appears to approach that of the GP computation with increasing width.",
"Whether gradient-based stochastic optimization implements an approximate Bayesian computation is an interesting question BID16 .",
"Further investigation is needed to determine if SGD does approximately implement Bayesian inference under the conditions typically employed in practice.Additionally, the NNGP provides explicit estimates of uncertainty.",
"This may be useful in predicting model failure in critical applications of deep learning, or for active learning tasks where it can be used to identify the best datapoints to hand label.A DRAWS FROM AN NNGP PRIOR FIG5 illustrates the nature of the GP prior for the ReLU nonlinearity by depicting samples of 1D functions z(x) drawn from a ReLU GP, GP(0, K L ), with fixed depth L = 10 and (σ Figure 6: The angular structure of the kernel and its evolution with depth.",
"Also illustrated is the good agreement between the kernel computed using the methods of Section 2.5 (blue, starred) and the analytic form of the kernel (red).",
"The depth l in K l runs from l = 0, ..., 9 (flattened curves for increasing l), and (σ In the main text, we noted that the recurrence relation Equation 5 can be computed analytically for certain nonlinearities.",
"In particular, this was computed in BID2 for polynomial rectified nonlinearities.",
"For ReLU, the result including the weight and bias variance is DISPLAYFORM0 To illustrate the angular form of K l (x, x ) and its evolution with l, in Figure 6 we plot K l (θ) for the ReLU nonlinearity, where θ is the angle between x and x with norms such that ||x|| 2 = ||x || 2 = d in .",
"We observe a flattening of the angular structure with increase in depth l, as predicted from the understanding in Section 3.2.",
"Simultaneously, the figure also illustrates the good agreement between the kernel computed using the numerical implementation of Section 2.5 (blue, starred) and the analytic arccosine kernel, Equation 11 (red), for a particular choice of hyperparameters (σ"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.06451612710952759,
0.060606054961681366,
0.06451612710952759,
0.04878048226237297,
0.0624999962747097,
0.1666666567325592,
0.25806450843811035,
0.05714285373687744,
0.20512820780277252,
0.11428570747375488,
0.06666666269302368,
0,
0.060606054961681366,
0.11428570747375488,
0,
0,
0.06666666269302368,
0.08695651590824127,
0.1428571343421936,
0.17777776718139648,
0.20689654350280762,
0.08888888359069824,
0.2666666507720947,
0.0555555522441864,
0.045454543083906174,
0.0624999962747097,
0,
0.05128204822540283,
0.048192769289016724,
0.060606054961681366,
0,
0,
0,
0.0624999962747097,
0.045454543083906174
] | B1EA-M-0Z | true | [
"We show how to make predictions using deep networks, without training deep networks."
] |
[
"Search space is a key consideration for neural architecture search.",
"Recently, Xie et al. (2019a) found that randomly generated networks from the same distribution perform similarly, which suggest we should search for random graph distributions instead of graphs.",
"We propose graphon as a new search space.",
"A graphon is the limit of Cauchy sequence of graphs and a scale-free probabilistic distribution, from which graphs of different number of vertices can be drawn.",
"This property enables us to perform NAS using fast, low-capacity models and scale the found models up when necessary.",
"We develop an algorithm for NAS in the space of graphons and empirically demonstrate that it can find stage-wise graphs that outperform DenseNet and other baselines on ImageNet.",
"Neural architecture search (NAS) aims to automate the discovery of neural architectures with high performance and low cost.",
"Of primary concern to NAS is the design of the search space [23] , which needs to balance multiple considerations.",
"For instance, too small a space would exclude many good solutions, whereas a space that is too large would be prohibitively expensive to search through.",
"An ideal space should have a one-to-one mapping to solutions and sufficiently smooth in order to accelerate the search.",
"A common technique [37, 17, 35, 19, 24, 34] to keep the search space manageable is to search for a small cell structure, typically containing about 10 operations with 1-2 input sources each.",
"When needed, identical cells are stacked to form a large network.",
"This technique allows cells found on, for instance, CIFAR-10 to work on ImageNet.",
"Though this practice is effective, it cannot be used to optimize the overall network structure.",
"In both manual and automatic network design, the overall network structure is commonly divided into several stages, where one stage operates on one spatial resolution and contains several nearidentical layers or multi-layer structures (i.e., cells).",
"For example, ResNet-34 [11] contains 4 stages with 6, 8, 12 , and 6 convolutional layers, respectively.",
"DenseNet-121 [12] contains 4 stages with 6, 12, 24, and 16 two-layer cells.",
"AmoebaNet-A [24] has 3 stages, within each 6 cells are arranged sequentially.",
"Among cells in the same stage, most connections are sequential with skip connections occasionally used.",
"As an exception, DenseNet introduces connections between every pairs of cells within the same stage.",
"Here we emphasize the difference between a stage and a cell.",
"A cell typically contains about 10 operations, each taking input from 1-2 other operations.",
"In comparison, a stage can contain 60 or more operations organized in repeated patterns and the connections can be arbitrary.",
"A network usually contains only 3-4 stages but many more cells.",
"In this paper, we focus on the network organization at the level of stage rather than cell.",
"[32] recently showed that the stage structure can be sampled from probabilistic distributions of graphs, including Erdős-Rényi (ER) (1960), Watts-Strogatz (WS) (1998), and Barabási-Albert (BA) (1999), yielding high-performing networks with low in-group variance.",
"This finding suggests the random graph distribution, rather than the exact graph, is the main causal factor behind network performance.",
"Thus, searching for the graph is likely not as efficient as searching for the random (c) m0 = m = 100, n = 1000 Figure 1 : Three adjacency matrices of graphs generated by the Barabási-Albert model with m = m 0 = 0.1n.",
"A black dot at location (i, j) denotes an edge from node i to node j.",
"The sequence of matrices converges to its limit, the graphon, as n → ∞.",
"Figure 2: Graphons for common random graph models.",
"Different shades denote different probabilities (e.g., p and 1 − p).",
"The Erdős-Rényi model has two parameters: number of nodes n and probability p.",
"The Watts-Strogatz (WS) model has three parameters: number of nodes n, replacement probability p, and initial neighborhood width k.",
"Technically, the WS model has a constant number of edges, violating exchangeability for random graphs; graphs sampled from (b) converges in probability to the same number of edges as n increases.",
"graph distribution.",
"The parameter space of random graph distributions may appear to be a good search space.",
"We propose a different search space, the space of graphons [20] , and argue for its superiority as an NAS search space.",
"Formally introduced in Section 3, a graphon is a measurable function defined on [0, 1] 2 → [0, 1] and a probabilistic distribution from which graphs can be drawn.",
"Graphons are limit objects of Cauchy sequences of finite graphs under the cut distance metric.",
"Figure 1 visualizes three adjacency matrices randomly generated by the Barabási-Albert (BA) model with increasing numbers of nodes.",
"It is easy to see that, as the number of nodes increases, the sequence of random graphs converges to its limit, a graphon.",
"The BA model starts with an initial seed graph with m 0 nodes and arbitrary interconnections.",
"Here we choose a complete graph as the seed.",
"It sequentially adds new nodes until there are n nodes in the graph.",
"For every new node v new , m edges are added, with the probability of adding an edge between v new and the node v i being proportional to the degree of v i .",
"In Figure 1 , we let m = m 0 = 0.1n.",
"The fact that different parameterization results in the same adjacency matrix suggests that directly searching in the parameter space will revisit the same configuration and is less efficient than searching in the graphon space.",
"Additionally, graphon provides a unified and more expressive space than common random graph models.",
"Figure 2 illustrates the graphons for the WS and the ER models.",
"We can observe that these random models only capture a small proportion of all possible graphons.",
"The graphon space allows new possibilities such as interpolation or striped combination of different random graph models.",
"Finally, graphon is scale-free, so we should be able to sample an arbitrary-sized stage-wise architecture with identical layers (or cells) from a graphon.",
"This allows us to perform expensive NAS on small datasets (e.g., CIFAR-10) using low-capacity models and obtain large stage-wise graphs to build large models.",
"By relating graphon theory to NAS, we provide theoretically motivated techniques that scale up stage-wise graphs, which are shown to be effective in practice.",
"Our experiments aim to fairly compare the stage-wise graphs found by our method against DenseNet and the WS random graph model by keeping other network structures and other hyperparameters constant.",
"The results indicate that the graphs found outperform the baselines consistently across a range of model capacities.",
"The contribution of this paper revolves around building a solid connection between theory and practice.",
"More specifically,",
"• We propose graphon, a generalization of random graphs, as a search space for stage-wise neural architecture that consists of connections among mostly identical units.",
"• We develop an operationalization of the theory on graphon in the representation, scaling and search of neural stage-wise graphs that perform well in fair comparisons.",
"We attribute the performance differences to the stage-wise graphs, since we have strictly applied the same setting, including the global network structure, the cell structure, and hyperparameter settings.",
"The first conclusion we draw is the effectiveness of the theoretically motivated scaling technique for graphon.",
"We scaled up the 11-node graph found by the search to graphs with up to 64 nodes in the experiments.",
"We also scaled the WS(4, 0.25) network, initially defined for 32 nodes in [32] , to 64 nodes in the DenseNet-264 group.",
"The experiments show that after scaling, the relative rankings of these methods are maintained, suggesting that the proposed scaling technique incurs no performance loss.",
"Second, we observe the standard deviations for most methods are low, even though they edge a bit higher for ImageNet V2 where model selection has been carried out.",
"This is consistent with the findings of [32] and reaffirms that searching for random graphs is a valid approach for NAS.",
"Finally, we emphasize that these results are created for the purpose of fair comparisons and not for showcasing the best possible performance.",
"Our goal is to show that the graphon space and the associated cut distance metric provide a feasible approach for NAS and the empirical evidences support our argument.",
"The design of search space is of paramount importance for neural architecture search.",
"Recent work [32] suggests that searching for random graph distributions is an effective strategy for the organization of layers within one stage.",
"Inspired by mathematical theories on graph limits, we propose a new search space based on graphons, which are the limits of Cauchy sequences of graphs based on the cut distance metric.",
"The contribution of this paper is the operationalization of the graphon theory as practical NAS solutions.",
"First, we intuitively explain why graphon is a superior search space than the parameter space of random graph models such as the Erdős-Rényi model.",
"Furthermore, we propose a technique for scaling up random graphs found by NAS to arbitrary size and present a theoretical analysis under the cut distance metric associated with graphon.",
"Finally, we describe an operational algorithm that finds stage-wise graphs that outperform manually designed DenseNet as well as randomly wired architectures in [32] .",
"Although we find neural architectures with good performance, we remind the reader that absolute performance is not the goal of this paper.",
"Future work involves expanding the work to different operators in the same stage graph.",
"This can be achieved, for example, in the same manner that digraphon accommodates different types of connections.",
"We contend that the results achieved in this paper should not be considered an upper bound, but only the beginning, of what can be achieved.",
"We believe this work opens the door toward advanced NAS algorithms in the space of graphon and the cut distance metric.",
"A The DenseNet network contains a stem network before the first stage, which contains a 3 × 3 convolution, batch normalization, ReLU and max-pooling.",
"This is followed by three stages for CIFAR-10 and four stages for ImageNet.",
"Between every two stages, there is a transition block containing a 1×1 convolution for channel reduction and a 2 × 2 average pool with stride 2 for downsampling.",
"The network ends with a 7 × 7 global average pooling and a linear layer before a softmax.",
"Figure 4 shows the cell structure for DenseNet, which contains two convolutions with different kernel size: 1 × 1 and 3 × 3.",
"Each of the two convolutions are immediately preceded by a batch normalization and ReLU.",
"Every cell in the same stage outputs c channels.",
"The input to the n th cell is the concatenation of outputs from the cell 1 to cell n − 1, for a total of c(n − 1) channels.",
"As every cell increments the number of input channels by c, it is called the growth rate.",
"Theorem 5 shows the k-fold blow-up method is a better approximation of the original graph in terms of the cut distance δ than the 1D linear interpolation.",
"But the exact k-fold blow-up is only applicable when k is an integer.",
"If a graph of size n + m(0 < m < n) is desired, we need to resort to the fractional blow-up method, which has been analyzed in Theorems 3 and 4.",
"We show that when m is 1 or n − 1, this partial blowup operation does not cause δ to change more than O(β ∆ /n).",
"However, when m is n/2, δ between the original graph and the new graph could be up to β ∆ /6.",
"This suggests that the fractional upsampling results in a graph that is similar to the original when only a small number of nodes (relative to n) is added."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6086956262588501,
0.1463414579629898,
0.2857142686843872,
0.17142856121063232,
0.06451612710952759,
0.20512820780277252,
0.25806450843811035,
0.19354838132858276,
0.29411762952804565,
0.25806450843811035,
0.22727271914482117,
0.0833333283662796,
0.07692307233810425,
0.0714285671710968,
0.08695651590824127,
0.06666666269302368,
0.07692307233810425,
0,
0,
0,
0.17391303181648254,
0,
0.1249999925494194,
0,
0,
0.08695651590824127,
0.06451612710952759,
0.08695651590824127,
0,
0,
0.0952380895614624,
0.07692307233810425,
0.07692307233810425,
0.0624999962747097,
0.09756097197532654,
0.29629629850387573,
0.3030303120613098,
0.15789473056793213,
0,
0,
0.12121211737394333,
0.0714285671710968,
0.09090908616781235,
0,
0.05405404791235924,
0,
0.15789473056793213,
0.2222222238779068,
0.17391303181648254,
0.06896550953388214,
0.06666666269302368,
0.17142856121063232,
0.0555555522441864,
0,
0.05128204822540283,
0.06896550953388214,
0.1428571343421936,
0.3333333432674408,
0.1666666567325592,
0.0555555522441864,
0.1428571343421936,
0.06896550953388214,
0.060606054961681366,
0,
0.09999999403953552,
0.25,
0.12121211737394333,
0.2631579041481018,
0.5,
0.11764705181121826,
0.1538461446762085,
0.07407406717538834,
0.22857142984867096,
0.1463414579629898,
0,
0.1818181723356247,
0,
0.06666666269302368,
0,
0.1249999925494194,
0.12121211737394333,
0.25,
0.2222222238779068,
0.1428571343421936,
0.12121211737394333,
0.14814814925193787,
0,
0.1764705777168274,
0.06896550953388214,
0.1111111044883728,
0.07999999821186066,
0.1395348757505417,
0.05128204822540283,
0.1249999925494194,
0.1111111044883728
] | SkxWnkStvS | true | [
"Graphon is a good search space for neural architecture search and empirically produces good networks."
] |
[
"Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks.",
"Despite its success as a defense mechanism, adversarial training fails to generalize well to unperturbed test set.",
"We hypothesize that this poor generalization is a consequence of adversarial training with uniform perturbation radius around every training sample.",
"Samples close to decision boundary can be morphed into a different class under a small perturbation budget, and enforcing large margins around these samples produce poor decision boundaries that generalize poorly.",
"Motivated by this hypothesis, we propose instance adaptive adversarial training -- a technique that enforces sample-specific perturbation margins around every training sample.",
"We show that using our approach, test accuracy on unperturbed samples improve with a marginal drop in robustness.",
"Extensive experiments on CIFAR-10, CIFAR-100 and Imagenet datasets demonstrate the effectiveness of our proposed approach.",
"A key challenge when deploying neural networks in safety-critical applications is their poor stability to input perturbations.",
"Extremely tiny perturbations to network inputs may be imperceptible to the human eye, and yet cause major changes to outputs.",
"One of the most effective and widely used methods for hardening networks to small perturbations is \"adversarial training\" (Madry et al., 2018) , in which a network is trained using adversarially perturbed samples with a fixed perturbation size.",
"By doing so, adversarial training typically tries to enforce that the output of a neural network remains nearly constant within an p ball of every training input.",
"Despite its ability to increase robustness, adversarial training suffers from poor accuracy on clean (natural) test inputs.",
"The drop in clean accuracy can be as high as 10% on CIFAR-10, and 15% on Imagenet (Madry et al., 2018; Xie et al., 2019) , making robust models undesirable in some industrial settings.",
"The consistently poor performance of robust models on clean data has lead to the line of thought that there may be a fundamental trade-off between robustness and accuracy (Zhang et al., 2019; Tsipras et al., 2019) , and recent theoretical results characterized this tradeoff (Fawzi et al., 2018; Shafahi et al., 2018; Mahloujifar et al., 2019) .",
"In this work, we aim to understand and optimize the tradeoff between robustness and clean accuracy.",
"More concretely, our objective is to improve the clean accuracy of adversarial training for a chosen level of adversarial robustness.",
"Our method is inspired by the observation that the constraints enforced by adversarial training are infeasible; for commonly used values of , it is not possible to achieve label consistency within an -ball of each input image because the balls around images of different classes overlap.",
"This is illustrated on the left of Figure 1 , which shows that the -ball around a \"bird\" (from the CIFAR-10 training set) contains images of class \"deer\" (that do not appear in the training set).",
"If adversarial training were successful at enforcing label stability in an = 8 ball around the \"bird\" training image, doing so would come at the unavoidable cost of misclassifying the nearby \"deer\" images that come along at test time.",
"At the same time, when training images lie far from the decision boundary (eg., the deer image on the right in Fig 1) , it is possible to enforce stability with large with no compromise in clean accuracy.",
"When adversarial training on CIFAR-10, we see that = 8 is too large for some images, causing accuracy loss, while being unnecessarily small for others, leading to sub-optimal robustness.",
"In this work, we focus on improving the robustness-accuracy tradeoff in adversarial training.",
"We first show that realizable robustness is a sample-specific attribute: samples close to the decision boundary can only achieve robustness within a small ball, as they contain samples from a different class beyond this radius.",
"On the other hand samples far from the decision boundary can be robust on a relatively large perturbation radius.",
"Motivated by this observation, we develop instance adaptive adversarial training, in which label consistency constraints are imposed within sample-specific perturbation radii, which are in-turn estimated.",
"Our proposed algorithm has empirically been shown to improve the robustness-accuracy tradeoff in CIFAR-10, CIFAR-100 and Imagenet datasets.",
"A recent paper that addresses the problem of improving natural accuracy in adversarial training is mixup adversarial training (Lamb et al., 2019) , where adversarially trained models are optimized using mixup loss instead of the standard cross-entropy loss.",
"In this paper, natural accuracy was shown to improve with no drop in adversarial robustness.",
"However, the robustness experiments were not evaluated on strong attacks (experiments were reported only on PGD-20).",
"We compare our implementation of mixup adversarial training with IAAT on stronger attacks in Table.",
"8.",
"We observe that while natural accuracy improves for mixup, drop in adversarial accuracy is much higher than IAAT."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.307692289352417,
0.1666666567325592,
0.14814814925193787,
0,
0.20689654350280762,
0,
0,
0,
0,
0.045454543083906174,
0.12121211737394333,
0.1599999964237213,
0,
0.038461536169052124,
0.08695651590824127,
0.23076923191547394,
0.125,
0.052631575614213943,
0.09756097197532654,
0.04878048598766327,
0.1666666567325592,
0.4761904776096344,
0,
0,
0.12903225421905518,
0.1538461446762085,
0.14999999105930328,
0.08695651590824127,
0,
0.17391303181648254,
0.1599999964237213
] | SyeOVTEFPH | true | [
"Instance adaptive adversarial training for improving robustness-accuracy tradeoff"
] |
[
"Machine learning models including traditional models and neural networks can be easily fooled by adversarial examples which are generated from the natural examples with small perturbations. ",
"This poses a critical challenge to machine learning security, and impedes the wide application of machine learning in many important domains such as computer vision and malware detection. ",
"Unfortunately, even state-of-the-art defense approaches such as adversarial training and defensive distillation still suffer from major limitations and can be circumvented. ",
"From a unique angle, we propose to investigate two important research questions in this paper: Are adversarial examples distinguishable from natural examples? ",
"Are adversarial examples generated by different methods distinguishable from each other? ",
"These two questions concern the distinguishability of adversarial examples. ",
"Answering them will potentially lead to a simple yet effective approach, termed as defensive distinction in this paper under the formulation of multi-label classification, for protecting against adversarial examples. ",
"We design and perform experiments using the MNIST dataset to investigate these two questions, and obtain highly positive results demonstrating the strong distinguishability of adversarial examples. ",
"We recommend that this unique defensive distinction approach should be seriously considered to complement other defense approaches.",
"Machine learning models including SVMs BID0 and especially deep neural networks BID17 can be easily fooled by adversarial examples which are generated from the natural examples with small perturbations.",
"Quite often, both machine learning models and humans can classify the natural examples such as the images of pandas with high accuracy, and humans can still classify the adversarial examples as pandas with high accuracy because the small perturbations are imperceptible; however, machine learning models are fooled to misclassify adversarial examples as some targets such as gibbons BID4 desired by attackers.This intriguing property or vulnerability of machine learning models poses a critical challenge to machine learning security, and it impedes the wide application of machine learning in many important domains such as computer vision (e.g., for self driving cars) and even in malware detection BID0 BID19 .",
"Furthermore, the discovery of new and powerful adversarial example generation methods such as BID17 BID4 BID1 BID2 BID7 BID9 BID11 BID10 BID16 goes on without cessation, indicating to a certain extent the unlimited capabilities for attackers to continuously and easily fool machine learning models.",
"On the other hand, even state-of-the-art defense approaches such as adversarial training BID17 BID4 and defensive distillation BID12 still suffer from major limitations and can be circumvented (Section 2).",
"Therefore, the unfortunate status quo is that attackers prevail over defenders.In this paper, from a unique angle, we propose to investigate two important research questions that concern the distinguishability of adversarial examples.",
"Question 1: are adversarial examples distinguishable from natural examples?",
"Question 2: are adversarial examples generated by different methods distinguishable from each other?",
"If the answer to Question 1 will be positive, i.e., given a certain classification task such as image classification, generated adversarial examples (regardless of the objects they represent) largely belong to one class while natural examples belong to the other class, then defenders can simply discard those adversarial examples to protect the machine learning models.",
"If the answer to Question 2 will be positive, i.e., adversarial examples generated by different methods clearly belong to different classes, defenders can better protect the machine learning models, for example, by incorporating the corresponding examples into the adversarial training process to enhance the robustness of the models.",
"Besides such practical benefits, answering these two questions may also help researchers further identify the nature of adversarial examples.Formally, we consider a classification problem in adversarial environments as a multi-label classification problem.",
"That is, upon seeing a new input such as an image, while the original task such as classifying the image as a certain object is important, it is also important to classify the image as a generated adversarial vs. a natural example in the first place.",
"We formulate this multi-label classification problem in Section 3 to guide us in answering the two questions, and term the corresponding defense approach as defensive distinction, which distinguishes adversarial vs. natural examples and distinguishes adversarial examples generated by different methods to protect against the attacks.We design and perform experiments using the MNIST dataset to investigate the two research questions and evaluate the effectiveness of our defensive distinction approach.",
"In our experiments, we consider multiple scenario-case combinations that defenders either know or do not know the neural network, source images, and methods as well as parameters used by attackers for generating adversarial examples.",
"We obtain highly positive answers to both research questions.",
"For example, in some typical cases, adversarial vs. natural examples can be distinguished perfectly with 100% accuracy, while adversarial examples generated by different methods can be distinguished with over 90% accuracy.",
"Our experimental results demonstrate the strong distinguishability of adversarial examples, and demonstrate the value of the defensive distinction approach.",
"We recommend that this unique defense approach should be seriously considered to complement other defense approaches.We make four main contributions in this paper: (1) we propose to investigate two important research questions that concern the distinguishability of adversarial examples; (2) we formulate a classification problem in adversarial environments as a multi-label classification problem to answer the two questions; (3) we propose and explore a unique defense approach termed as defensive distinction; (4) we design and perform experiments to empirically demonstrate the strong distinguishability of adversarial examples and the value of our defensive distinction approach.",
"We proposed two important research questions that concern the distinguishability of adversarial examples, and formulated a classification problem in adversarial environments as a multi-label classification problem.",
"We proposed a defensive distinction protection approach to answer the two questions and address the problem.",
"We designed and performed experiments using the MNIST dataset and eight representative cases.",
"Our experimental results demonstrate the strong distinguishability of adversarial examples, and the practical as well as research value of our approach.",
"Our work also suggests many possibilities for the future work such as adopting high-order multi-label learning strategies to further explore the intrinsic correlations of labels as discussed in Section 3.2, investigating the distinguishability of adversarial examples for large tasks such as on ImageNet, and investigating the appropriate ways for integrating defensive distinction with other defense approaches.A APPENDIX: A POTENTIAL EXTENSION TO THE PROBLEM FORMULATION More labels could be added to include more concepts or semantic meanings in our multi-label classification formulation of the problem.",
"For example, y i can be extended to a triplet (a i , b i , c i ) ∈ Y where Y = Y × Z × Y is a 3-ary Cartesian product, and c i ∈ Y can indicate the source example class from which the input example x i was created.",
"In the training set, c i can simply be a i for a natural example, and is assumed to be known for an adversarial example.",
"This more complex version of formulation has its values on further correlating to the labels of source examples, but we do not explore it in the paper."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19999998807907104,
0.19512194395065308,
0.1666666567325592,
0.21621620655059814,
0.1538461446762085,
0.4000000059604645,
0.31111109256744385,
0.4000000059604645,
0.25,
0.1860465109348297,
0.13793103396892548,
0.178571417927742,
0.1860465109348297,
0.30434781312942505,
0.1666666567325592,
0.1428571343421936,
0.16129031777381897,
0.14814814925193787,
0.22727271914482117,
0.12244897335767746,
0.27272728085517883,
0.1702127605676651,
0.0833333283662796,
0.09999999403953552,
0.6666666865348816,
0.3636363744735718,
0.37837836146354675,
0.5333333015441895,
0.2222222238779068,
0.4848484694957733,
0.1927710771560669,
0.11538460850715637,
0.2222222238779068,
0.09999999403953552
] | r1glehC5tQ | true | [
"We propose a defensive distinction protection approach and demonstrate the strong distinguishability of adversarial examples."
] |
[
"For a long time, designing neural architectures that exhibit high performance was considered a dark art that required expert hand-tuning.",
"One of the few well-known guidelines for architecture design is the avoidance of exploding or vanishing gradients.",
"However, even this guideline has remained relatively vague and circumstantial, because there exists no well-defined, gradient-based metric that can be computed {\\it before} training begins and can robustly predict the performance of the network {\\it after} training is complete.\n\n",
"We introduce what is, to the best of our knowledge, the first such metric: the nonlinearity coefficient (NLC).",
"Via an extensive empirical study, we show that the NLC, computed in the network's randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for attaining an optimal test error, at least in fully-connected feedforward networks.",
"The NLC is also conceptually simple, cheap to compute, and is robust to a range of confounders and architectural design choices that comparable metrics are not necessarily robust to.",
"Hence, we argue the NLC is an important tool for architecture search and design, as it can robustly predict poor training outcomes before training even begins.",
"Designing neural architectures that perform well can be a difficult process.",
"In particular, the exploding / vanishing gradient problem has been a major challenge for building very deep neural networks at least since the advent of gradient-based parameter learning (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997; BID4 .",
"However, there is still no consensus about which metric should be used for determining the presence of pathological exploding or vanishing gradients.",
"Should we care about the length of the gradient vector (He et al., 2015) , or about the size of individual components of the gradient vector (Schoenholz et al., 2017; Yang & Schoenholz, 2017; Glorot & Bengio, 2010) , or about the eigenvalues of the Jacobian (Saxe et al., 2014; Pascanu et al., 2013; Pennington et al., 2017) ?",
"Depending on the metric used, different strategies arise for combating exploding and vanishing gradients.",
"For example, manipulating the width of layers as suggested by e.g. Yang & Schoenholz (2018) ; Han et al. (2017) can greatly impact the size of gradient vector components but tends to leave the length of the entire gradient vector relatively unchanged.",
"The popular He initialization for ReLU networks (He et al., 2015) is designed to stabilize gradient vector length, wheareas the popular Xavier initialization for tanh networks (Glorot & Bengio, 2010) is designed to stabilize the size of gradient vector components.",
"While the papers cited above provide much evidence that gradient explosion / vanishing when defined according to some metrics is associated with poor performance when certain architectures are paired with certain optimization algorithms, it is often unclear how general those results are.We make the following core contributions.1.",
"We introduce the nonlinearity coefficient (NLC), a gradient-based measurement of the degree of nonlinearity of a neural network (section 3).2.",
"We show that the NLC, computed in the networks randomly initialized state, is a powerful predictor of test error and that attaining a right-sized NLC is essential for achieving an optimal test error, at least in fully-connected feedforward networks (section 4).The",
"NLC (defined at the top of page 3) combines the Frobenius norm of the Jacobian of a neural network with the global variability of the input data and the global variability of the network's outputs into a single metric. Despite",
"its simplicity, it is tied to many important properties of the network. It is a",
"remarkably accurate predictor of the network's nonlinearity as measured by the relative diameter of the regions in input space that can be well-approximated by a linear function (section 3 and figure 1). It is closely",
"related to the nonlinearity of the individual activation functions used in the network and the degree to which they can be approximated by a linear function (section 5). It is tied to",
"the susceptibility of the network's output to small random input perturbations.",
"We introduced the nonlinearity coefficient, a measure of neural network nonlinearity that is closely tied to the relative diameter of linearly approximable regions in the input space of the network, to the sensitivity of the network output with respect to small input changes, as well as to the linear approximability of activation functions used in the network.",
"Because of this conceptual grounding, because its value in the randomly initialized state is highly predictive of test error while also remaining somewhat stable throughout training, because it is robust to simple network changes that confound other metrics such as raw gradient size or correlation information, because it is cheap to compute and conceptually simple, we argue that the NLC is the best standalone metric for predicting test error in fully-connected feedforward networks.",
"It has clear applications to neural architecture search and design as it allows sub-optimal architectures to be discarded before training.",
"In addition to a right-sized NLC, we also found that avoiding excessive output bias and using skip connections play important independent roles in performance.",
"This paper makes important contributions to several long-standing debates.",
"We clearly show that neural networks are capable of overfitting when the output is too sensitive to small changes in the input.",
"In fact, our random architecture sampling scheme shows that such architectures are not unlikely to arise.",
"However, overfitting seems to be tied not to depth or the number of parameters, but rather to nonlinearity.",
"In contrast to Schoenholz et al. (2017); Xiao et al. (2018) , we find that a very high output sensitivity does not harm trainability, but only generalization.",
"This difference is likely caused by our very extensive learning rate search and 64 bit precision training.While the popular guidance for architecture designers is to avoid exploding and vanishing gradients, we argue that achieving an ideal nonlinearity level is the more important criterion.",
"While the raw gradient is susceptible to confounders and cannot be directly linked to meaningful network properties, the NLC captures what appears to be a deep and robust property.",
"It turns out that architectures that were specifically designed to attain a stable gradient, such as He-initialized ReLU networks, in fact display a divergent NLC at great depth.It has been argued that the strength of deep networks lies in their exponential expressivity (e.g. Raghu et al. (2017); Telgarsky (2015) ).",
"While we show that the NLC indeed exhibits exponential behavior, we find this property to be largely harmful, not helpful, as did e.g. Schoenholz et al. (2017) .",
"While very large datasets likely benefit from greater expressivity, in our study such expressivity only leads to lack of generalization rather than improved trainability.",
"In fact, at least in fully-connected feedforward networks, we conjecture that great depth does not confer significant practical benefit.In future work, we plan to study whether the ideal range of NLC values we discovered for our three datasets (1 N LC 3) holds also for larger datasets and if not, how we might predict this ideal range a priori.",
"We plan to investigate additional causes for why certain architectures perform badly despite a right-sized NLC, as well as extend our study to convolutional and densely-connected networks.",
"We are interested in studying the connection of the NLC to e.g. adversarial robustness, quantizability, sample complexity, training time and training noise.",
"Finally, unfortunately, we found the empirical measurement of the NLC to be too noisy to conclusively detect an underfitting regime.",
"We plan to study this regime in future.",
"The goal of this section is to provide an intuitive, graphical explanation of the NLC in addition to the mathematical derivation and analysis in section 3 for readers interested in developing a better intuition of this concept."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0.1538461446762085,
0.20338982343673706,
0.25,
0.46666666865348816,
0.2916666567325592,
0.12244897335767746,
0.11428570747375488,
0.20689654350280762,
0.17391303181648254,
0.06779660284519196,
0.15789473056793213,
0.09999999403953552,
0.18518517911434174,
0.1492537260055542,
0.25,
0.508474588394165,
0.23529411852359772,
0.2631579041481018,
0.2545454502105713,
0.2745097875595093,
0.1764705777168274,
0.26229506731033325,
0.40963855385780334,
0.09302324801683426,
0.25,
0.060606054961681366,
0.35555556416511536,
0.09999999403953552,
0.14999999105930328,
0.12244897335767746,
0.15625,
0.2083333283662796,
0.22535210847854614,
0.11764705181121826,
0.1249999925494194,
0.2631579041481018,
0.2448979616165161,
0.2666666507720947,
0.1428571343421936,
0.1875,
0.2641509473323822
] | BkeK-nRcFX | true | [
"We introduce the NLC, a metric that is cheap to compute in the networks randomly initialized state and is highly predictive of generalization, at least in fully-connected networks."
] |
[
"This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs, i.e. stacked convolution layers in a GHN is equivalent to a single yet wide convolution layer.",
"The revealed equivalence, on the theoretical side, can be regarded as a constructive manifestation of the universal approximation theorem Cybenko (1989); Hornik (1991).",
"In practice, it has profound and multi-fold implications.",
"For network visualization, the constructed deep epitomes at each layer provide a visualization of network internal representation that does not rely on the input data.",
"Moreover, deep epitomes allows the direct extraction of features in just one step, without resorting to regularized optimizations used in existing visualization tools.",
"Despite the great success in recent years, neural networks have long been criticized for their blackbox natures and the lack of comprehensive understanding of underlying mechanisms e.g. in BID3 ; BID12 ; BID30 ; BID29 .",
"The earliest effort to interpret neural computing in terms of logic inferencing indeed dated back to the seminal paper of BID24 , followed by recent attempts to provide explanations from a multitude of perspectives (reviewed in Section 2).As",
"an alternative approach to deciphering the mysterious neural networks, various network visualization techniques have been actively developed in recent years (e.g. BID11 ; BID28 and references therein). Such",
"visualizations not only provide general understanding about the learning process of networks, but also disclose operational instructions on how to adjust network architecture for performance improvements. Majority",
"of visualization approaches probe the relations between input data and neuron activations, by showing either how neurons react to some sample inputs or, reversely, how desired activations are attained or maximized with regularized reconstruction of inputs BID7 ; BID20 ; BID36 ; BID31 ; BID23 ; BID33 ; BID0 . Input data",
"are invariably used in visualization to probe how the information flow is transformed through the different layers of neural networks. Although insightful",
", visualization approaches as such have to face a critical open question: to what extend the conclusions drawn from the analysis of sample inputs can be safely applied to new data?In order to furnish",
"confirmatory answer to the above-mentioned question, ideally, one would have to employ a visualization tool that is independent of input data. This ambitious mission",
"appears impossible at a first glance -the final neuron outputs cannot be readily decomposed as the product of inputs and neuron weights because the thresholding in ReLU activations is input data dependent. By following the principle",
"of fuzzy logic, BID8 recently demonstrated that ReLUs are not essential and can be removed from the so called generalized hamming network (GHN) . This simplified network architecture",
", as reviewed in section 3, facilitates the analysis of neuron interplay based on connection weights only. Consequently, stacked convolution layers",
"can be merged into a single hidden layer without taking into account of inputs from previous layers. Equivalent weights of the merged GHN, which",
"is called deep epitome, are computed analytically without resorting to any learning or optimization processes. Moreover, deep epitomes constructed at different",
"layers can be readily applied to new data to extract hierarchical features in just one step (section 4).",
"We have proposed in this paper a novel network representation, called deep epitome, which is proved to be equivalent to stacked convolution layers in generalized hamming networks (GHN).",
"Theoretically this representation provides a constructive manifestation for the universal approximation theorem BID6 BID15 , which states that a single layered network, in principle, is able to approximate any arbitrary decision functions up to any desired accuracy.",
"On the other hand, it is a dominant belief BID10 , which is supported by abundant empirical evidences, that deep structures play an indispensable role in decomposing the combinatorial optimization problem into layer-wise manageable sub-problems.",
"We concur with the view and supplement with our demonstration that, a trained deep GHN can be converted into a simplified networks for the sake of high interpretability, reduced algorithmic and computational complexities.The success of our endeavours lies in the rigorous derivation of convolving epitomes across different layers in eq. (4) and (5), which set due bias terms analytically without resorting to optimizationbased approaches.",
"Consequently, deep epitomes at all convolution layers can be computed without using any input data.",
"Moreover, deep epitomes can be used to extract hierarchical features in just one step at any desired layers.",
"In the light of fuzzy logic, the normalized epitome (definition 3) encodes a grade of fitness between the learnt templates and given inputs at certain spatial locations.",
"This fuzzy logic interpretation furnishes a refreshing perspective that, in our view, will open the black box of deep learning eventually.APPENDIX A Definition",
"1. For two given tuples DISPLAYFORM0 .",
". , y L }, the hamming outer product, denoted , is a set of corresponding elements x DISPLAYFORM1 . . L , where ⊕ denotes the generalized hamming distance operator. Then the product has following properties, DISPLAYFORM2 K but they are permutation equivalent, in the sense that there exist permutation matrices P and Q such that x DISPLAYFORM3",
"2. non-linear: in contrast to the standard outer product which is bilinear in each of its entry, the hamming outer product is non-linear since in general x DISPLAYFORM4 where µ ∈ R is a scalar.",
"Therefore, the hamming outer product defined as such is a pseudo outer product.",
"DISPLAYFORM5 M because of the associativity of GHD.",
"This property holds for arbitrary number of tuples."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.04651162400841713,
0.0714285671710968,
0,
0.06896551698446274,
0.1428571343421936,
0.10810810327529907,
0.14999999105930328,
0.11428571492433548,
0.060606058686971664,
0.0416666641831398,
0.14814814925193787,
0.0555555522441864,
0.06896551698446274,
0.10256409645080566,
0.0624999962747097,
0.14814814925193787,
0.07407406717538834,
0,
0.08695651590824127,
0.0624999962747097,
0.09999999403953552,
0.10256409645080566,
0.06557376682758331,
0,
0.0833333283662796,
0.06666666269302368,
0.13333332538604736,
0,
0.07547169923782349,
0.11764705926179886,
0.11764705181121826,
0.1538461446762085,
0
] | r1PaPUsXM | true | [
"bridge the gap in soft computing"
] |
[
"There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. ",
"The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. ",
"The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. ",
"While the former paradigm is well-resolved, the latter is not. ",
"Existing zero-confidence attacks either introduce significant approximation errors, or are too time-consuming. ",
"We therefore propose MarginAttack, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. ",
"Our experiments show that MarginAttack is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. ",
"In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.",
"Adversarial attack refers to the task of finding small and imperceptible input transformations that cause a neural network classifier to misclassify.",
"White-box attacks are a subset of attacks that have access to gradient information of the target network.",
"In this paper, we will focus on the white-box attacks.",
"An important class of input transformations is adding small perturbations to the input.",
"There are two major paradigms of adversarial attacks that attempt to impose input perturbations.",
"The first paradigm, called the fix-perturbation attack, tries to find perturbations that are most likely to cause misclassification, with the constraint that the norm of the perturbations cannot exceed a given level.",
"Since the perturbation level is fixed, fix-perturbation attacks may fail to find any adversarial samples for inputs that are far away from the decision boundary.",
"The second paradigm, called the zero-confidence attack, tries to find the smallest perturbations that are guaranteed to cause misclassification, regardless of how large the perturbations are.",
"Since they aim to minimize the perturbation norm, zero-confidence attacks usually find adversarial samples that ride right on the decision boundaries, and hence the name \"zero-confidence\".",
"The resulting perturbation norm is also known as the margin of an input feature to the decision boundary.",
"Both of these paradigms are essentially constrained optimization problems.",
"The former has a simple convex constraint (perturbation norm), but a non-convex target (classification loss or logit differences).",
"In contrast, the latter has a non-convex constraint (classification loss or logit differences), but a simple convex target (perturbation norm).Despite",
"their similarity as optimization problems, the two paradigms differ significantly in terms of difficulty. The fix-perturbation",
"attack problem is easier. The state-of-the-art",
"algorithms, including projected gradient descent (PGD) BID10 and distributional adversarial attack (Zheng et al., 2018) , can achieve both high efficiency and high success rate, and often come with theoretical convergence guarantee. On the other hand, the",
"zero-confidence attack problem is much more challenging. Existing methods are either",
"not strong enough or too slow. For example, DeepFool BID11",
"and fast gradient sign method (FGSM) BID3 BID7 b) linearizes the constraint, and solves the simplified optimization problem with a simple convex target and a linear constraint. However, due to the linearization",
"approximation errors, the solution can be far from optimal. As another extreme, L-BFGS BID18",
"and Carlini-Wagner (CW) BID1 convert the optimization problem into a Lagrangian, and the Lagrangian multiplier is determined through grid search or binary search. These attacks are generally much",
"stronger and theoretically grounded, but can be very slow.The necessity of developing a better zero-confidence attack is evident. The zero-confidence attack paradigm",
"is a more realistic attack setting. More importantly, it aims to measure",
"the margin of each individual token, which lends more insight into the data distribution and adversarial robustness. Motivated by this, we propose MARGINATTACK",
", a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Specifically, MARGINATTACK iterates between",
"two moves. The first move, called restoration move, linearizes",
"the constraint and solves the simplified optimization problem, just like DeepFool and FGSM; the second move, called projection move, explores even smaller perturbations without changing the constraint values significantly. By construction, MARGINATTACK inherits the efficiency",
"in DeepFool and FGSM, and improves over them in terms of accuracy with a convergence guarantee. Our experiments show that MARGINAT-TACK attack is able",
"to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than CW, and",
"in some cases comparable to DeepFool and FGSM.",
"We have proposed MARGINATTACK, a novel zero-confidence adversarial attack algorithm that is better able to find a smaller perturbation that results in misclassification.",
"Both theoretical and empirical analyses have demonstrated that MARGINATTACK is an efficient, reliable and accurate adversarial attack algorithm, and establishes a new state-of-the-art among zero-confidence attacks.",
"What is more, MARGINATTACK still has room for improvement.",
"So far, only two settings of a (k) and b (k) are developed, but MARGINATTACK will work for many other settings, as long as assumption 5 is satisfied.",
"Authors hereby encourage exploring novel and better settings for the MARGINATTACK framework, and promote MARGINATTACK as a new robustness evaluation measure or baseline in the field of adversarial attack and defense.",
"This supplementary material aims to prove Thm.",
"1. Without the loss of generality, K in Eq. (9) in set to",
"0. Before we prove the theorem, we need to introduce some lemmas.",
"Lemma 1.1.",
"If assumption 3 in Thm.",
"1 holds, then ∀x ∈ B DISPLAYFORM0 Proof.",
"According to Eq. (5), for 2 norm, DISPLAYFORM1 for ∞ norm, DISPLAYFORM2 Lemma 1.2.",
"Given all the assumptions in Thm.",
"1, where DISPLAYFORM3 and assuming DISPLAYFORM4 where DISPLAYFORM5 A and B are defined in Eq. (32).According",
"to assumption 8, this implies DISPLAYFORM6 at the rate of at least 1/n ν .Proof. As",
"a digression",
", the second term in Eq. FORMULA0 is well defined, because DISPLAYFORM7 is upper bounded by Lem. 1.1 and assumptions",
"3.Back to proving the lemma, we will prove that each restoration move will bring c(x (k) ) closer to 0, while each projection move will not change c(x (k) ) much.First, for the restoration move DISPLAYFORM8 The first line is from the generalization of Mean-Value Theorem with jump discontinuities, and ξ = tz (k) + (1 − t)x (k) and t is a real number in [0, 1]. The second line is",
"from Eq. (4). The last line is from",
"assumptions 4 and 7 and Eq. (19).Next, for the projection",
"move DISPLAYFORM9 The first line is from the fact that assumption 3 implies that c(x) is M -Lipschitz continuous. DISPLAYFORM10 for some M",
"d and M s . To see this, for 2 norm",
"DISPLAYFORM11 where b is defined as the maximum perturbation norm ( 2 ) within B, i.e. DISPLAYFORM12 which is well defined because B is a tight set. For ∞ norm, DISPLAYFORM13",
"Note that Eq. (26) also holds for other norms. With Eq. (26) and assumption",
"8, Eq. FORMULA1 becomes DISPLAYFORM14 Combining Eqs. FORMULA1 and FORMULA2 we have DISPLAYFORM15 where DISPLAYFORM16 According to assumption 7, 0 < A < 1. Also, according to Eq. FORMULA1 , DISPLAYFORM17 and thus DISPLAYFORM18 If DISPLAYFORM19 Otherwise, Eq. (34) implies DISPLAYFORM20 This concludes the proof. Lemma 1.3. Given all the assumptions",
"in Thm. 1,",
"and assuming DISPLAYFORM21 Proof.",
"First, for restoration move DISPLAYFORM22",
"δm 2 (38) Line 4 is given by Eq. (3). Line 5 is derived from Lem. 1.1. The last",
"line is from Lem. 1.2."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0.14814814925193787,
0.05882352590560913,
0,
0.0833333283662796,
0.3125,
0.1818181723356247,
0.2222222238779068,
0.19354838132858276,
0.07692307233810425,
0,
0,
0.07999999821186066,
0.054054051637649536,
0.05714285373687744,
0.0624999962747097,
0.17142856121063232,
0,
0,
0.0714285671710968,
0.06451612710952759,
0,
0.11764705181121826,
0.13636362552642822,
0.1818181723356247,
0,
0.10526315122842789,
0,
0.1111111044883728,
0.32258063554763794,
0.17391303181648254,
0.12121211737394333,
0.25,
0,
0.05128204822540283,
0.1818181723356247,
0.25,
0.10526315122842789,
0.25,
0.2857142686843872,
0,
0.10810810327529907,
0.21052631735801697,
0.1111111044883728,
0,
0,
0,
0,
0,
0,
0.07692307233810425,
0,
0.06666666269302368,
0.060606058686971664,
0,
0.0952380895614624,
0,
0.0952380895614624,
0.04999999701976776,
0.08695651590824127,
0.07407406717538834,
0,
0.13333332538604736,
0,
0,
0
] | B1gHjoRqYQ | true | [
"This paper introduces MarginAttack, a stronger and faster zero-confidence adversarial attack."
] |
[
"Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.",
"Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly.",
"The degree of variation and diversity of inputs makes this a difficult task.",
"Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data.",
"We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity.",
"We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input.",
"Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.",
"The world is infinite in its variations, but our models are finite.",
"While inputs differ in many dimensions and degrees, a deep network is only so deep and wide.",
"To nevertheless cope with variation, there are two main strategies: static enumeration and dynamic adaptation.",
"Static enumeration defines a set of variations, processes them all, and combines the results.",
"For example, pyramids enumerate scales (Burt & Adelson, 1983; Kanazawa et al., 2014) and group-structured filters enumerate orientations (Cohen & Welling, 2017) .",
"Dynamic adaptation selects a single variation, conditioned on the input, and transforms processing accordingly.",
"For example, scale-space search (Lindeberg, 1994; Lowe, 2004) selects a scale transformation from input statistics and end-to-end dynamic networks select geometric transformations (Jaderberg et al., 2015; Dai et al., 2017) , parameter transformations (De Brabandere et al., 2016) , and feature transformations (Perez et al., 2017) directly from the input.",
"Enumeration and adaptation both help, but are limited by computation and supervision, because the sets enumerated and ranges selected are bounded by model size and training data.",
"Deep networks for vision exploit enumeration and adaptation, but generalization is still limited.",
"Networks are enumerative, by convolving with a set of filters to cover different variations then summing across them to pool the variants (LeCun et al., 1998; Krizhevsky et al., 2012; Zeiler & Fergus, 2014) .",
"For scale variation, image pyramids (Burt & Adelson, 1983 ) and feature pyramids (Shelhamer et al., 2017; enumerate scales, process each, and combine the outputs.",
"However, static models have only so many filters and scales, and may lack the capacity or supervision for the full data distribution.",
"Dynamic models instead adapt to each input (Olshausen et al., 1993) .",
"The landmark scale invariant feature transform (Lowe, 2004 ) extracts a representation adapted to scales and orientations predicted from input statistics.",
"Dynamic networks, including spatial transformers (Jaderberg et al., 2015) and deformable convolution (Dai et al., 2017) , make these predictions and transformations end-to-end.",
"Predictive dynamic inference is however insufficient: the predictor may be imperfect in its architecture or parameters, or may not generalize to data it was not designed or optimized for.",
"Bottom-up prediction, with only one step of adaptation, can struggle to counter variations in scale and other factors that are too large or unfamiliar.",
"To further address the kinds and degrees of variations, including extreme out-of-distribution shifts, we devise a complementary third strategy: unsupervised optimization during inference.",
"We define an unsupervised objective and a constrained set of variables for effective gradient optimization.",
"Our novel inference objective minimizes the entropy of the model output to optimize for confidence.",
"The variables optimized over are task parameters for pixel-wise classification and structure parameters Accuracy is high and prediction entropy is low for training and testing at the same scale (left).",
"Accuracy drops and entropy rises when tested at 3x the training scale, even when the network is equipped with dynamic receptive fields to adapt to scale variation (middle).",
"Previous approaches are limited to one-step, feedforward scale prediction, and are unable to handle a 3x shift.",
"In contrast our iterative gradient optimization approach is able to adapt further (right), and achieve higher accuracy by minimizing entropy with respect to task and scale parameters.",
"for receptive field adaptation, which are updated together to compensate for scale shifts.",
"This optimization functions as top-down feedback to iteratively adjust feedforward inference.",
"In effect, we update the trained model parameters to tune a custom model for each test input.",
"Optimization during inference extends dynamic adaptation past the present limits of supervision and computation.",
"Unsupervised optimization boosts generalization beyond training by top-down tuning during testing.",
"Iterative updates decouple the amount of computation, and thus degree of adaptation, from the network architecture.",
"Our main result is to demonstrate that adaptation by entropy optimization improves accuracy and generalization beyond adaptation by prediction (see Figure 1 ), which we show for semantic segmentation by inference time optimization of a dynamic Gaussian receptive field model (Shelhamer et al., 2019) on the PASCAL VOC (Everingham et al., 2010) dataset.",
"Dynamic inference by optimization iteratively adapts the model to each input.",
"Our results show that optimization to minimize entropy with respect to score and scale parameters extends adaptivity for semantic segmentation beyond feedforward dynamic inference.",
"Generalization improves when the training and testing scales differ substantially, and modest refinement is achieved even when the training and testing scales are the same.",
"While we focus on entropy minimization and scale inference, more optimization for dynamic inference schemes are potentially possible through the choice of objective and variables.",
"is the out-of-distribution prediction for our iterative optimization method.",
"Our method corrects noisy, over-segmented fragments and false negatives in true segments."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1428571343421936,
0.051282044500112534,
0.12903225421905518,
0.15686273574829102,
0.42424240708351135,
0.14999999105930328,
0.2631579041481018,
0,
0,
0,
0.060606054961681366,
0,
0,
0.0357142798602581,
0,
0.0624999962747097,
0.07843136787414551,
0.04651162400841713,
0.051282044500112534,
0.06451612710952759,
0.09999999403953552,
0,
0.13636362552642822,
0.1395348757505417,
0.1904761791229248,
0.1764705777168274,
0.24242423474788666,
0.13636362552642822,
0.13636362552642822,
0.1764705777168274,
0.13636362552642822,
0.19354838132858276,
0.5333333015441895,
0.11428570747375488,
0.1818181723356247,
0.2666666507720947,
0.060606054961681366,
0.17910447716712952,
0.2666666507720947,
0.2857142686843872,
0,
0.2790697515010834,
0.2142857164144516,
0
] | SyxiRJStwr | true | [
"Unsupervised optimization during inference gives top-down feedback to iteratively adjust feedforward prediction of scale variation for more equivariant recognition."
] |
[
"The Deep Image Prior (DIP, Ulyanov et al., 2017) is a fascinating recent approach for recovering images which appear natural, yet is not fully understood.",
"This work aims at shedding some further light on this approach by investigating the properties of the early outputs of the DIP.",
"First, we show that these early iterations demonstrate invariance to adversarial perturbations by classifying progressive DIP outputs and using a novel saliency map approach.",
"Next we explore using DIP as a defence against adversaries, showing good potential.",
"Finally, we examine the adversarial invariancy of the early DIP outputs, and hypothesize that these outputs may remove non-robust image features.",
"By comparing classification confidence values we show some evidence confirming this hypothesis."
] | [
1,
0,
0,
0,
0,
0
] | [
0.25641024112701416,
0.1818181723356247,
0,
0,
0.11764705181121826,
0
] | HylijQ35IS | false | [
"We investigate properties of the recently introduced Deep Image Prior (Ulyanov et al, 2017)"
] |
[
"In this paper, we address the challenge of limited labeled data and class imbalance problem for machine learning-based rumor detection on social media.",
"We present an offline data augmentation method based on semantic relatedness for rumor detection.",
"To this end, unlabeled social media data is exploited to augment limited labeled data.",
"A context-aware neural language model and a large credibility-focused Twitter corpus are employed to learn effective representations of rumor tweets for semantic relatedness measurement.",
"A language model fine-tuned with the a large domain-specific corpus shows a dramatic improvement on training data augmentation for rumor detection over pretrained language models.",
"We conduct experiments on six different real-world events based on five publicly available data sets and one augmented data set.",
"Our experiments show that the proposed method allows us to generate a larger training data with reasonable quality via weak supervision.",
"We present preliminary results achieved using a state-of-the-art neural network model with augmented data for rumor detection.",
"Research areas that have recently been received much attention in using Machine Learning (ML) and Natural Language Processing for automated rumor and fake news detection BID5 BID11 BID19 BID25 BID22 and fact-checking BID2 BID21 BID10 .",
"One major bottleneck of state-of-the-art (SoA) ML methods is that they require a vast amount of labeled data to be trained and manual labeling of rumors source on social media requires special skills and time-consuming BID26 .",
"Due to limited labeled training data, existing neural networks (NNs) for rumor detection usually have shallow architecture BID3 BID13 .",
"The scarcity of labeled data is a major challenge of studying rumors on social media BID0 .",
"Another problem is that publicly available data sets for rumor-related tasks such as PHEME data BID10 suffer from imbalanced class distributions .",
"Existing methods for handling the class imbalance problem (e.g., oversampling and the use of synthetic data BID24 ) may cause over-fitting and poor generalization performance.",
"A methodology for rumor data augmentation with the minimum of human supervision is necessary.",
"Previous studies presented that rumors can evolve into many variants which share similar propagation patterns in their early stage BID14 BID3 BID1 BID4 .",
"Based on this hypothesis, we argue that enriching existing labeled data with unlabeled source tweets conveying the same or similar meanings is a promising attempt for rumor detection methods that rely on the structure of rumor propagation in social media.",
"In this work, we propose a novel data augmentation method for automatic rumor detection based on semantic relatedness.",
"We exploit a publicly available paraphrase identification corpus as well as context-sensitive embeddings of labeled references and unlabeled candidate source tweets.",
"Pairwise similarity is used to guide the assignment of pseudolabels to unlabeled tweets.",
"ELMo BID18 , a SoA context-sensitive neural language model (NLM), is fine-tuned on a large credibility-focused social media corpus and used to encode tweets.",
"Our results show that data augmentation can contribute to rumor detection with deep learning with increased training data size and a reasonable level of quality.",
"This has potential for further performance improvements using deeper NNs.",
"We present data augmentation results for three events and the performance of a SoA DNN model for rumor detection with augmented data in Section 5.",
"Data Augmentation Before filtering out source tweets without replies, 1,238 rumors and 3,714 non-rumors are collected for \"bostonbombings\".",
"After filtering, 165 rumors and 228 non-rumors remain.",
"Although the augmented data size is very limited for \"bostonbombings\", experiments on \"sydneysiege\" and \"ottawashooting\" show encouraging results.",
"A total of 25,486 rumors and 76,106 non-rumors are additionally obtained for \"sydneysiege\", and 21,519 rumors and 62,590 non-rumors are additionally obtained for \"ottawashooting\".",
"We make our augmented data publicly available 4 .",
"Rumor Detection We conduct rumor detection experiments using two different data sets: (1) PHEME5, (2) PHEME5 with the \"bostonbombings\" data (\"PHEME5+Boston\").",
"We employ BID10 's method as a SoA baseline model for rumor detection with slight modifications.",
"For the sake of simplicity, we modify the implementation of \"MTL2 Veracity+Detection\" for rumor detection only.",
"We construct input by using a source tweet and the top (i.e., most recent) 24 replies in this task.",
"We perform leave-one-out cross-validation (LOOCV) on the PHEME5 and augmented data sets.",
"The overall experimental results for rumor detection are presented in TAB4 .",
"TAB5 shows LOOCV results.",
"We observe that overall performance decreases with the augmented data (i.e., PHEME5+Boston).",
"The \"fergusonunrest\" is the most difficult event for a rumor detection model as it has a unique class distribution distinguished from all other events BID10 .",
"It is worth noting that our data augmentation improves the performance of rumor detection on the \"fergusonunrest\".",
"The completion of data augmentation for events other than \"'bostonbombings\" has potential to boost overall and per event performance of rumor detection.",
"We present a methodology of data augmentation for rumor detection that exploits semantic relatedness between limited labeled data and unlabeled data.",
"This study is part of further research that aims to use a massive amount of publicly available unlabeled Twitter data and the potential of DNNs in a wide range of tasks related to rumors on social media.",
"Our current research has demonstrated the potential efficiency and effectiveness of semantically augmented data in combating the labeled data scarcity and class imbalance problems of publicly available rumor data sets.",
"In future work, we plan to augment data for more events to build comprehensive data sets for rumor detection, and conduct experiments on rumor detection via deep learning.",
"We will evaluate the effectiveness of augmented data in alleviating over-fitting and its usefulness in facilitating deeper NNs for rumor detection.",
"Further experiments will be conducted to examine the generalization of rumor detection models on unseen rumors."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.3636363446712494,
0.4000000059604645,
0.23529411852359772,
0.2666666507720947,
0.22727271914482117,
0.3589743673801422,
0.0952380895614624,
0.2631579041481018,
0.1111111044883728,
0.2222222238779068,
0.19999998807907104,
0.277777761220932,
0.19512194395065308,
0.17391303181648254,
0.2857142686843872,
0.045454539358615875,
0.28070175647735596,
0.41025641560554504,
0.39024388790130615,
0.12121211737394333,
0.13636362552642822,
0.22727271914482117,
0.06451612710952759,
0.3181818127632141,
0.10256409645080566,
0.06896550953388214,
0.25641024112701416,
0.1621621549129486,
0.27586206793785095,
0.1463414579629898,
0.21621620655059814,
0.17142856121063232,
0.1428571343421936,
0.24242423474788666,
0.1249999925494194,
0,
0.11428570747375488,
0.13333332538604736,
0.21621620655059814,
0.2380952388048172,
0.6499999761581421,
0.30188679695129395,
0.30434781312942505,
0.2222222238779068,
0.2926829159259796,
0.1621621549129486
] | SyxCysRNdV | true | [
"We propose a methodology of augmenting publicly available data for rumor studies based on samantic relatedness between limited labeled and unlabeled data."
] |
[
"Self-attention is a useful mechanism to build generative models for language and images.",
"It determines the importance of context elements by comparing each element to the current time step.",
"In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results.",
"Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention.",
"We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.",
"The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic.",
"Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models.",
"On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.",
"There has been much recent progress in sequence modeling through recurrent neural networks (RNN; BID54 , convolutional networks (CNN; BID28 BID14 BID7 ) and self-attention models BID40 BID58 .",
"RNNs integrate context information by updating a hidden state at every time-step, CNNs summarize a fixed size context through multiple layers, while as self-attention directly summarizes all context.",
"Attention assigns context elements attention weights which define a weighted sum over context representations BID52 BID8 BID36 .",
"Source-target attention summarizes information from another sequence such as in machine translation while as self-attention operates over the current sequence.",
"Self-attention has been formulated as content-based where attention weights are computed by comparing the current time-step to all elements in the context FIG0 ).",
"The ability to compute comparisons over such unrestricted context sizes are seen as a key characteristic of self-attention BID58 .",
"However, the ability of self-attention to model long-range dependencies has recently come into question BID57 and the unlimited context size is computationally very challenging due to the quadratic complexity in the input length.",
"Furthermore, in practice long sequences require the introduction of hierarchies .In",
"this paper, we introduce lightweight convolutions which are depth-wise separable BID51 BID7 , softmax-normalized and share weights over the channel dimension. The",
"result is a convolution with several orders of magnitude fewer weights than a standard nonseparable convolution. Different",
"to self-attention, lightweight convolutions reuse the same weights for context elements, regardless of the current time-step.Dynamic convolutions build on lightweight convolutions by predicting a different convolution kernel at every time-step. The kernel",
"is a function of the current time-step only as opposed to the entire context as in self-attention ( FIG0 . Dynamic convolutions",
"are similar to locally connected layers in the sense that the weights change at every position, however, the difference is that weights are dynamically generated by the model rather than fixed after training BID30 BID56 BID6 . Our approach also bears",
"similarity to location-based attention which does not access the context to determine attention weights, however, we do not directly take the attention weights from the previous time-step into account BID8 BID36 . BID49 reduce complexity",
"by performing attention within blocks of the input sequence and BID48 BID50 perform more fine-grained attention over each feature. BID47 and BID17 use input-dependent",
"filters for text classification tasks.Our experiments show that lightweight convolutions perform competitively to strong self-attention results and that dynamic convolutions can perform even better. On WMT English-German translation dynamic",
"convolutions achieve a new state of the art of 29.7 BLEU, on WMT English-French they match the best reported result in the literature, and on IWSLT German-English dynamic convolutions outperform self-attention by 0.8 BLEU. Dynamic convolutions achieve 20% faster runtime",
"than a highly-optimized self-attention baseline. For language modeling on the Billion word benchmark",
"dynamic convolutions perform as well as or better than self-attention and on CNN-DailyMail abstractive document summarization we outperform a strong self-attention model.",
"We presented lightweight convolutions which perform competitively to the best reported results in the literature despite their simplicity.",
"They have a very small parameter footprint and the kernel does not change over time-steps.",
"This demonstrates that self-attention is not critical to achieve good accuracy on the language tasks we considered.Dynamic convolutions build on lightweight convolutions by predicting a different kernel at every time-step, similar to the attention weights computed by self-attention.",
"The dynamic weights are a function of the current time-step only rather than the entire context.Our experiments show that lightweight convolutions can outperform a strong self-attention baseline on WMT'17 Chinese-English translation, IWSLT'14 German-English translation and CNNDailyMail summarization.",
"Dynamic convolutions improve further and achieve a new state of the art on the test set of WMT'14 English-German.",
"Both lightweight convolution and dynamic convolution are 20% faster at runtime than self-attention.",
"On Billion word language modeling we achieve comparable results to self-attention.We are excited about the future of dynamic convolutions and plan to apply them to other tasks such as question answering and computer vision where inputs are even larger than the tasks we considered in this paper."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.17391303181648254,
0.07999999821186066,
0.20689654350280762,
0.260869562625885,
0.13793103396892548,
0.0714285671710968,
0.27586206793785095,
0.07407406717538834,
0.054054051637649536,
0.05714285373687744,
0,
0.0714285671710968,
0.12121211737394333,
0.20689654350280762,
0.10256409645080566,
0,
0.1875,
0,
0.2702702581882477,
0.27586206793785095,
0.09090908616781235,
0.052631575614213943,
0,
0.277777761220932,
0.17777776718139648,
0.260869562625885,
0.20000000298023224,
0.2222222238779068,
0,
0.3720930218696594,
0.21739129722118378,
0.2222222238779068,
0.27272728085517883,
0.23529411852359772
] | SkVhlh09tX | true | [
"Dynamic lightweight convolutions are competitive to self-attention on language tasks."
] |
[
"Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.",
"By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework.",
"To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence.",
"We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution.",
"We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent (OMD) converges in all coherent problems.",
"Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games.",
"We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets).",
"The surge of recent breakthroughs in deep learning has sparked significant interest in solving optimization problems that are universally considered hard.",
"Accordingly, the need for an effective theory has two different sides: first, a deeper understanding would help demystify the reasons behind the success and/or failures of different training algorithms; second, theoretical advances can inspire effective algorithmic tweaks leading to concrete performance gains.",
"For instance, using tools from the theory of dynamical systems, BID28 BID29 and Panageas & Piliouras [2017] showed that a wide variety of first-order methods (including gradient descent and mirror descent) almost always avoid saddle points.",
"More generally, the optimization and machine learning communities alike have dedicated significant effort in understanding non-convex landscapes by searching for properties which could be leveraged for efficient training.",
"As an example, the \"strict saddle\" property was shown to hold in a wide range of salient objective functions ranging from low-rank matrix factorization BID8 BID20 and dictionary learning [Sun et al., 2017a,b] , to principal component analysis BID19 , and many other models.On the other hand, adversarial deep learning is nowhere near as well understood, especially in the case of generative adversarial networks (GANs) BID22 .",
"Despite an immense amount of recent scrutiny, our theoretical understanding cannot boast similar breakthroughs as in \"single-agent\" deep learning.",
"Because of this, a considerable corpus of work has been devoted to exploring and enhancing the stability of GANs, including techniques as diverse as the use of Wasserstein metrics , critic gradient penalties BID23 , feature matching, minibatch discrimination, etc.",
"[Radford et al., 2016; Salimans et al., 2016] .Even",
"before the advent of GANs, work on adaptive dynamics in general bilinear zero-sum games (e.g. Rock-Paper-Scissors) established that they lead to persistent, chaotic, recurrent (i.e. cycle-like) behavior [Sato et al., 2002; Piliouras & Shamma, 2014; Piliouras et al., 2014] . Recently",
", simple specific instances of cycle-like behavior in bilinear games have been revisited mainly through the lens of GANs BID15 Mescheder et al., 2018; Papadimitriou & Piliouras, 2018] . Two important",
"recent results have established unified pictures about the behavior of continuous and discrete-time first order methods in bilinear games: First, established that continuous-time descent methods in zero-sum games (e.g., gradient descent, follow-the-regularized-leader and the like) are Poincaré recurrent, returning arbitrarily closely to their initial conditions infinitely many times. Second, BID4",
"examined the discrete-time analogues (gradient descent, multiplicative weights and follow-the-regularized-leader) showing that orbits spiral slowly outwards. These recurrent",
"systems have formal connections to Hamiltonian dynamics and do not behave in a gradient-like fashion BID6 ; BID5 . This is a critical",
"failure of descent methods, but one which BID15 showed can be overcome through \"optimism\", interpreted in this context as an \"extra-gradient\" step that pushes the training process further along the incumbent gradient -as a result, optimistic gradient descent (OGD) succeeds in cases where vanilla gradient descent (GD) fails (specifically, unconstrained bilinear saddle-point problems).A common theme in the",
"above is that, to obtain a principled methodology for training GANs, it is beneficial to first establish improvements in a more restricted setting, and then test whether these gains carry over to more demanding learning environments. Following these theoretical",
"breadcrumbs, we focus on a class of non-monotone problems whose solutions are related to those of a naturally associated variational inequality, a property which we call coherence. Then, hoping to overcome the",
"shortcomings of ordinary descent methods by exploiting the problem's geometry, we examine the convergence of MD in coherent problems. On the positive side, we show",
"that if a problem is strictly coherent (a condition satisfied by all strictly convex-concave problems), MD converges almost surely, even in stochastic problems (Theorem 3.1). However, under null coherence",
"(the \"saturated\" opposite to strict coherence), MD spirals outwards from the problem's solutions and may cycle in perpetuity. The null coherence property covers",
"all bilinear models, so this result encompasses fully the analysis of BID4 for GD and follow-the-regularized-leader (FTRL) in general bilinear zero-sum games within our coherence framework. Thus, in and by themselves, gradient/mirror",
"descent methods do not suffice for training convoluted, adversarial deep learning models.To mitigate this deficiency, we consider the addition of an extra-gradient step which looks ahead and takes an additional step along a \"future\" gradient. This technique was first introduced by BID27",
"and subsequently gained great popularity as the basis of the mirror-prox algorithm of Nemirovski [2004] which achieves an optimal O(1/n) convergence rate in Lipschitz monotone variational inequalities (see also Nesterov, 2007 , for a primal-dual variant of the method and BID25 , for an extension to stochastic variational inequalities and saddle-point problems).In the learning literature, the extra-gradient",
"technique (or, sometimes, a variant thereof) is often referred to as optimistic mirror descent (OMD) [Rakhlin & Sridharan, 2013] and its effectiveness in GAN training was recently examined by BID15 and Yadav et al. [2018] (the latter involving a damping mechanism for only one of the players). More recently, BID21 considered a variant method",
"which incorporates a mechanism that \"extrapolates from the past\" in order to circumvent the need for a second oracle call in the extra-gradient step. Specifically, BID21 showed that the extra-gradient",
"algorithm with gradient reuse converges a) geometrically in strongly monotone, deterministic",
"variational inequalities; and b) ergodically in general stochastic variational inequalities",
", achieving in that case an oracle complexity bound that is √ 13/7/2 ≈ 68% of a bound previously established by BID25 for the mirror-prox algorithm.However, beyond convex-concave problems, averaging offers no tangible benefits because there is no way to relate the value of the ergodic average to the value of the iterates. As a result, moving closer to GAN training requires changing",
"both the algorithm's output as well as the accompanying analysis. With this as our guiding principle, we first show that the last",
"iterate of OMD converges in all coherent problems, including null-coherent ones. As a special case, this generalizes and extends the results of",
"Noor et al. [2011] for OGD in pseudo-monotone problems, and also settles in the affirmative an open question of BID15 concerning the convergence of the last iterate of OGD in nonlinear problems. Going beyond deterministic problems, we also show that OMD converges",
"with probability 1 even in stochastic saddle-point problems that are strictly coherent. These results complement the existing literature on the topic by showing",
"that a cheap extra-gradient add-on can lead to significant performance gains when applied to state-of-the-art methods (such as Adam). We validate this prediction for a wide array of standard GAN models in Section",
"5.",
"Our results suggest that the implementation of an optimistic, extra-gradient step is a flexible add-on that can be easily attached to a wide variety of GAN training methods (RMSProp, Adam, SGA, etc.) , and provides noticeable gains in performance and stability.",
"From a theoretical standpoint, the dichotomy between strict and null coherence provides a justification of why this is so: optimism eliminates cycles and, in so doing, stabilizes the method.",
"We find this property particularly appealing because it paves the way to a local analysis with provable convergence guarantees in multi-modal settings, and beyond zero-sum games; we intend to examine this question in future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.13636362552642822,
0.07999999821186066,
0.10526315122842789,
0.18518517911434174,
0.17391303181648254,
0.23529411852359772,
0.22641508281230927,
0.09302324801683426,
0.19672130048274994,
0.17543859779834747,
0.1599999964237213,
0.14999999105930328,
0.1428571343421936,
0.17543859779834747,
0,
0.158730149269104,
0.11538460850715637,
0.20000000298023224,
0.09756097197532654,
0.13636362552642822,
0.19178082048892975,
0.1428571343421936,
0.11999999731779099,
0.27272728085517883,
0.038461532443761826,
0.17391303181648254,
0.1538461446762085,
0.25,
0.23529411852359772,
0.19178082048892975,
0.21739129722118378,
0.05882352590560913,
0.1249999925494194,
0.19178082048892975,
0.09756097197532654,
0.22727271914482117,
0.25,
0.13333332538604736,
0.3333333432674408,
0.4590163826942444,
0.1599999964237213,
0.2181818187236786
] | Bkg8jjC9KQ | true | [
"We show how the inclusion of an extra-gradient step in first-order GAN training methods can improve stability and lead to improved convergence results."
] |
[
"Batch normalization (BN) is often used in an attempt to stabilize and accelerate training in deep neural networks.",
"In many cases it indeed decreases the number of parameter updates required to achieve low training error.",
"However, it also reduces robustness to small adversarial input perturbations and common corruptions by double-digit percentages, as we show on five standard datasets.",
"Furthermore, we find that substituting weight decay for BN is sufficient to nullify a relationship between adversarial vulnerability and the input dimension.",
"A recent mean-field analysis found that BN induces gradient explosion when used on multiple layers, but this cannot fully explain the vulnerability we observe, given that it occurs already for a single BN layer.",
"We argue that the actual cause is the tilting of the decision boundary with respect to the nearest-centroid classifier along input dimensions of low variance.",
"As a result, the constant introduced for numerical stability in the BN step acts as an important hyperparameter that can be tuned to recover some robustness at the cost of standard test accuracy.",
"We explain this mechanism explicitly on a linear ``toy model and show in experiments that it still holds for nonlinear ``real-world models.",
"BN is a standard component of modern deep neural networks, and tends to make the training process less sensitive to the choice of hyperparameters in many cases (Ioffe & Szegedy, 2015) .",
"While ease of training is desirable for model developers, an important concern among stakeholders is that of model robustness during deployment to plausible, previously unseen inputs.",
"The adversarial examples phenomenon has exposed unstable predictions across state-of-the-art models (Szegedy et al., 2014) .",
"This has led to a variety of methods that aim to improve robustness, but doing so effectively remains a challenge (Athalye et al., 2018; Schott et al., 2019; Hendrycks & Dietterich, 2019; Jacobsen et al., 2019a) .",
"We believe that a prerequisite to developing methods that increase robustness is an understanding of factors that reduce it.",
"Approaches for improving robustness often begin with existing neural network architectures-that use BN-and patch them against specific attacks, e.g., through inclusion of adversarial examples during training (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2017; Madry et al., 2018 ).",
"An implicit assumption is that BN itself does not reduce robustness, however, recent initialization-time analyses have shown that it causes exploding gradients, and increased sensitivity to input perturbations as the network depth increases (Yang et al., 2019; Labatie, 2019) .",
"In this work, we consider the impact of BN in practical scenarios in terms of robustness to common corruptions (Hendrycks & Dietterich, 2019) and adversarial examples (Szegedy et al., 2014) , finding that BN induced sensitivity remains a concern even in cases where its use appears benign on the basis of clean test accuracy, and when only one BN layer is used.",
"The frequently made observation that adversarial vulnerability can scale with the input dimension (Goodfellow et al., 2015; Gilmer et al., 2018; Simon-Gabriel et al., 2019) highlights the importance of identifying regularizers as more than merely a way to improve test accuracy.",
"In particular, BN was a confounding factor in Simon-Gabriel et al. (2019) , making the results of their initialization-time analysis hold after training.",
"By adding 2 regularization and removing BN, we show that there is no inherent relationship between adversarial vulnerability and the input dimension.",
"We found that there is no free lunch with batch norm when model robustness is a concern: the accelerated training properties and occasionally higher clean test accuracy come at the cost of increased vulnerability, both to additive noise and for adversarial perturbations.",
"We have shown that there is no inherent relationship between the input dimension and vulnerability.",
"Our results highlight the importance of identifying the disparate mechanisms of regularization techniques."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.27586206793785095,
0.06896550953388214,
0.4000000059604645,
0.1764705777168274,
0,
0.060606054961681366,
0.1395348757505417,
0.05882352590560913,
0.09999999403953552,
0.11428570747375488,
0.1428571343421936,
0.0476190447807312,
0.13793103396892548,
0.11764705181121826,
0.07843136787414551,
0.21212120354175568,
0.08163265138864517,
0,
0.12121211737394333,
0.19607843458652496,
0.07407406717538834,
0
] | H1x-3xSKDr | true | [
"Batch normalization reduces robustness at test-time to common corruptions and adversarial examples."
] |
[
"Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning.",
"In this paper, we explore learning an optimization algorithm for training shallow neural nets.",
"Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms.",
"We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture.",
"More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.",
"Machine learning is centred on the philosophy that learning patterns automatically from data is generally better than meticulously crafting rules by hand.",
"This data-driven approach has delivered: today, machine learning techniques can be found in a wide range of application areas, both in AI and beyond.",
"Yet, there is one domain that has conspicuously been left untouched by machine learning: the design of tools that power machine learning itself.One of the most widely used tools in machine learning is optimization algorithms.",
"We have grown accustomed to seeing an optimization algorithm as a black box that takes in a model that we design and the data that we collect and outputs the optimal model parameters.",
"The optimization algorithm itself largely stays static: its design is reserved for human experts, who must toil through many rounds of theoretical analysis and empirical validation to devise a better optimization algorithm.",
"Given this state of affairs, perhaps it is time for us to start practicing what we preach and learn how to learn.Recently, BID20 and BID0 introduced two different frameworks for learning optimization algorithms.",
"Whereas BID0 focuses on learning an optimization algorithm for training models on a particular task, BID20 sets a more ambitious objective of learning an optimization algorithm for training models that is task-independent.",
"We study the latter paradigm in this paper and develop a method for learning an optimization algorithm for high-dimensional stochastic optimization problems, like the problem of training shallow neural nets.Under the \"Learning to Optimize\" framework proposed by BID20 , the problem of learning an optimization algorithm is formulated as a reinforcement learning problem.",
"We consider the general structure of an unconstrained continuous optimization algorithm, as shown in Algorithm 1.",
"In each iteration, the algorithm takes a step ∆x and uses it to update the current iterate x (i) .",
"In hand-engineered optimization algorithms, ∆x is computed using some fixed formula φ that depends on the objective function, the current iterate and past iterates.",
"Often, it is simply a function of the current and past gradients.",
"Different choices of φ yield different optimization algorithms and so each optimization algorithm is essentially characterized by its update formula φ.",
"Hence, by learning φ, we can learn an optimization algorithm.",
"BID20 observed that an optimization algorithm can be viewed as a Markov decision process (MDP), where the state includes the current iterate, the action is the step vector ∆x end if x (i) ← x (i−1) + ∆x end for and the policy is the update formula φ.",
"Hence, the problem of learning φ simply reduces to a policy search problem.In this paper, we build on the method proposed in BID20 and develop an extension that is suited to learning optimization algorithms for high-dimensional stochastic problems.",
"We use it to learn an optimization algorithm for training shallow neural nets and show that it outperforms popular hand-engineered optimization algorithms like ADAM BID18 , AdaGrad BID10 and RMSprop BID28 and an optimization algorithm learned using the supervised learning method proposed in BID0 .",
"Furthermore, we demonstrate that our optimization algorithm learned from the experience of training on MNIST generalizes to training on other datasets that have very dissimilar statistics, like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.",
"In this paper, we presented a new method for learning optimization algorithms for high-dimensional stochastic problems.",
"We applied the method to learning an optimization algorithm for training shallow neural nets.",
"We showed that the algorithm learned using our method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on unrelated tasks/datasets like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100.",
"We also demonstrated that the learned optimization algorithm is robust to changes in the stochasticity of gradients and the neural net architecture."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.1666666567325592,
0.25,
0.08695651590824127,
0.35555556416511536,
0.2926829159259796,
0.06666666269302368,
0,
0.10526315122842789,
0.3333333432674408,
0.14999999105930328,
0.14999999105930328,
0.24242423474788666,
0.20000000298023224,
0.23076923191547394,
0.1428571343421936,
0.12121211737394333,
0,
0.13793103396892548,
0.4000000059604645,
0.16326530277729034,
0.17777776718139648,
0.2978723347187042,
0.25,
0.07999999821186066,
0.4166666567325592,
0.25,
0.3333333432674408
] | BkM27IxR- | true | [
"We learn an optimization algorithm that generalizes to unseen tasks"
] |
[
"The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks.",
"Nevertheless, the functional form of this dependency remains elusive.",
"In this work, we present a functional form which approximates well the generalization error in practice.",
"Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales.",
"Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks.",
"We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.",
"With the success and heightened adoption of neural networks for real world tasks, some questions remain poorly answered.",
"For a given task and model architecture, how much data would one require to reach a prescribed performance level?",
"How big a model would be needed?",
"Addressing such questions is made especially difficult by the mounting evidence that large, deep neural networks trained on large-scale data outperform their smaller counterparts, rendering the training of high performance models prohibitively costly.",
"Indeed, in the absence of practical answers to the above questions, surrogate approaches have proven useful.",
"One such common approach is model scaling, where one designs and compares small-scale models, and applies the obtained architectural principles at a larger scale (e.g., Liu et al., 2018; Real et al., 2018; Zoph et al., 2018) .",
"Despite these heuristics being widely used to various degrees of success, the relation between the performance of a model in the small-and large-scale settings is not well understood.",
"Hence, exploring the limitations or improving the efficiency of such methods remains subject to trial and error.",
"In this work we circle back to the fundamental question: what is the (functional) relation between generalization error and model and dataset sizes?",
"Critically, we capitalize on the concept of model scaling in its strictest form: we consider the case where there is some given scaling policy that completely defines how to scale up a model from small to large scales.",
"We include in this context all model parameters, such that traversing from one scale (in which all parameters are known) to another requires no additional resources for specifying the model (e.g., architecture search/design).",
"We empirically explore the behavior of the generalization error over a wide range of datasets and models in vision and language tasks.",
"While the error landscape seems fairly complex at first glance, we observe the emergence of several key characteristics shared across benchmarks and domains.",
"Chief among these characteristics is the emergence of regions where power-law behavior approximates the error well both with respect to data size, when holding model size fixed, and vice versa.",
"Motivated by these observations, we establish criteria which a function approximating the error landscape should meet.",
"We propose an intuitive candidate for such a function and evaluate its quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen) to large scale (unseen) errors.",
"Critically, our functional approximation of the error depends on both model and data sizes.",
"We find that this function leads to a high quality fit and extrapolation.",
"For instance, the mean and standard deviation of the relative errors are under 2% when fitting across all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet (Russakovsky et al., 2015) and WikiText-103 (Merity et al., 2016) datasets, with similar results for other datasets.",
"To the best of our knowledge, this is the first work that provides simultaneously:",
"• A joint functional form of the generalization error landscape-as dependent on both data and model size-with few, interpretable degrees of freedom (section 5).",
"• Direct and complete specification (via the scaling policy) of the model configuration attaining said generalization error across model and dataset sizes.",
"• Highly accurate approximation of error measurements across model and data scales via the functional form, evaluated on different models, datasets, and tasks (section 6 ).",
"• Highly accurate error prediction from small to large model and data (section 7).",
"We conclude with a discussion of some implications of our findings as a practical and principled tool for understanding network design at small scale and for efficient computation and trade-off design in general.",
"We hope this work also provides a useful empirical leg to stand on and an invitation to search for a theory of generalization error which accounts for our findings.",
"In this work, through insights gained by the joint examination of the dependencies of generalization error on both model and data size, we arrive at criteria for functions consistent with the form of the generalization error under a given scaling policy.",
"We consider one such function and find it to be in very good agreement with the actual behavior of the error landscape.",
"Indeed, the agreement is strong enough that extrapolation from small to large scale becomes feasible: the function predicts the behavior of the generalization error in practice for the practical case of scaling models and data.",
"We discuss several example implications of knowing such a functional form.",
"Small-scale network development: At the core of small fidelity searches is the notion of performance rank comparison between models.",
"However, small scale and large scale ranks are not assured to be consistent.",
"If indeed a functional form such as empirically found in this work holds very generally, then in contrast, one can safely assess scaling rank between models at small scale, with the assurance that it remains consistent.",
"This suggests that one would be well served by searching over scaling policies; a pertinent example of such a success is Tan & Le (2019) .",
"The functional form also explains the limitation of small-scale search: once reaching the random-guess error level, where the sensitivity to scaling vanishes, the informativeness of ranking diminishes.",
"Finally, the functional form allows direct usage of differentiable methods for NAS.",
"Principled design: Knowing the error landscape function facilitates reasoning about the choice of (m, n) attaining a specified error level.",
"In other words, for any given error level, one can solve Eq. 5 for m, n based on small-scale measurements.",
"Thus, one can quantitatively answer design questions regarding the expected (in particular, large-scale) relations between m, n, and .",
"In fact, Eq. 5 provides direct ansewrs to questions such as \"how much data would one require to reach a prescribed performance level?\" or \"how big a model would be needed?\"",
"Imposing constraints is also straightforward.",
"For instance, consider the following question: \"What is the maximal model size possibly needed (useful), when the data is limited in size, n = n lim (for a given model architecture and scaling policy)?\"",
"For a fixed dataset size, model scaling eventually contributes marginally to error reduction and becomes negligible when bm Similarly, The maximal useful amount of data for a limited sized model m lim is:",
"Moreover, Eq. 5 allows for complex design trade-offs.",
"Generally, given some design-tradeoff cost function C(m, n, ), one can minimize such cost s.t. Eq. 5.",
"For example, consider the case of optimizing for efficient computation which has both practical and environmental importance (Schwartz et al., 2019) .",
"Since the number of FLOPs during training is ∝ m · n (for constant epoch budget), the trade-off cost function may be formulated as C(FLOPS, ) = C(mn, ).",
"Further, since constant error contour is very well approximated by c = 1 n α + b m β (Eq. 5), dataset and models may be scaled with optimal resource efficiency with no effect on performance by solving for:",
"The solution gives us the optimal-computational-efficiency ratio of model to data size:",
"Limitations: We have made a few simplifying assumptions in our choice of approximating function, in particular in how to model the transition from the initial random-guess error level and the union of the random-guess level of the two scenarios (small model with large data and large model with small data).",
"We leave a more detailed examination of the behavior of the transitions from random-guess error levels and refinements of the functional form to future work.",
"Critically, the restrictive nature of our scaling framework (all parameters and hyperparameters described by a policy) is both a blessing and a challenge.",
"The blessing comes in fulfilling the goal of finding simultaneously both the form of the generalization error and the full specification of the model and hyperparameters that attain it across scales.",
"The challenge is that we have demonstrated in this work only the case of constant hyper-parameters.",
"We conjecture that the relation between model configuration and hyperparameter choice (Zela et al., 2018) may entail the potential to formulate hyperparameter-scaling policies similar in nature to the model-scaling polices, and that these too fall under the scope of the form we find in this work.",
"This too will be the subject of future work.",
"We hope that this work will bring the actual functional form of the generalization error in this practical case of scaling to the fore, both in practice and as an empirical leg to stand on in the quest for its theoretical origins.",
"Scaling the models' width is performed by multiplying the number of channels in each convolutional layer and the width of the hidden linear layers by a constant factor and rounding to the nearest integer.",
"The ranges of width scales (and data scales) for the main experiments are detailed in Table 1b ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.277777761220932,
0.08695651590824127,
0.2666666507720947,
0.3829787075519562,
0.1621621549129486,
0.22857142984867096,
0.1249999925494194,
0.1249999925494194,
0.0952380895614624,
0.043478257954120636,
0.06896550953388214,
0.12765957415103912,
0.10256409645080566,
0.19999998807907104,
0.2857142686843872,
0.12765957415103912,
0.1702127605676651,
0.3030303120613098,
0.2222222238779068,
0.1860465109348297,
0.19999998807907104,
0.1860465109348297,
0.2857142686843872,
0.14814814925193787,
0.158730149269104,
0.07407406717538834,
0.2702702581882477,
0.3636363446712494,
0.307692289352417,
0.2142857164144516,
0.09756097197532654,
0.25,
0.2083333283662796,
0.2857142686843872,
0.1818181723356247,
0.07999999821186066,
0.06451612710952759,
0.07692307233810425,
0.08163265138864517,
0,
0.10810810327529907,
0.07692307233810425,
0.1249999925494194,
0.060606054961681366,
0.1249999925494194,
0.04878048226237297,
0,
0.1395348757505417,
0.13333332538604736,
0,
0,
0.1666666567325592,
0.04878048226237297,
0.07843136787414551,
0.1538461446762085,
0.20408162474632263,
0.22857142984867096,
0.11764705181121826,
0.42105263471603394,
0.06666666269302368,
0.15094339847564697,
0.08695651590824127,
0.2083333283662796,
0.09999999403953552,
0.12903225421905518
] | ryenvpEKDr | true | [
"We predict the generalization error and specify the model which attains it across model/data scales."
] |
[
"Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.",
"We present an unsupervised learning algorithm to train agents to achieve perceptually-specified goals using only a stream of observations and actions.",
"Our agent simultaneously learns a goal-conditioned policy and a goal achievement reward function that measures how similar a state is to the goal state.",
"This dual optimization leads to a co-operative game, giving rise to a learned reward function that reflects similarity in controllable aspects of the environment instead of distance in the space of observations.",
"We demonstrate the efficacy of our agent to learn, in an unsupervised manner, to reach a diverse set of goals on three domains -- Atari, the DeepMind Control Suite and DeepMind Lab.",
"Currently, the best performing methods on many reinforcement learning benchmark problems combine model-free reinforcement learning methods with policies represented using deep neural networks BID18 BID8 .",
"Despite reaching or surpassing human-level performance on many challenging tasks, deep model-free reinforcement learning methods that learn purely from the reward signal learn in a way that differs greatly from the manner in which humans learn.",
"In the case of learning to play a video game, a human player not only acquires a strategy for achieving a high score, but also gains a degree of mastery of the environment in the process.",
"Notably, a human player quickly learns which aspects of the environment are under their control as well as how to control them, as evidenced by their ability to rapidly adapt to novel reward functions BID22 .Focusing",
"learning on mastery of the environment instead of optimizing a single scalar reward function has many potential benefits. One benefit",
"is that learning is possible even in the absence of an extrinsic reward signal or with an extrinsic reward signal that is very sparse. Another benefit",
"is that an agent that has fully mastered its environment should be able to reach arbitrary achievable goals, which would allow it to generalize to tasks on which it wasn't explicitly trained. Building reinforcement",
"learning agents that aim for environment mastery instead of or in addition to learning about a scalar reward signal is currently an open challenge.One way to represent such knowledge about an environment is using an environment model. Modelbased reinforcement",
"learning methods aim to learn accurate environment models and use them either for planning or for training a policy. While learning accurate",
"environment models of some visually rich environments is now possible BID33 BID7 BID15 using learned models in model-based reinforcement learning has proved to be challenging and model-free approaches still dominate common benchmarks.We present a new model-free agent architecture of Discriminative Embedding Reward Networks, or DISCERN for short. DISCERN learns to control",
"an environment in an unsupervised way by learning purely from the stream of observations and actions. The aim of our agent is to",
"learn a goal-conditioned policy π θ (a|s; s g ) BID19 BID37 which can reach any goal state s g that is reachable from the current state s. We show how to learn a goal",
"achievement reward function r(s; s g ) that measures how similar state s is to state s g using a mutual information objective at the same time as learning π θ (a|s; s g ). The resulting learned reward",
"function r(s; s g ) measures similarity in the space of controllable aspects of the environment instead of in the space of raw observations. Crucially, the DISCERN architecture",
"is able to deal with goal states that are not perfectly reachable, for example, due to the presence of distractor objects that are not under the agent's control. In such cases the goal-conditioned",
"policy learned by DISCERN tends to seek states where the controllable elements match those in the goal state as closely as possible.We demonstrate the effectiveness of our approach on three domains -Atari games, continuous control tasks from the DeepMind Control Suite, and DeepMind Lab. We show that our agent learns to successfully",
"achieve a wide variety of visually-specified goals, discovering underlying degrees of controllability of an environment in a purely unsupervised manner and without access to an extrinsic reward signal.",
"We have presented a system that can learn to achieve goals, specified in the form of observations from the environment, in a purely unsupervised fashion, i.e. without any extrinsic rewards or expert demonstrations.",
"Integral to this system is a powerful and principled discriminative reward learning objective, which we have demonstrated can recover the dominant underlying degrees of controllability in a variety of visual domains.In this work, we have adopted a fixed episode length of T in the interest of simplicity and computational efficiency.",
"This implicitly assumes not only that all sampled goals are approximately achievable in T steps, but that the policy need not be concerned with finishing in less than the allotted number of steps.",
"Both of these limitations could be addressed by considering schemes for early termination based on the embedding, though care must be taken not to deleteriously impact training by terminating episodes too early based on a poorly trained reward embedding.",
"Relatedly, our goal selection strategy is agnostic to both the state of the environment at the commencement of the goal episode and the current skill profile of the policy, utilizing at most the content of the goal itself to drive the evolution of the goal buffer G. We view it as highly encouraging that learning proceeds using such a naive goal selection strategy, however more sophisticated strategies, such as tracking and sampling from the frontier of currently achievable goals BID16 , may yield substantial improvements.DISCERN's ability to automatically discover controllable aspects of the observation space is a highly desirable property in the pursuit of robust low-level control.",
"A natural next step is the incorporation of DISCERN into a deep hierarchical reinforcement learning setup BID46 BID24 BID30 where a meta-policy for proposing goals is learned after or in tandem with a low-level controller, i.e. by optimizing an extrinsic reward signal."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.3030303120613098,
0.1818181723356247,
0.10256409645080566,
0.1463414579629898,
0.11428570747375488,
0.1395348757505417,
0.19512194395065308,
0.09302324801683426,
0.1249999925494194,
0.0624999962747097,
0.0952380895614624,
0.2222222238779068,
0.3125,
0.1666666567325592,
0.11764705181121826,
0.1463414579629898,
0.13636362552642822,
0,
0.09999999403953552,
0.06896551698446274,
0.1621621549129486,
0.1818181723356247,
0.11320754140615463,
0.0952380895614624,
0.12765957415103912,
0.09090908616781235,
0.18867924809455872
] | r1eVMnA9K7 | true | [
"Unsupervised reinforcement learning method for learning a policy to robustly achieve perceptually specified goals."
] |
[
"State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process.\n",
"In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence.\n",
"Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity.\n",
"On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.",
"The size of modern neural sequence models (Gehring et al., 2017; Vaswani et al., 2017; Devlin et al., 2019) can amount to billions of parameters (Radford et al., 2019) .",
"For example, the winning entry of the WMT'19 news machine translation task in English-German used an ensemble totaling two billion parameters .",
"While large models are required to do better on hard examples, small models are likely to perform as well on easy ones, e.g., the aforementioned ensemble is probably not required to translate a short phrase such as \"Thank you\".",
"However, current models apply the same amount of computation regardless of whether the input is easy or hard.",
"In this paper, we propose Transformers which adapt the number of layers to each input in order to achieve a good speed-accuracy trade off at inference time.",
"We extend Graves (2016; ACT) who introduced dynamic computation to recurrent neural networks in several ways: we apply different layers at each stage, we investigate a range of designs and training targets for the halting module and we explicitly supervise through simple oracles to achieve good performance on large-scale tasks.",
"Universal Transformers (UT) rely on ACT for dynamic computation and repeatedly apply the same layer (Dehghani et al., 2018) .",
"Our work considers a variety of mechanisms to estimate the network depth and applies a different layer at each step.",
"Moreover, Dehghani et al. (2018) fix the number of steps for large-scale machine translation whereas we vary the number of steps to demonstrate substantial improvements in speed at no loss in accuracy.",
"UT uses a layer which contains as many weights as an entire standard Transformer and this layer is applied several times which impacts speed.",
"Our approach does not increase the size of individual layers.",
"We also extend the resource efficient object classification work of Huang et al. (2017) to structured prediction where dynamic computation decisions impact future computation.",
"Related work from computer vision includes Teerapittayanon et al. (2016) ; Figurnov et al. (2017) and Wang et al. (2018) who explored the idea of dynamic routing either by exiting early or by skipping layers.",
"We encode the input sequence using a standard Transformer encoder to generate the output sequence with a varying amount of computation in the decoder network.",
"Dynamic computation poses a challenge for self-attention because omitted layers in prior time-steps may be required in the future.",
"We experiment with two approaches to address this and show that a simple approach works well ( §2).",
"Next, we investigate different mechanisms to control the amount of computation in the decoder network, either for the entire sequence or on a per-token basis.",
"This includes multinomial and binomial classifiers supervised by the model likelihood or whether the argmax is already correct as well as simply thresholding the model score ( §3).",
"Experiments on IWSLT14 German-English Figure 1: Training regimes for decoder networks able to emit outputs at any layer.",
"Aligned training optimizes all output classifiers C n simultaneously assuming all previous hidden states for the current layer are available.",
"Mixed training samples M paths of random exits at which the model is assumed to have exited; missing previous hidden states are copied from below.",
"translation (Cettolo et al., 2014) as well as WMT'14 English-French translation show that we can match the performance of well tuned baseline models at up to 76% less computation ( §4).",
"We extended anytime prediction to the structured prediction setting and introduced simple but effective methods to equip sequence models to make predictions at different points in the network.",
"We compared a number of different mechanisms to predict the required network depth and find that a simple correctness based geometric-like classifier obtains the best trade-off between speed and accuracy.",
"Results show that the number of decoder layers can be reduced by more than three quarters at no loss in accuracy compared to a well tuned Transformer baseline."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.17777776718139648,
0.24390242993831635,
0.11764705181121826,
0.12121211737394333,
0.1249999925494194,
0.043478257954120636,
0.3571428656578064,
0.21052631735801697,
0.17241379618644714,
0.1875,
0.19354838132858276,
0.1538461446762085,
0,
0.1818181723356247,
0.17142856121063232,
0.0952380895614624,
0.3030303120613098,
0.19999998807907104,
0.06666666269302368,
0.2857142686843872,
0.1111111044883728,
0.06666666269302368,
0.12903225421905518,
0.1621621549129486,
0.19512194395065308,
0.0555555522441864,
0.1538461446762085,
0.14999999105930328
] | SJg7KhVKPH | true | [
"Sequence model that dynamically adjusts the amount of computation for each input."
] |
[
"Variational Auto-encoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point (Kingma and Welling, 2013).",
"Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z).",
"This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z)(e.g. Makhzani et al. (2015); Tomczak and Welling (2017)).\n\n",
"In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions.",
"Our analysis builds on two observations: (1) the generative model is unidentifiable – there exist many generative models that explain the data equally well, each with different (and potentially unwanted) properties and (2) bias in the VAE objective – the VAE objective may prefer generative models that explain the data poorly but have posteriors that are easy to approximate.",
"We present a novel inference method, LiBI, mitigating the problems identified in our analysis.",
"On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so."
] | [
0,
0,
0,
0,
0,
1,
0
] | [
0.145454540848732,
0.08571428060531616,
0.12765957415103912,
0.3414634168148041,
0.158730149269104,
0.375,
0.17391303181648254
] | HkgL5khEYH | false | [
"We characterize problematic global optima of the VAE objective and present a novel inference method to avoid such optima."
] |
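In the record just above, the highest entry in the scores column (0.375) belongs to the same sentence that carries the 1 in the labels column, and the scores track word overlap with the one-sentence target. The snippet below is a rough, self-contained sketch of a ROUGE-1-style F1 that reproduces this kind of per-sentence score; the exact ROUGE variant, tokenizer, and preprocessing behind the column are not stated in this preview, so the numbers only approximately match.

```python
from collections import Counter

def rouge1_f1(sentence: str, target: str) -> float:
    """Unigram-overlap F1 between a candidate sentence and the target summary.
    Plain lowercased whitespace tokenization -- only an approximation of whatever
    produced the scores column above."""
    cand = Counter(sentence.lower().split())
    ref = Counter(target.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

target = ("We characterize problematic global optima of the VAE objective and present "
          "a novel inference method to avoid such optima.")
candidate = ("We present a novel inference method, LiBI, mitigating the problems "
             "identified in our analysis.")
print(round(rouge1_f1(candidate, target), 3))  # ~0.364 here vs. 0.375 recorded above:
                                               # the original pipeline tokenized differently
```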
[
"Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, relation extraction, and question answering.",
"Supervised learning from labeled hypernym sources, such as WordNet, limit the coverage of these models, which can be addressed by learning hypernyms from unlabeled text. ",
"Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. ",
"This paper introduces {\\it distributional inclusion vector embedding (DIVE)}, a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts.",
"In experimental evaluations more comprehensive than any previous literature of which we are aware---evaluating on 11 datasets using multiple existing as well as newly proposed scoring functions---we find that our method provides up to double the precision of previous unsupervised methods, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.",
"In addition, the meaning of each dimension in DIVE is interpretable, which leads to a novel approach on word sense disambiguation as another promising application of DIVE.",
"Numerous applications benefit from compactly representing context distributions, which assign meaning to objects under the rubric of distributional semantics.",
"In natural language processing, distributional semantics has long been used to assign meanings to words (that is, to lexemes in the dictionary, not individual instances of word tokens).",
"The meaning of a word in the distributional sense is often taken to be the set of textual contexts (nearby tokens) in which that word appears, represented as a large sparse bag of words (SBOW).",
"Without any supervision, word2vec BID22 , among other approaches based on matrix factorization BID20 , successfully compress the SBOW into a much lower dimensional embedding space, increasing the scalability and applicability of the embeddings while preserving (or even improving) the correlation of geometric embedding similarities with human word similarity judgments.While embedding models have achieved impressive results, context distributions capture more semantic features than just word similarity.",
"The distributional inclusion hypothesis (DIH) BID49 BID11 BID6 posits that the context set of a word tends to be a subset of the contexts of its hypernyms.",
"For a concrete example, most adjectives that can be applied to poodle can also be applied to dog, because dog is a hypernym of poodle.",
"For instance, both can be obedient.",
"However, the converse is not necessarily true -a dog can be straight-haired but a poodle cannot.",
"Therefore, dog tends to have a broader context set than poodle.",
"Many asymmetric scoring functions comparing SBOW based on DIH have been developed for automatic hypernymy detection BID49 BID11 BID38 .Hypernymy",
"detection plays a key role in many challenging NLP tasks, such as textual entailment BID34 , coreference BID32 , relation extraction BID8 and question answering BID13 . Leveraging",
"the variety of contexts and inclusion properties in context distributions can greatly increase the ability to discover taxonomic structure among words BID38 . The inability",
"to preserve these features limits the semantic representation power and downstream applicability of some popular existing unsupervised learning approaches such as word2vec.Several recently proposed methods aim to encode hypernym relations between words in dense embeddings, such as Gaussian embedding BID45 BID0 , order embedding BID44 , H-feature detector BID33 , HyperScore (Nguyen et al., 2017) , dual tensor BID12 , Poincaré embedding BID28 , and LEAR BID46 . However, the",
"methods focus on supervised or semi-supervised setting BID44 BID33 BID27 BID12 BID46 , do not learn from raw text BID28 or lack comprehensive experiments on the hypernym detection task BID45 BID0 .Recent studies",
"BID21 BID38 have underscored the difficulty of generalizing supervised hypernymy annotations to unseen pairs -classifiers often effectively memorize prototypical hypernyms ('general' words) and ignore relations between words. These findings",
"motivate us to develop more accurate and scalable unsupervised embeddings to detect hypernymy and propose several scoring functions to analyze the embeddings from different perspectives.",
"We show the micro average AP@all on 10 datasets using different hypernymy scoring functions in TAB2 .",
"We can see the combination functions such as C·∆S and W·∆S perform the best overall.",
"Among the unnormalized inclusion based scoring functions, CDE works the best.",
"AL 1 performs well compared with other functions which remove the frequency signal such as Word2vec, Cosine, and SLQS Row.",
"The summation is the most robust generality measurement.",
"In the In TAB4 , DIVE with two of the best scoring functions (C·∆S and W·∆S) is compared with the previous unsupervised state-of-the-art approaches based on SBOW on different datasets.There are several reasons which might cause the large performance gaps in some datasets.",
"In addition to the effectiveness of DIVE, some improvements come from our proposed scoring functions.",
"The fact that every paper uses a different training corpus also affects the performances.",
"Furthermore, BID38 select the scoring functions and feature space for the first 4 datasets based on AP@100, which we believe is too sensitive to the hyper-parameter settings of different methods.",
"To isolate the impact of each factor, we perform a more comprehensive comparison next.",
"In TAB6 , we first confirm the finding of the previous review study of BID38 : there is no single hypernymy scoring function which always outperforms others.",
"One of the main reasons is that different datasets collect negative samples differently.",
"This is also why we evaluate our method on many datasets to make sure our conclusions hold in general.",
"For example, if negative samples come from random word pairs (e.g. WordNet dataset), a symmetric similarity measure is already a pretty good scoring function.",
"On the other hand, negative samples come from related or similar words in HyperNet, EVALution, Lenci/Benotto, and Weeds, so only computing generality difference leads to the best (or close to the best) performance.",
"The negative samples in many datasets are composed of both random samples and similar words (such as BLESS), so the combination of similarity and generality difference yields the most stable results.DIVE performs similar or better on all the scoring functions compared with SBOW consistently across all datasets in TAB6 , while using many fewer dimensions (see TAB8 ).",
"Its results on combination scoring functions outperform SBOW Freq.",
"Meanwhile, its results on AL 1 outperform SBOW PPMI.",
"The fact that combination scoring functions (i.e., W·∆S or C·∆S) usually outperform generality functions suggests that only memorizing general words is not sufficient.",
"The best average performance on 4 and 10 datasets are both produced by W·∆S on DIVE.SBOW PPMI improves the combination functions from SBOW Freq but sacrifices AP on the inclusion functions.",
"It generally hurts performance to change the frequency sampling of PPMI (PPMI w/ FW) or compute SBOW PPMI on the whole WaCkypedia (all wiki) instead of the first 51.2 million tokens.",
"The similar trend can also be seen in TAB7 .",
"Note that AL 1 completely fails in HyperLex dataset using SBOW PPMI, which suggests that PPMI might not necessarily preserve the distributional inclusion property, even though it can have good performance on combination functions.Removing the PMI filter from DIVE slightly drops the overall precision while removing frequency weights on shifted PMI (w/o FW) leads to poor performances.",
"K-means (Freq NMF) produces similar AP compared with SBOW Freq, but has worse AL 1 scores.",
"Its best AP scores on different datasets are also significantly worse than the best AP of DIVE.",
"This means that only making word2vec (skip-grams with negative sampling) non-negative or naively accumulating topic distribution in contexts cannot lead to satisfactory embeddings.",
"In addition to hypernymy detection, BID0 show that the mixture of Gaussian distributions can also be used to discover multiple senses of each word.",
"In our qualitative experiment, we show that DIVE can achieve the similar goal without fixing the number of senses before training the embedding.Recall that each dimension roughly corresponds to one topic.",
"Given a query word, the higher embedding value on a dimension implies higher likelihood to observe the word in the context of the topic.",
"The embedding of a polysemy would have high values on different groups of topics/dimensions.",
"This allows us to discover the senses by clustering the topics/dimensions of the polysemy.",
"We use the embedding values as the feature each dimension, compute the pairwise similarity between dimensions, and apply spectral clustering BID41 to group topics as shown in the TAB9 .",
"See more implementation details in the supplementary materials.In the word sense disambiguation tasks, it is usually challenging to determine how many senses/clusters each word should have.",
"Many existing approaches fix the number of senses before training the embedding BID42 BID0 .",
"BID26 make the number of clusters approximately proportional to the diversity of the context, but the assumption does not always hold.",
"Furthermore, the training process cannot capture different granularity of senses.",
"For instance, race in the car context could share the same sense with the race in the game topic because they all mean contest, but the race in the car context actually refers to the specific contest of speed.",
"Therefore, they can also be viewed as separate senses (like the results in TAB9 ).",
"This means the correct number of clusters is not unique, and the methods, which fixes the cluster numbers, need to re-train the embedding many times to capture such granularity.In our approach, clustering dimensions is done after the training process of DIVE is completed, so it is fairly efficient to change the cluster numbers and hierarchical clustering is also an option.",
"Similar to our method, BID31 also discover word senses by graph-based clustering.",
"The main difference is that they cluster the top n words which are most related to the query word instead of topics.",
"However, choosing the hyper-parameter n is difficult.",
"Large n would make graph clustering algorithm inefficient, while small n would make less frequent senses difficult to discover.",
"Compressing unsupervised SBOW models into a compact representation is challenging while preserving the inclusion, generality, and similarity signals which are important for hypernym detection.",
"Our experiments suggest that simple baselines such as accumulating K-mean clusters and non-negative skip-grams do not lead to satisfactory performances in this task.To achieve this goal, we proposed an interpretable and scalable embedding method called distributional inclusion vector embedding (DIVE) by performing non-negative matrix factorization (NMF) on a weighted PMI matrix.",
"We demonstrate that scoring functions which measure inclusion and generality properties in SBOW can also be applied to DIVE to detect hypernymy, and DIVE performs the best on average, slightly better than SBOW while using many fewer dimensions.Our experiments also indicate that unsupervised scoring functions, which combine similarity and generality measurements, work the best in general, but no one scoring function dominates across all datasets.",
"A combination of unsupervised DIVE with the proposed scoring functions produces new state-of-the-art performances on many datasets under the unsupervised setup.Finally, a qualitative experiment shows that clusters of the topics discovered by DIVE often correspond to the word senses, which allow us to do word sense disambiguation without the need to know the number of senses before training the embeddings.",
"In addition to the unsupervised approach, we also compare DIVE with semi-supervised approaches.",
"When there are sufficient training data, there is no doubt that the semi-supervised embedding approaches such as HyperNet BID40 , H-feature detector BID33 , and HyperScore (Nguyen et al., 2017) can achieve better performance than all unsupervised methods.",
"However, in many domains such as scientific literature, there are often not many annotated hypernymy pairs (e.g. Medical dataset ).Since",
"we are comparing an unsupervised method with semi-supervised methods, it is hard to fairly control the experimental setups and tune the hyper-parameters. In TAB10",
", we only show several performances which are copied from the original paper when training data are limited 3 . As we can",
"see, the performance from DIVE is roughly comparable to the previous semi-supervised approaches trained on small amount of hypernym pairs. This demonstrates",
"the robustness of our approach and the difficulty of generalizing hypernymy annotations with semi-supervised approaches."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.045454539358615875,
0.08695651590824127,
0.052631575614213943,
0.3333333432674408,
0.2432432323694229,
0.2978723347187042,
0.1463414579629898,
0.1249999925494194,
0.19607841968536377,
0.17499999701976776,
0.2222222238779068,
0.04878048226237297,
0,
0.10526315122842789,
0.12121211737394333,
0.1428571343421936,
0.1666666567325592,
0.2222222238779068,
0.1249999925494194,
0.11320754140615463,
0.11764705181121826,
0.23255813121795654,
0.2631579041481018,
0.1666666567325592,
0.1249999925494194,
0.1428571343421936,
0.06666666269302368,
0.23728813230991364,
0.05405404791235924,
0.1111111044883728,
0.1599999964237213,
0.1111111044883728,
0.12765957415103912,
0.05714285373687744,
0.09999999403953552,
0.08695651590824127,
0.11538460850715637,
0.14084506034851074,
0.12903225421905518,
0.12903225421905518,
0,
0.16326530277729034,
0.07999999821186066,
0.06451612710952759,
0.13333332538604736,
0,
0.10810810327529907,
0.08888888359069824,
0.13636362552642822,
0.11764705181121826,
0.3414634168148041,
0.17142856121063232,
0.05882352590560913,
0.21276594698429108,
0.12765957415103912,
0.11428570747375488,
0.05128204822540283,
0.0624999962747097,
0.12244897335767746,
0.1621621549129486,
0.11764705181121826,
0.05882352590560913,
0.1395348757505417,
0.06896551698446274,
0,
0.260869562625885,
0.2028985470533371,
0.21621620655059814,
0.20000000298023224,
0.11428570747375488,
0.17241378128528595,
0.09302324801683426,
0.13333332538604736,
0.0952380895614624,
0.09302324801683426,
0.17142856121063232
] | SywMS6ZfM | true | [
"We propose a novel unsupervised word embedding which preserves the inclusion property in the context distribution and achieve state-of-the-art results on unsupervised hypernymy detection"
] |
[
"Continual learning aims to learn new tasks without forgetting previously learned ones.",
"This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity.",
"Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \\textit{importance}.",
"In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB) where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks.",
"Uncertainty is a natural way to identify \\textit{what to remember} and \\textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting.",
"We also show a variant of our model, which uses uncertainty for weight pruning \n",
"and retains task performance after pruning by saving binary masks per tasks.",
"We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches.",
"Additionally, we show that our model does not necessarily need task information at test time, i.e.~it does not presume knowledge of which task a sample belongs to.",
"Humans can easily accumulate and maintain knowledge gained from previously observed tasks, and continuously learn to solve new problems or tasks.",
"Artificial learning systems typically forget prior tasks when they cannot access all training data at once but are presented with task data in sequence.",
"Overcoming these challenges is the focus of continual learning, sometimes also referred to as lifelong learning or sequential learning.",
"Catastrophic forgetting (McCloskey & Cohen, 1989; McClelland et al., 1995) refers to the significant drop in the performance of a learner when switching from a trained task to a new one.",
"This phenomenon occurs because trained parameters on the initial task change in favor of learning new objectives.",
"This is the reason that naive finetuning intuitively suffers from catastrophic forgetting.",
"Given a network of limited capacity, one way to address this problem is to identify the importance of each parameter and penalize further changes to those parameters that were deemed to be important for the previous tasks Aljundi et al., 2018a; Zenke et al., 2017 ).",
"An alternative is to freeze the most important parameters and allow future tasks to only adapt the remaining parameters to new tasks (Mallya & Lazebnik, 2018) .",
"Such models rely on the explicit parametrization of importance.",
"We propose here implicit uncertainty-guided importance representation.",
"Bayesian approaches to neural networks (MacKay, 1992b) can potentially avoid some of the pitfalls of explicit parameterization of importance in regular neural networks.",
"Bayesian techniques, naturally account for uncertainty in parameters estimates.",
"These networks represent each parameter with a distribution defined by a mean and variance over possible values drawn from a shared latent probability distribution (Blundell et al., 2015) .",
"Variational inference can approximate posterior distributions using Monte Carlo sampling for gradient estimation.",
"These networks act like ensemble methods in that they reduce the prediction variance but only use twice the number of parameters present in a regular neural network.",
"We propose the use of the predicted mean and variance of the latent distributions to characterize the importance of each parameter.",
"We perform continual learning with Bayesian neural networks by controlling the learning rate of each parameter as a function of its uncertainty.",
"Figure 1 illustrates how posterior distributions evolve for certain and uncertain weight",
"Illustration of evolution of weight distributions through learning two tasks.",
"(a) circles represent weight parameters, initialized by distributions with mean and variance values randomly sampled from Ɲ(0,0.1).",
"As an example we show five color-coded and plot their distributions.",
"(b) Shows posterior distribution after learning Task 1.",
"While W1 and W2 exhibit lower uncertainties (more contributions in learning Task 1), W3, W4, and W5 appear to have larger uncertainties, with the highest STD in W5, making them available to learn more tasks.",
"(c) Task 2 is learned using higher learning rates for previously uncertain parameters (W3 and W4, W5) while learning rates for W1 and W2 are moderated according to their predicted low uncertainty after finishing task 1.",
"Figure 1: Illustration of the evolution of weight distributions -uncertain weights adapt more quicklywhen learning two tasks using UCB.",
"(a) weight parameter initialized by distributions initialized with mean and variance values randomly sampled from N (0, 0.1).",
"(b) posterior distribution after learning task 1; while θ 1 and θ 2 exhibit lower uncertainties after learning the first task, θ 3 , θ 4 , and θ 5 have larger uncertainties, making them available to learn more tasks.",
"(c) a second task is learned using higher learning rates for previously uncertain parameters (θ 1 , θ 2 , θ 3 , and θ 4 ) while learning rates for θ 1 and θ 2 are reduced.",
"Size of the arrows indicate the magnitude of the change of the distribution mean upon gradient update.",
"distributions while learning two consecutive tasks.",
"Intuitively, the more uncertain a parameter is, the more learnable it can be and therefore, larger gradient steps can be taken for it to learn the current task.",
"As a hard version of this regularization technique, we also show that pruning, i.e., preventing the most important model parameters from any change and learning new tasks with the remaining parameters, can be also integrated into UCB.",
"We refer to this method as UCB-P.",
"In this work, we propose a continual learning formulation with Bayesian neural networks, called UCB, that uses uncertainty predictions to perform continual learning: important parameters can be either fully preserved through a saved binary mask (UCB-P) or allowed to change conditioned on their uncertainty for learning new tasks (UCB).",
"We demonstrated how the probabilistic uncertainty distributions per weight are helpful to continually learning short and long sequences of benchmark datasets compared against baselines and prior work.",
"We show that UCB performs superior or on par with state-of-the-art models such as HAT across all the experiments.",
"Choosing between the two UCB variants depends on the application scenario: While UCB-P enforces no forgetting after the initial pruning stage by saving a small binary mask per task, UCB does not require additional memory and allows for more learning flexibility in the network by allowing small forgetting to occur.",
"UCB can also be used in a single head setting where the right subset of classes belonging to the task is not known during inference leading to a competitive model that can be deployed where it is not possible to distinguish tasks in a continuous stream of the data at test time.",
"UCB can also be deployed in a single head scenario and where tasks information is not available at test time.",
"A APPENDIX A.1",
"DATASETS Table 4 shows a summary of the datasets utilized in our work along with their size and number of classes.",
"In all the experiments we resized images to 32 × 32 × 3 if necessary.",
"For datasets with monochromatic images, we replicate the image across all RGB channels.",
"(LeCun et al., 1998) 10 60,000 10,000 CIFAR100 (Krizhevsky & Hinton, 2009 ) 100 50,000 10,000 NotMNIST (Bulatov, 2011) 10 16,853 1,873 SVHN (Netzer et al., 2011) 10 73,257 26,032 CIFAR10 (Krizhevsky & Hinton, 2009) 10 39,209 12,630 TrafficSigns (Stallkamp et al., 2011) 43 39,209 12,630 FashionMNIST (Xiao et al., 2017) 10 60,000 10,000"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0,
0.32258063554763794,
0.20512820780277252,
0.060606054961681366,
0.0714285671710968,
0,
0.09756097197532654,
0.04999999701976776,
0.05882352590560913,
0.05405404791235924,
0.1875,
0.0476190447807312,
0.06451612710952759,
0,
0.1111111044883728,
0.05714285373687744,
0.08695651590824127,
0.0952380895614624,
0.3030303120613098,
0.17391303181648254,
0.04999999701976776,
0.14814814925193787,
0.10256409645080566,
0.13333332538604736,
0.29411762952804565,
0.07692307233810425,
0.08695651590824127,
0,
0,
0.09090908616781235,
0.08695651590824127,
0.17391303181648254,
0.1249999925494194,
0,
0.08695651590824127,
0.14999999105930328,
0,
0.09999999403953552,
0.1111111044883728,
0.039215683937072754,
0.0952380895614624,
0.20689654350280762,
0.09999999403953552,
0,
0.10526315122842789,
0.037735845893621445,
0,
0.11764705926179886,
0,
0.07407406717538834,
0,
0
] | HklUCCVKDB | true | [
"A regularization-based approach for continual learning using Bayesian neural networks to predict parameters' importance"
] |
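Across every record visible in this preview, the position of the single 1 in the labels column coincides with the position of the largest value in the scores column (e.g. for HklUCCVKDB the 1 at index 2 lines up with the 0.3226 maximum). This suggests the label marks the source sentence with the best ROUGE against the target, though that reading is an inference from the preview rather than documented behaviour. A small consistency check along those lines, written against record dicts shaped like the sketch earlier:

```python
from typing import Sequence

def label_matches_argmax(source_labels: Sequence[int], rouge_scores: Sequence[float]) -> bool:
    """True when the sentence flagged 1 is also the highest-scoring sentence.

    Holds for the records shown in this preview; whether it holds for the full
    dataset, or how ties are broken, is an assumption.
    """
    if len(source_labels) != len(rouge_scores) or 1 not in source_labels:
        return False
    flagged = source_labels.index(1)
    best = max(range(len(rouge_scores)), key=lambda i: rouge_scores[i])
    return flagged == best

# Truncated illustration using the first entries of the HklUCCVKDB record above.
labels = [0, 0, 1, 0, 0]
scores = [0.1538, 0.0, 0.3226, 0.2051, 0.0606]
assert label_matches_argmax(labels, scores)
```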
[
"Humans have a natural curiosity to imagine what it feels like to exist as someone or something else.",
"This curiosity becomes even stronger for the pets we care for.",
"Humans cannot truly know what it is like to be our pets, but we can deepen our understanding of what it is like to perceive and explore the world like them.",
"We investigate how wearables can offer people animal perspective-taking opportunities to experience the world through animal senses that differ from those biologically natural to us.",
"To assess the potential of wearables in animal perspective-taking, we developed a sensory-augmenting wearable that gives wearers cat-like whiskers.",
"We then created a maze exploration experience where blindfolded participants utilized the whiskers to navigate the maze.",
"We draw on animal behavioral research to evaluate how the whisker activity supported authentically cat-like experiences, and discuss the implications of this work for future learning experiences.",
"Posthumanist philosophies characterize the human body as \"the original prosthesis we all learn to manipulate\" [22] , and suggest the idea that augmenting or substituting aspects of this prosthesis is the normal progression for humanity.",
"Technology allows humans to enhance senses that may be impaired, and to extend our bodies with added senses beyond what we would otherwise be biologically limited by-giving humans the ability to improve their quality of life [17, 18] .",
"\"In short, we are cyborgs\" [21] .",
"Scholars have investigated how immersive virtual environments can enhance social perspective-taking [20, 47] , and computer-augmented, embodied perspective-taking has been shown to encourage a productive \"learning stance\" [33] and to enhance both conceptual learning and engagement [15, 34] .",
"Some environmental education scholars [40, 54] and indigenous educational scholars [5] have suggested that building relational ties to non-human actors in nature may contribute to environmental and biology education.",
"In a few cases, educators have asked learners to take on the embodied experiences of insects such as bees [15] and animals such as polar bears [37] .",
"Danish found that children enacting a computer-augmented pollination activity embodying the roles of bees helped them learn nuances of individual and aggregate bee behavior; Lyons and colleagues found that wearable polar bear paws that simulated the feeling of traversing melting polar ice enabled people to show an empathetic understanding of the impacts of climate change.",
"For many people, the most common experience they will have with entities who have different sensory capabilities is through everyday interaction with pets or neighborhood animals.",
"For example, in noticing that our house cat is navigating a dark space where we would likely bump into something, we may recognize the limits of our own senses and consider how our pets' experiences are both similar to and different from our own.",
"Our work explores how embodied technology can mediate human experiences in ways that offer people opportunities to explore and relate to the animal-like behaviors of their pets.",
"We present the design of a cat-inspired whiskers wearable, the Whisker Beard, and an embodied navigation activity that provided a firsthand perspective-taking experience for participants curious about what it might be like to have whiskers.",
"In addition, we discuss our philosophy of what it means for an animal-imitating experience to be authentic and we present the evaluation framework we used to understand how our whiskers activity encouraged participants to behave like cats.",
"Our study addresses two research questions:",
"In what ways can we create technologies and environments that remediate human experiences to be like those of non-humans?",
"RQ2: What are humans' impressions of these technologically remediated experiences?",
"In this paper we describe (1) the design of a sensory augmentation whiskers wearable; (2) the creation of a maze exploration activity for testing the experience of wearing whiskers; (3) our analysis methods for evaluating the authenticity of an animallike experience; and (4) outline opportunities to extend this work, as well as discuss the implications of it.",
"The results of the Whisker Beard and maze activity show examples of participants exhibiting behaviors and strategies similar to those that animals perform when searching for resources in unfamiliar environments.",
"We separate our results into discussions about their physical behaviors and strategies, as well as their impressions of the experience.",
"As depicted in Figure 5 and Table 1 , as participants explored the maze, they alternated between periods of explorative and exploitative behavior as they switched between using their whiskers and using their hands.",
"Participants spent, on average, a longer amount of time exploring and moving through than maze than they spent hand swiping to look for mice.",
"These results are in line with animal foraging behaviors [9, 41] .",
"Benichou et al. says that animals searching for resources switch between periods of motion and periods of searching.",
"In addition, their work shows that intervals of exploration tend to be longer than intervals of exploitation.",
"This aligns with the amount of time our participants dedicated to these behaviors [9] .",
"While we cannot claim that participants would not have enacted similar exploration-exploitation behaviors without whiskers, we can say that the behaviors that they enacted with whiskers were in line with foraging behaviors.",
"Interestingly, several of the participants made use of the whiskers in ways that strikingly resembled cat behavior, as depicted in Figure 8 .",
"As participants moved down long passages, some used their whiskers to gauge the width of the passage by moving back and forth brushing each side of their whiskers on the opposing walls.",
"This demonstrates that participants used the whiskers to enhance their spatial awareness, one of the supposed evolutionary factors behind the presence of whiskers [11] .",
"We noticed that when participants used techniques B and C, they mimicked the behavior of cats who rub their olfactory face glands on objects to mark their scent, as well to get a sense for the physical properties of a specific object [48] .",
"While this behavior in cats is not necessarily used for navigation purposes, it is used for gauging the size and shape of an object.",
"Participants did this in order to look for hidden passageways and moveable obstacles.",
"Our observations of participants' geocentric and egocentric behaviors provided us with a fuller picture of how participants used the whiskers in tandem with other strategies during the activity.",
"Participants relied on the vibrotactile feedback from the Whisker Beard in determining their path of movement through the maze.",
"In addition to the vibrotactile feedback, we found that participants also relied on the sounds the whiskers made as they brushed against the maze's cardboard surfaces.",
"We validated this observation through think-aloud commentary that participants provided throughout the maze, and through post-maze group discussion.",
"The fact that participants relied on additional tactics beyond the vibrations is not an inauthentic outcome, but rather a reasonable one.",
"Participants' use of different egocentric and geocentric tactics is naturally aligned with how animals navigate the world-getting the most information from their environment by whatever means are accessible to them [41] .",
"The blindfolded maze procedure afforded participants the ability to experience the Whisker Beard in an unfamiliar environment.",
"As expected, due to the unfamiliarity of the environment, participants relied on more geocentric strategies.",
"These results are in line with animal navigation research which suggests that egocentric strategies are too difficult to use when exploring new terrain, and therefore animals rely more heavily on geocentric strategies to gather real-time physical feedback [41] .",
"In time, participants who revisited areas of the maze began to recognize their surroundings, which led them to use internal recall from their memory to identify their approximate position; however, because they were blindfolded they still had to rely on geocentric strategies as well.",
"Unsurprisingly, participants told us that being blindfolded and losing their sense of sight was disorienting for them; sight is one of humans', and cats', dominant senses for obtaining information about an environment [50] .",
"Participants described the open-space areas of the maze as \"disorienting\" and tended to try to find a wall as quickly as they could to reorient themselves.",
"The level of consistent wall-hugging that participants exhibited is in line with experiments where increased levels of anxiety correlated to higher levels of thigmotaxis.",
"Usually, animals' tendency to hug the wall would decrease as an experiment went on, except in circumstances where animals do not have enough time to fully process their environment [57] .",
"In our experiment, blindfolding the participants made it challenging for them to produce an accurate internal map of the space, leading them to continuously avoid open areas and rely on vibrotactile and audio feedback from the walls during navigation.",
"The participants' reflections during the maze and post-maze show promising beginnings to meaningful discussions of animal empathy, as many drew connections to prior experiences of pets who were blind, deaf, or had their whiskers cut off and discussed how disorienting and difficult it would be for them to navigate with a sense removed.",
"Participants described the whiskers as feeling like an extra sense, one that they were able to adapt to even in a short timeframe.",
"Although losing their sight was disorienting, they were able to utilize the whiskers as a substitute for being able to \"see.\"",
"The combination of the Whisker Beard and maze activity suggests that through disorienting them and having them behave like a cat, they were able to consider what it would be like to be a cat relying on its whiskers every day, and how challenging it would be for a cat who has no whiskers at all.",
"Wearable technologies and embodied learning experiences free humans from the confines of their biological limitations.",
"This enables researchers to provide low-cost opportunities that offer firsthand perspective-taking experiences for people, allowing people to experiment with new sensory interactions, including ones that non-human animals have access to.",
"We presented the design of the Whisker Beard, which does just that-provides humans with the opportunity to experience what it would be like to have a new sense, in this case, what it would be like to have whiskers.",
"We introduced concepts from animal behavioral science research and described how we applied it to evaluating the experiences of participants' while immersed in an animal perspective-taking activity.",
"Our observations of participants' enactment of animal-like behaviors, as well as their impressions about the experience suggest that they were immersed in the sensory experience of being a cat with whiskers.",
"We are actively iterating on the designs of our hardware to offer more customizability.",
"This will enable participants to design their own sensory augmenting technologies where they can explore their own curiosities about their pets' other senses.",
"In near-future experiments we will iterate on the design of the wearable activity to offer a more immersive experience where participants can continue to enact animal-like behaviors.",
"Our next steps will then be to investigate how participants developing increased awareness of animals' sensory experiences can support their enactment of empathetically-oriented design activities focused on improving animals' quality of life."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2702702581882477,
0.06666666269302368,
0.27272728085517883,
0.09302324801683426,
0.20512819290161133,
0.11428570747375488,
0.1304347813129425,
0.1538461446762085,
0.11320754140615463,
0,
0.11320754140615463,
0.09090908616781235,
0.13333332538604736,
0.0937499925494194,
0.13636362552642822,
0.10344827175140381,
0.17391303181648254,
0.307692289352417,
0.23076923191547394,
0,
0.20512819290161133,
0.06666666269302368,
0.1846153736114502,
0.0833333283662796,
0.052631575614213943,
0.12765957415103912,
0.09302324801683426,
0,
0.05714285373687744,
0.11428570747375488,
0.1764705777168274,
0.08888888359069824,
0.10256409645080566,
0.12765957415103912,
0.19999998807907104,
0.06896550953388214,
0.1463414579629898,
0.060606054961681366,
0.08888888359069824,
0.05405404791235924,
0.09302324801683426,
0,
0.04878048226237297,
0.11999999731779099,
0.0555555522441864,
0.11764705181121826,
0.0363636314868927,
0.06896550953388214,
0.08163265138864517,
0.09756097197532654,
0.1463414579629898,
0.08163265138864517,
0.1111111044883728,
0.11764705181121826,
0.1428571343421936,
0.09999999403953552,
0.1904761791229248,
0.05714285373687744,
0.21276594698429108,
0.2857142686843872,
0.17391303181648254,
0.1304347813129425,
0.11764705181121826,
0.19999998807907104,
0.13333332538604736,
0.12244897335767746
] | 1uOTdL2H9i | true | [
"This paper explores using wearable sensory augmenting technology to facilitate first-hand perspective-taking of what it is like to have cat-like whiskers."
] |
[
"Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data.",
"Proving tight generalization error bounds is a central question in statistical learning theory. ",
"In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. ",
"We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. ",
"The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. ",
"Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD).",
"Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). ",
"Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al. (2017a).",
"We also study the setting where the total loss is the sum of a bounded loss and an additiona l`2 regularization term.",
"We obtain new generalization bounds for the continuous Langevin dynamic in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time.",
"Our new bounds are more desirable when the noise level of the processis not very small, and do not become vacuous even when T tends to infinity.",
"Non-convex stochastic optimization is the major workhorse of modern machine learning.",
"For instance, the standard supervised learning on a model class parametrized by R d can be formulated as the following optimization problem:",
"where w denotes the model parameter, D is an unknown data distribution over the instance space Z, and F : R d × Z → R is a given objective function which may be non-convex.",
"A learning algorithm takes as input a sequence S = (z 1 , z 2 , . . . , z n ) of n data points sampled i.i.d. from D, and outputs a (possibly randomized) parameter configurationŵ ∈ R d .",
"A fundamental problem in learning theory is to understand the generalization performance of learning algorithms-is the algorithm guaranteed to output a model that generalizes well to the data distribution D?",
"Specifically, we aim to prove upper bounds on the generalization error err gen (S) = L(ŵ, D) − L(ŵ, S), where L(ŵ, D) = Ez∼D[L(ŵ, z)] and L(ŵ, S) = 1 n n i=1 L(ŵ, z i ) are the population and empirical losses, respectively.",
"We note that the loss function L (e.g., the 0/1 loss) could be different from the objective function F (e.g., the cross-entropy loss) used in the training process (which serves as a surrogate for the loss L).",
"Classical learning theory relates the generalization error to various complexity measures (e.g., the VC-dimension and Rademacher complexity) of the model class.",
"Directly applying these classical complexity measures, however, often fails to explain the recent success of over-parametrized neural networks, where the model complexity significantly exceeds the amount of available training data (see e.g., Zhang et al. (2017a) ).",
"By incorporating certain data-dependent quantities such as margin and compressibility into the classical framework, some recent work (e.g., Bartlett et al. (2017) ; Arora et al. (2018) ; Wei & Ma (2019) ) obtains more meaningful generalization bounds in the deep learning context.",
"An alternative approach to generalization is to prove algorithm-dependent bounds.",
"One celebrated example along this line is the algorithmic stability framework initiated by Bousquet & Elisseeff (2002) .",
"Roughly speaking, the generalization error can be bounded by the stability of the algorithm (see Section 2 for the details).",
"Using this framework, Hardt et al. (2016) study the stability (hence the generalization) of stochastic gradient descent (SGD) for both convex and non-convex functions.",
"Their work motivates recent study of the generalization performance of several other gradient-based optimization methods: Kuzborskij & Lampert (2018) ; London (2016); Chaudhari et al. (2017) ; Raginsky et al. (2017) ; Mou et al. (2018) ; Pensia et al. (2018) ; Chen et al. (2018) .",
"In this paper, we study the algorithmic stability and generalization performance of various iterative gradient-based method, with certain continuous noise injected in each iteration, in a non-convex setting.",
"As a concrete example, we consider the stochastic gradient Langevin dynamics (SGLD) (see Raginsky et al. (2017) ; Mou et al. (2018) ; Pensia et al. (2018) ).",
"Viewed as a variant of SGD, SGLD adds an isotropic Gaussian noise at every update step:",
"where g t (W t−1 ) denotes either the full gradient or the gradient over a mini-batch sampled from training dataset.",
"We also study a continuous version of (1), which is the dynamic defined by the following stochastic differential equation (SDE):",
"where B t is the standard Brownian motion."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10256409645080566,
0.1818181723356247,
0.1463414579629898,
0.24242423474788666,
0.11428570747375488,
0.2916666567325592,
0.04878048226237297,
0.0416666604578495,
0.15789473056793213,
0.1904761791229248,
0.1395348757505417,
0.06666666269302368,
0.04999999329447746,
0.039215680211782455,
0.11320754140615463,
0.09090908616781235,
0.14814814925193787,
0.08163265138864517,
0.19999998807907104,
0.03703703358769417,
0.20000000298023224,
0.1428571343421936,
0,
0.1666666567325592,
0.1428571343421936,
0.0833333283662796,
0.1304347813129425,
0.09756097197532654,
0.11428570747375488,
0.052631575614213943,
0.10526315122842789,
0
] | SkxxtgHKPS | true | [
"We give some generalization error bounds of noisy gradient methods such as SGLD, Langevin dynamics, noisy momentum and so forth."
] |
[
"In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models.",
"In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models.",
"In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining the proposed intrinsic reward generation method with the proximal policy optimization (PPO) algorithm.",
"Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model.",
"Reinforcement learning (RL) with sparse reward is an active research area (Andrychowicz et al., 2017; de Abril & Kanai, 2018; Kim et al., 2018; Tang et al., 2017 ).",
"In typical model-free RL, an agent learns a policy to maximize the expected cumulative reward under the circumstance that the agent receives a non-zero reward from the environment for each action of the agent.",
"On the contrary, in sparse reward RL, the environment does not return a non-zero reward for every action of the agent but returns a non-zero reward only when certain conditions are met.",
"Such situations are encountered in many action control problems (Andrychowicz et al., 2017; .",
"As in conventional RL, exploration is important at the early stage of learning in sparse reward RL, whereas the balance between exploration and exploitation is required on the later stage.",
"Methods such as the -greedy strategy (Mnih et al., 2015; Van Hasselt et al., 2016) and the control of policy gradient with Gaussian random noise Schulman et al., 2015a) have been applied to various tasks for exploration.",
"However, these methods have been revealed to be insufficient for successful learning when reward is sparse (Achiam & Sastry, 2017) .",
"In order to overcome such difficulty, intrinsically motivated RL has been studied to stimulate better exploration by generating intrinsic reward for each action by the agent itself, even when reward is sparse.",
"Recently, many intrinsically-motivated RL algorithms have been devised to deal with the sparsity of reward, e.g., based on the notion of curiosity Pathak et al., 2017) and surprise (Achiam & Sastry, 2017) .",
"It is shown that these algorithms are successful and outperform the previous approaches.",
"In essence, these algorithms use a single estimation model for the next state or the environment dynamics to generate intrinsic reward.",
"In this paper, in order to further improve the performance of sparse reward model-free RL, we propose a new method to generate intrinsic reward based on an ensemble of estimation models for the environment dynamics.",
"The rationale behind our approach is that by using a mixture of several distributions, we can increase degrees of freedom for modeling the unknown underlying model dynamics and designing a better reward from the ensemble of estimation models.",
"Numerical results show that the proposed model-ensemble-based intrinsic reward generation method yields improved performance as compared to existing reward generation methods for continuous control with sparse reward setting.",
"In this paper, we have proposed a new intrinsic reward generation method based on an ensemble of dynamics models for sparse-reward reinforcement learning.",
"In the proposed method, the mixture of multiple dynamics models is used to better approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the intrinsic reward computed from each dynamics model to the mixture to capture the most relevant surprise.",
"The proposed intrinsic reward generation method was combined with PPO to construct a working algorithm.",
"Ablation study has been performed to investigate the impact of the hyperparameters associated with the proposed ensemblebased intrinsic reward generation.",
"Numerical results show that the proposed model-ensemble-based intrinsic reward generation method outperforms major existing intrinsic reward generation methods in the considered sparse environments.",
"A PROOFS Proposition 1.",
"Let η(π) be the actual expected discounted sum of extrinsic rewards defined in (8).",
"Then, the following inequality holds:",
"where c is a positive constant.",
"Proof.",
"The inequality",
"(a) is trivial from the definition of π * , that is, π * is an optimal policy maximizing η(π).",
"The inequality",
"(b) holds since",
"Proposition 2.",
"Let P φ i (·|s, a), i = 1, . . . , K be the ensemble of model distributions, and P (·|s, a) be an arbitrary true transition probability distribution.",
"Then, the minimum of average KLD between P (·|s, a) and the mixture model P = i q i P φ i (·|s, a) over the mixture weights {q 1 , · · · , q K |q i ≥ 0, i q i = 1} is upper bounded by the minimum of average KLD between P and P φ i over {i}: i.e.,",
"Proof.",
"Here, (26) is valid due to the convexity of the KL divergence in terms of the second argument for a fixed first argument.",
"(27) is valid due to the linearity of expectation.",
"(28) is valid since the minimum in the right-hand side of (27) is achieved when we assign all the mass to q i that has the minimum value of E (s,a)∼ π * D KL P ||P φ i |(s, a) .",
"(Note that the optimal {q i } in (27) is not the same as the optimal {q i } achieving the minimum in (25).",
")",
"Note that each step in the proof is tight except (26) in which the convexity of the KL divergence in terms of the second argument is used.",
"This part involves the function f",
"(x) = − log x for 0 < x ≤ 1 since D KL (p 1 ||p 2 ) = p 1",
"(y) log p1(y",
") p2(y",
") dy, but the convexity of f (",
"x) = − log x for 0 < x ≤ 1 is not so severe if x is not so close to zero."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4285714328289032,
0.5833333134651184,
0.2666666507720947,
0.1818181723356247,
0.08888888359069824,
0.17391303181648254,
0.1304347813129425,
0,
0.1860465109348297,
0.15094339847564697,
0.14999999105930328,
0.20408162474632263,
0.15686273574829102,
0.12121211737394333,
0.29999998211860657,
0.3529411852359772,
0.25925925374031067,
0.2222222238779068,
0.3720930218696594,
0.5384615063667297,
0.17142856121063232,
0.2631579041481018,
0.1538461446762085,
0,
0.11764705181121826,
0.07999999821186066,
0.07692307233810425,
0.1621621549129486,
0,
0.13333332538604736,
0.14814814925193787,
0.20512819290161133,
0.27586206793785095,
0.18518517911434174,
0.2222222238779068,
0.19999998807907104,
0.07692307233810425,
0,
0,
0.1428571343421936,
0.10526315122842789
] | SyxJU64twr | true | [
"For sparse-reward reinforcement learning, the ensemble of multiple dynamics models is used to generate intrinsic reward designed as the minimum of the surprise."
] |
[
"Given the fast development of analysis techniques for NLP and speech\n",
"processing systems, few systematic studies have been conducted to\n",
"compare the strengths and weaknesses of each method.",
" As a step in\n",
"this direction we study the case of representations of phonology in\n",
"neural network models of spoken language.",
"We use two commonly applied\n",
"analytical techniques, diagnostic classifiers and representational\n",
"similarity analysis, to quantify to what extent neural activation\n",
"patterns encode phonemes and phoneme sequences.",
"We manipulate two\n",
"factors that can affect the outcome of analysis.",
"First, we investigate\n",
"the role of learning by comparing neural activations extracted from\n",
"trained versus randomly-initialized models.",
"Second, we examine the\n",
"temporal scope of the activations by probing both local activations\n",
"corresponding to a few milliseconds of the speech signal, and global\n",
"activations pooled over the whole utterance.",
"We conclude that\n",
"reporting analysis results with randomly initialized models is\n",
"crucial, and that global-scope methods tend to yield more consistent\n",
"and interpretable results and we recommend their use as a complement\n",
"to local-scope diagnostic methods.",
"As end-to-end architectures based on neural networks became the tool of choice for processing speech and language, there has been increased interest in techniques for analyzing and interpreting the representations emerging in these models.",
"A large array of analytical techniques have been proposed and applied to diverse tasks and architectures .",
"Given the fast development of analysis techniques for NLP and speech processing systems, relatively few systematic studies have been conducted to compare the strengths and weaknesses of each methodology and to assess the reliability and explanatory power of their outcomes in controlled settings.",
"This paper reports a step in this direction: as a case study, we examine the representation of phonology in neural network models of spoken language.",
"We choose three different models that process speech signal as input, and analyze their learned neural representations.",
"We use two commonly applied analytical techniques:",
"(i) diagnostic models and",
"(ii) representational similarity analysis to quantify to what extent neural activation patterns encode phonemes and phoneme sequences.",
"In our experiments, we manipulate two important factors that can affect the outcome of analysis.",
"One pitfall not always successfully avoided in work on neural representation analysis is the role of learning.",
"Previous work has shown that sometimes non-trivial representations can be found in the activation patterns of randomly initialized, untrained neural networks (Zhang and Bowman, 2018; ).",
"Here we investigate the representations of phonology in neural models of spoken language in light of this fact, as extant studies have not properly controlled for role of learning in these representations.",
"The second manipulated factor in our experiments is the scope of the extracted neural activations.",
"We control for the temporal scope, probing both local activations corresponding to a few milliseconds of the speech signal, as well as global activations pooled over the whole utterance.",
"When applied to global-scope representations, both the methods detect a robust difference between the trained and randomly initialized target models.",
"However we find that in our setting, RSA applied to local representations shows low correlations between phonemes and neural activation patterns for both trained and randomly initialized target models, and for one of the target models the local diagnostic classifier only shows a minor difference in the decodability of phonemes from randomly initialized versus trained network.",
"This highlights the importance of reporting analy-sis results with randomly initialized models as a baseline.",
"We carried out a systematic study of analysis methods for neural models of spoken language and offered some suggestions on best practices in this endeavor.",
"Nevertheless our work is only a first step, and several limitations remain.",
"The main challenge is that it is often difficult to completely control for the many factors of variation present in the target models, due to the fact that a particular objective function, or even a dataset, may require relatively important architectural modifications.",
"In future we plan to sample target models with a larger number of plausible combinations of factors.",
"Likewise, a choice of an analytical method may often entail changes in other aspects of the analysis: for example unlike a global diagnostic classifier, global RSA captures the sequential order of phonemes.",
"In future we hope to further disentangle these differences."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0,
0.0833333283662796,
0.09999999403953552,
0.38461539149284363,
0.5454545617103577,
0.0952380895614624,
0.09090908616781235,
0.0833333283662796,
0,
0.10526315867900848,
0.0833333283662796,
0,
0.1538461446762085,
0.09999999403953552,
0,
0.07999999821186066,
0.07407406717538834,
0,
0.10526315867900848,
0.1666666567325592,
0,
0,
0,
0.260869562625885,
0.19354838132858276,
0.11764705181121826,
0.42105263471603394,
0.24242423474788666,
0.17391303181648254,
0.09999999403953552,
0.0624999962747097,
0.06451612710952759,
0.1818181723356247,
0.1904761791229248,
0.380952388048172,
0.19999998807907104,
0.09756097197532654,
0.05714285373687744,
0.20689654350280762,
0.19354838132858276,
0.4000000059604645,
0.0714285671710968,
0.07692307233810425,
0.1875,
0.1395348757505417,
0
] | HZEg3S-Gppr | true | [
"We study representations of phonology in neural network models of spoken language with several variants of analytical techniques."
] |
[
"Super Resolution (SR) is a fundamental and important low-level computer vision (CV) task.",
"Different from traditional SR models, this study concentrates on a specific but realistic SR issue: How can we obtain satisfied SR results from compressed JPG (C-JPG) image, which widely exists on the Internet.",
"In general, C-JPG can release storage space while keeping considerable quality in visual.",
"However, further image processing operations, e.g., SR, will suffer from enlarging inner artificial details and result in unacceptable outputs.",
"To address this problem, we propose a novel SR structure with two specifically designed components, as well as a cycle loss.",
"In short, there are mainly three contributions to this paper.",
"First, our research can generate high-qualified SR images for prevalent C-JPG images.",
"Second, we propose a functional sub-model to recover information for C-JPG images, instead of the perspective of noise elimination in traditional SR approaches.",
"Third, we further integrate cycle loss into SR solver to build a hybrid loss function for better SR generation.",
"Experiments show that our approach achieves outstanding performance among state-of-the-art methods.",
"With the marvelous achievement of deep learning (DL) in computer vision (CV), Super Resolution (SR) attracts much attention for its crucial value as the basis of many high-level CV tasks He et al., 2016) .",
"Deep learning Super Resolution (DL-SR) algorithms (Kim et al., 2016; Lim et al., 2017; Haris et al., 2018; Zhang et al., 2018c; b) strive for finding the complex nonlinear mapping between low resolution (LR) images and their high resolution (HR) counterparts.",
"However, the learned model only reflects the inverse of down-scaled mapping, which is used to obtain LR images from their HR fathers.",
"In other words, if there are some spots/stains in LR inputs, the SR model will treat them as inherent elements, and the corresponding SR outputs will enlarge these undesirable details.",
"In reality, on the Internet, JPG compression is probably the most commonly used pattern for storage space reduction.",
"That is to say, the LR image will be further processed into a compressed JPG (C-JPG) image.",
"The quality of C-JPG will greatly drop, and the compression may yield unpleasant artifacts, for example, the presence of obvious partition lines, which vastly deteriorates the overall visual feeling.",
"Hence, directly solving high-level CV tasks with these C-JPG images will lead to poor performance.",
"In this paper, we propose a lossless SR model to obtain images with satisfying quality from the low-quality inputs (C-JPG).",
"The deterioration in C-JPG makes the SR processing a huge challenge.",
"In this paper, we focus on the more realistic C-JPG SR problem.",
"Many SR methods regarding to the real-world condition images have been already developed, such as Zhang et al. (2018a) ; Yuan et al. (2018) .",
"Among them, some models regard the noise as a kernel estimating problem which can be solved by addictive Gaussian noises.",
"However, the distribution of most real images are inconsistent with the hypothetical Gaussian distribution.",
"Taking C-JPG images as a example, the image compression operation is related to decreasing information from original image instead of adding specific noises.",
"Other models learn the related information from irrelevant LR-HR images to obtain similar representations by unsupervised strategy.",
"All of them cannot solve the problem well.",
"In general, most LR images are produced through performing traditional interpolation method (mostly bicubic) on their HR fathers.",
"The SR training process should recover this down-scaled mapping in a reverse manner.",
"Referring to our C-JPG SR issue, when searching images from Google, a lot of unpleasant details are displayed, especially in the edges of objects.",
"However, the low quality of image makes former SR methods fail to generate applicable images.",
"As shown in Fig. 1 , it is shown that the SR generations of traditional bicubic interpolation, leading SR algorithm RCAN, and RCAN with pre-denoising input all demonstrate poor quality with the low quality C-JPG inputs.",
"Damaged grids are apparently enlarged by the approaches designed for traditional non-JPG datasets.",
"More specialized analysis can be found in the research of Köhler et al. (2017) .",
"Note that the image pairs with fixed down-scaled kernel have been successfully learnt by SR models, such as SRGAN (Ledig et al., 2017) , EDSR (Lim et al., 2017) , and RDN (Zhang et al., 2018c) .",
"In this study, we deliberately build a more complicated dataset by adding JPG format LR images to the training data.",
"To be specific, we have three kinds of training inputs: C-JPG LR, LR, and HR images.",
"The whole training process includes two separate functional components: missing detail recuperative part (JPG recovering stage) and SR mapping learning part (SR generating stage).",
"In order to remove ring, checkerboard effects, as well as other noise, the former half sub-model is trained with pre-processed C-JPG LR images as inputs, and original LR ones as the supervised information.",
"The function of this stage is to recover LR image from its compression counterpart.",
"Hence, the outputs (LR(C − JP G)) of the first part are greatly improved and free of partition lines phenomenon.",
"Based on these improved LR images, the latter sub-model continues to learn the mapping between (LR(C −JP G)) and HR.",
"Therefore, an integrated pipeline for SR representation between C-JPG and HR images is achieved through the jointly two sub-models.",
"In short, there are mainly three contributions in this study:",
"• Our research can be regarded as an universal SR method that generates SR images from C-JPG inputs, which is empirically proved to be more difficult than SR with non-JPG inputs.",
"• We regard this specific SR task as a recovering information process for the inputs, compared with the former denoising assumption including down-sampling and degradation parts.",
"In this viewpoint, a recovering model is firstly introduced to generate satisfied intermediates from C-JPG inputs.",
"• We further propose an integrated SR model training pipeline with two-level data, i.e., C-JPG LR and LR images, as well as a new integrated loss function.",
"The experimental results demonstrate our method can surpass traditional SR models.",
"In this paper, we propose a lossless SISR model for low-quality C-JPG images which is extensive used on the Internet.",
"Based on our redefined C-JPG SR pipeline, two functional stages are integrated to fulfill the SR task on C-JPG images.",
"In addition, we employ cycle loss to guarantee the consistency after above two stages.",
"The intensive experiments demonstrate that our model can learn capable representations of LR inputs for C-JPG SR task and outperform other cutting edges in SISR.",
"More exploration should be executed on other CV tasks with C-JPG images inputs as the future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.1904761791229248,
0,
0,
0.0624999962747097,
0,
0.1666666567325592,
0.22857142984867096,
0.06666666269302368,
0,
0.08695651590824127,
0.0833333283662796,
0.1764705777168274,
0.09999999403953552,
0.13333332538604736,
0.13793103396892548,
0.10256409645080566,
0.0714285671710968,
0.24242423474788666,
0.1666666567325592,
0.1599999964237213,
0.17142856121063232,
0.12121211737394333,
0.23999999463558197,
0.22857142984867096,
0.19999998807907104,
0.2857142686843872,
0.06451612710952759,
0.07692307233810425,
0.2222222238779068,
0.2857142686843872,
0.13636362552642822,
0.1538461446762085,
0.14814814925193787,
0.1395348757505417,
0.24242423474788666,
0.1428571343421936,
0.11428570747375488,
0.09756097197532654,
0.07407406717538834,
0.12903225421905518,
0.0624999962747097,
0.25,
0,
0.09756097197532654,
0.21052631735801697,
0,
0.10256409645080566,
0.0833333283662796,
0.1818181723356247,
0.2666666507720947,
0.07407406717538834,
0.10526315122842789,
0.13333332538604736
] | r1l0VCNKwB | true | [
"We solve the specific SR issue of low-quality JPG images by functional sub-models."
] |
[
"Keyword spotting—or wakeword detection—is an essential feature for hands-free operation of modern voice-controlled devices.",
"With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword.",
"In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection.",
"The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording.",
"Our method combines the generalization and interpretability of CTC-based keyword spotting with the user-adaptation and convenience of a conventional query-by-example system.",
"DONUT has low computational requirements and is well-suited for both learning and inference on embedded systems without requiring private user data to be uploaded to the cloud.",
"In this paper, we proposed DONUT, an efficient algorithm for online query-by-example keyword spotting using CTC.",
"The algorithm learns a list of hypothetical label sequences from the user's speech during enrollment and uses these hypotheses to score audios at test time.",
"We showed that the model is interpretable, and thus easy to inspect, debug, and tweak, yet at the same time has high accuracy.Because training a wakeword model amounts to a simple beam search, it is possible to train a model on the user's device without uploading a user's private voice data to the cloud.Our technique is in principle applicable to any domain in which a user would like to teach a system to recognize a sequence of events, such as a melody (a sequence of musical notes) or a gesture (a sequence of hand movements).",
"It would be interesting to see how well the proposed technique transfers to these other domains."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.13793103396892548,
0,
0.11764705181121826,
0.17777776718139648,
0.060606054961681366,
0.09999999403953552,
0.12903225421905518,
0.19999998807907104,
0.12048192322254181,
0.06666666269302368
] | SkMRja6ssQ | true | [
"We propose an interpretable model for detecting user-chosen wakewords that learns from the user's examples."
] |
[
"To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed.",
"One way of constructing such representations is by focusing on the important events in a sequence.",
"In this paper, we propose a model that learns both to discover such key events (or keyframes) as well as to represent the sequence in terms of them. ",
"We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes.",
"We propose a fully differentiable formulation for efficiently learning the keyframe placement.",
"We show that KeyIn finds informative keyframes in several datasets with diverse dynamics.",
"When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.",
"When thinking about the future, humans focus their thoughts on the important things that may happen (When will the plane depart?) without fretting about the minor details that fill each intervening moment (What is the last word I will say to the taxi driver?).",
"Because the vast majority of elements in a temporal sequence contains redundant information, a temporal abstraction can make reasoning and planning both easier and more efficient.",
"How can we build such an abstraction?",
"Consider the example of a lead animator who wants to show what happens in the next scene of a cartoon.",
"Before worrying about every low-level detail, the animator first sketches out the story by keyframing, drawing the moments in time when the important events occur.",
"The scene can then be easily finished by other animators who fill in the rest of the sequence from the story laid out by the keyframes.",
"In this paper, we argue that learning to discover such informative keyframes from raw sequences is an efficient and powerful way to learn to reason about the future.",
"Our goal is to learn such an abstraction for future image prediction.",
"In contrast, much of the work on future image prediction has focused on frame-by-frame synthesis (Oh et al. (2015) ; Finn et al. (2016) ).",
"This strategy puts an equal emphasis on each frame, irrespective of the redundant content it may contain or its usefulness for reasoning relative to the other predicted frames.",
"Other recent work has considered predictions that \"jump\" more than one step into the future, but these approaches either used fixed-offset jumps (Buesing et al., 2018) or used heuristics to select the predicted frames (Neitz et al., 2018; Jayaraman et al., 2019; Gregor et al., 2019) .",
"In this work, we propose a method that selects the keyframes that are most informative about the full sequence, so as to allow us to reason about the sequence holistically while only using a small subset of the frames.",
"We do so by ensuring that the full sequence can be recovered from the keyframes with an inpainting strategy, similar to how a supporting animator finishes the story keyframed by the lead.",
"One possible application for a model that discovers informative keyframes is in long-horizon planning.",
"Recently, predictive models have been employed for model-based planning and control ).",
"However, they reason about every single future time step, limiting their applicability to short horizon tasks.",
"In contrast, we show that a model that reasons about the future using a small set of informative keyframes enables visual predictive planning for horizons much greater than previously possible by using keyframes as subgoals in a hierarchical planning framework.",
"Figure 1: Keyframing the future.",
"Instead of predicting one frame after the other, we propose to represent the sequence with the keyframes that depict the interesting moments of the sequence.",
"The remaining frames can be inpainted given the keyframes.",
"To discover informative frames in raw sequence data, we formulate a hierarchical probabilistic model in which a sequence is represented by a subset of its frames (see Fig. 1 ).",
"In this two-stage model, a keyframing module represents the keyframes as well as their temporal placement with stochastic latent variables.",
"The images that occur at the timepoints between keyframes are then inferred by an inpainting module.",
"We parametrize this model with a neural network and formulate a variational lower bound on the sequence log-likelihood.",
"Optimizing the resulting objective leads to a model that discovers informative future keyframes that can be easily inpainted to predict the full future sequence.",
"Our contributions are as follows.",
"We formulate a hierarchical approach for the discovery of informative keyframes using joint keyframing and inpainting (KEYIN).",
"We propose a soft objective that allows us to train the model in a fully differentiable way.",
"We first analyze our model on a simple dataset with stochastic dynamics in a controlled setting and show that it can reliably recover the underlying keyframe structure on visual data.",
"We then show that our model discovers hierarchical temporal structure on more complex datasets of demonstrations: an egocentric gridworld environment and a simulated robotic pushing dataset, which is challenging for current approaches to visual planning.",
"We demonstrate that the hierarchy discovered by KEYIN is useful for planning, and that the resulting approach outperforms other proposed hierarchical and non-hierarchical planning schemes on the pushing task.",
"Specifically, we show that keyframes predicted by KEYIN can serve as useful subgoals that can be reached by a low-level planner, enabling long-horizon, hierarchical control.",
"We presented KEYIN, a method for representing a sequence by its informative keyframes by jointly keyframing and inpainting.",
"KEYIN first generates the keyframes of a sequence and their temporal placement and then produces the full sequence by inpainting between keyframes.",
"We showed that KEYIN discovers informative keyframes on several datasets with stochastic dynamics.",
"Furthermore, by using the keyframes for planning, we showed our method outperforms several other hierarchical planning schemes.",
"Our method opens several avenues for future work.",
"First, an improved training procedure that allows end-to-end training is desirable.",
"Second, more powerful hierarchical planning approaches can be designed using the keyframe representation to scale to long-term real-world tasks.",
"Finally, the proposed keyframing method can be applied to a variety of applications, including video summarization, video understanding, and multi-stage hierarchical video prediction."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29999998211860657,
0.2222222238779068,
0.4680851101875305,
0.3333333134651184,
0.25,
0.3030303120613098,
0.05714285373687744,
0.1071428507566452,
0.23255813121795654,
0,
0.21621620655059814,
0.0952380895614624,
0.1904761791229248,
0.3478260934352875,
0.1249999925494194,
0.0952380895614624,
0.1702127605676651,
0.1355932205915451,
0.3461538553237915,
0.2916666567325592,
0.3529411852359772,
0.0624999962747097,
0.1111111044883728,
0.29629629850387573,
0.1599999964237213,
0.3589743673801422,
0.20689654350280762,
0.35555556416511536,
0.1538461446762085,
0.1666666567325592,
0.3243243098258972,
0.44999998807907104,
0,
0.3243243098258972,
0.4444444477558136,
0.2916666567325592,
0.2181818187236786,
0.17777776718139648,
0.1428571343421936,
0.3888888955116272,
0.2631579041481018,
0.24242423474788666,
0.10810810327529907,
0.0714285671710968,
0.06666666269302368,
0.10526315122842789,
0.24390242993831635
] | BklfR3EYDH | true | [
"We propose a model that learns to discover informative frames in a future video sequence and represent the video via its keyframes."
] |
[
"This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.",
"Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks.",
"We further control characteristics of the representation by matching to a prior distribution adversarially.",
"Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks in with some standard architectures.",
"DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.",
"One core objective of deep learning is to discover useful representations, and the simple idea explored here is to train a representation-learning function, i.e. an encoder, to maximize the mutual information (MI) between its inputs and outputs.",
"MI is notoriously difficult to compute, particularly in continuous and high-dimensional settings.",
"Fortunately, recent advances enable effective computation of MI between high dimensional input/output pairs of deep neural networks (Belghazi et al., 2018) .",
"We leverage MI estimation for representation learning and show that, depending on the downstream task, maximizing MI between the complete input and the encoder output (i.e., global MI) is often insufficient for learning useful representations.",
"Rather, structure matters: maximizing the average MI between the representation and local regions of the input (e.g. patches rather than the complete image) can greatly improve the representation's quality for, e.g., classification tasks, while global MI plays a stronger role in the ability to reconstruct the full input given the representation.Usefulness of a representation is not just a matter of information content: representational characteristics like independence also play an important role (Gretton et al., 2012; Hyvärinen & Oja, 2000; Hinton, 2002; Schmidhuber, 1992; Bengio et al., 2013; Thomas et al., 2017) .",
"We combine MI maximization with prior matching in a manner similar to adversarial autoencoders (AAE, Makhzani et al., 2015) to constrain representations according to desired statistical properties.",
"This approach is closely related to the infomax optimization principle (Linsker, 1988; Bell & Sejnowski, 1995) , so we call our method Deep InfoMax (DIM).",
"Our main contributions are the following:• We formalize Deep InfoMax (DIM), which simultaneously estimates and maximizes the mutual information between input data and learned high-level representations.•",
"Our mutual information maximization procedure can prioritize global or local information, which we show can be used to tune the suitability of learned representations for classification or reconstruction-style tasks.•",
"We use adversarial learning (à la Makhzani et al., 2015) to constrain the representation to have desired statistical characteristics specific to a prior.•",
"We introduce two new measures of representation quality, one based on Mutual Information Neural Estimation (MINE, Belghazi et al., 2018 ) and a neural dependency measure (NDM) based on the work by Brakel & Bengio (2017) , and we use these to bolster our comparison of DIM to different unsupervised methods.",
"In this work, we introduced Deep InfoMax (DIM), a new method for learning unsupervised representations by maximizing mutual information, allowing for representations that contain locally-consistent information across structural \"locations\" (e.g., patches in an image).",
"This provides a straightforward and flexible way to learn representations that perform well on a variety of tasks.",
"We believe that this is an important direction in learning higher-level representations.",
"Here we show the relationship between the Jensen-Shannon divergence (JSD) between the joint and the product of marginals and the pointwise mutual information (PMI).",
"Let p(x",
") and p(y",
") be two marginal densities, and define p(y|x) and p(x, y",
") = p(y|x)p(x",
") as the conditional and joint distribution, respectively. Construct",
"a probability mixture density, m(x, y) = 1 2",
"(p(x)p(y) +",
"p(x,",
"y)). It follows",
"that",
"m(x) = p(x), m(y)",
"= p(y),",
"and m(y|x",
") = 1 2",
"(p(y) + p(y|x)). Note that",
": DISPLAYFORM0",
"Discarding some constants: DISPLAYFORM1 The quantity inside the expectation of Eqn. 10 is a concave, monotonically",
"increasing function of the ratio p(y|x) p(y) , which is exactly e PMI(x,",
"y) . Note this relationship does not",
"hold for the JSD of arbitrary distributions, as the the joint and product of marginals are intimately coupled.We can verify our theoretical observation by plotting the JSD and KL divergences between the joint and the product of marginals, the latter of which is the formal definition of mutual information (MI). As computing the continuous MI",
"is difficult, we assume a discrete input with uniform probability, p(x) (e.g., these could be one-hot",
"variables indicating one of N i.i.d. random samples), and a randomly initialized N × M joint distribution, p(x, y), such that M j=1 p(x i , y j )",
"= 1 ∀i. For this joint distribution, we sample",
"from a uniform distribution, then apply dropout to encourage sparsity to simulate the situation when there is no bijective function between x and y, then apply a softmax. As the distributions are discrete, we",
"can compute the KL and JSD between p(x, y) and p(x)p(y).We ran these experiments",
"with matched",
"input",
"/ output",
"dimensions of 8, 16, 32, 64, and 128, randomly drawing 1000 joint distributions, and computed the KL and JSD divergences directly. Our results ( Figure A.1) indicate that the KL (traditional",
"definition of mutual information) and the JSD have an approximately monotonic relationship. Overall, the distributions with the highest mutual information",
"also have the highest JSD."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.260869562625885,
0.1249999925494194,
0.2631579041481018,
0.15094339847564697,
0.08888888359069824,
0.17543859779834747,
0.1666666567325592,
0.04444443807005882,
0.1818181723356247,
0.1428571343421936,
0.1599999964237213,
0.08163265138864517,
0.20408162474632263,
0.1538461446762085,
0.1702127605676651,
0.17142856121063232,
0.17241378128528595,
0.1463414579629898,
0.1111111044883728,
0.1428571343421936,
0.07407407462596893,
0.05882352590560913,
0,
0.12121211737394333,
0,
0,
0,
0,
0,
0.04999999701976776,
0.05405404791235924,
0,
0.1904761791229248,
0.0476190410554409,
0.039215680211782455,
0,
0.15094339847564697,
0.21052631735801697,
0.07692307233810425,
0.19512194395065308,
0.06896551698446274
] | Bklr3j0cKX | true | [
"We learn deep representation by maximizing mutual information, leveraging structure in the objective, and are able to compute with fully supervised classifiers with comparable architectures"
] |
[
"Estimating the importance of each atom in a molecule is one of the most appealing and challenging problems in chemistry, physics, and material engineering.",
"The most common way to estimate the atomic importance is to compute the electronic structure using density-functional theory (DFT), and then to interpret it using domain knowledge of human experts.",
"However, this conventional approach is impractical to the large molecular database because DFT calculation requires huge computation, specifically, O(n^4) time complexity w.r.t. the number of electrons in a molecule.",
"Furthermore, the calculation results should be interpreted by the human experts to estimate the atomic importance in terms of the target molecular property.",
"To tackle this problem, we first exploit machine learning-based approach for the atomic importance estimation.",
"To this end, we propose reverse self-attention on graph neural networks and integrate it with graph-based molecular description.",
"Our method provides an efficiently-automated and target-directed way to estimate the atomic importance without any domain knowledge on chemistry and physics.",
"In molecules, each atom has the importance in manifesting the entire molecular properties, and estimating such atomic importance plays a key role in interpreting molecular systems.",
"For these reasons, the atomic importance estimation has been consistently studied in the scientific communities (Yen & Winefordner, 1976; Tang et al., 2016; Pan et al., 2018) .",
"However, estimating the atomic importance is one of the most challenging tasks in chemistry and quantum mechanics because the importance of each atom is comprehensively determined based on atomic properties, neighbor atoms, bonding types, and target molecular property.",
"The most common approach for estimating the atomic importance is to interpret the electronic structure using density-function theory (DFT) (Sholl & Steckel, 2009) .",
"In this approach, the atomic importance is estimated through three steps:",
"1) A human expert selects appropriate functional and basis sets for a given molecule to apply DFT;",
"2) The electronic structure of the molecule is calculated based on DFT calculation;",
"3) The human expert estimates the atomic importance by interpreting the calculated electronic structure in terms of target molecular property.",
"Although some methods are developed to estimate relative contributions of atoms in molecules, their generality is typically limited to the molecular properties (Marenich et al., 2012; Glendening et al., 2019) .",
"For this reason, DFT that can generate a general description of the molecule has been most widely used to interpret the molecular systems and to reveal important atoms for target molecular property (Crimme et al., 2010; Lee et al., 2018b; Chibani et al., 2018) .",
"However, the conventional approach based on DFT has three fundamental limitations in efficiency, automation, and generality.",
"• Efficiency: As an example of the electronic structure computations, DFT calculation requires O(n 4 ) time complexity to compute the electronic structure, where n is the number of basis functions that describe electrons in the molecule.",
"Generally, molecules have more electrons than atoms.",
"• Automation: DFT cannot automatically generate all target-specified physical properties in principle, so human expert should manually select additional calculation method to com-pute target molecular property from the electronic distributions.",
"That is, domain knowledge of the human experts is necessarily required to estimate the atomic importance in terms of the target molecular property.",
"• Generality: For some molecular properties, the relationship between them and the electronic distributions is not clear.",
"Moreover, sometimes the estimation is impossible because the relationships between molecular property and molecular structure are not interpretable.",
"For these limitations, estimating the atomic importance is remaining as one of the most challenging problems in both science and engineering such as physics, chemistry, pharmacy, and material engineering.",
"To overcome the limitations of the conventional approach in estimating the atomic importance, we first exploit machine learning-based approach.",
"To this end, we propose a new concept of reverse self-attention and integrate it with the graph neural networks.",
"The self-attention mechanism was originally designed to determine important elements within the input data to accurately predict its corresponding target or label in natural language processing (Vaswani et al., 2017) .",
"Similarly, in graph neural networks, self-attention is used to determine important neighbor nodes within the input graph to generate more accurate node or graph embeddings (Velickovic et al., 2018) .",
"Our reverse self-attention is defined as the inverse of the self-attention to calculate how important a selected node is considered in the graph.",
"For a given molecule and target property, the proposed estimation method selects the atom that has the largest reverse self-attention score as the most important atom.",
"The proposed method estimates the target-directed atomic importance through two steps:",
"1) For the given molecular graphs and their corresponding target properties, self-attention-based graph neural network is trained;",
"2) After the training, the reverse self-attention scores are calculated, and then the atomic importance is estimated according to the reverse self-attention scores.",
"As shown in this estimation process, neither huge computation nor human experts in chemistry and physics is required.",
"Thus, the proposed method provides an efficient and fully-automated way to estimate the atomic importance in terms of the target molecular property via target-aware training of the graph self-attention.",
"To validate the effectiveness of the proposed method, we conducted comprehensive experiments and evaluated the estimation performance using both quantitative and qualitative analyses.",
"The contributions of this paper are summarized as:",
"• This paper first proposes a machine learning-based approach to estimate the atomic importance in the molecule.",
"• The proposed method drastically reduced the computational cost for the atomic importance estimation from O(n 4 ) time complexity to the practical time complexity of the graph-based deep learning.",
"• The proposed method provides a fully-automated and target-directed way to estimate the atomic importance.",
"• We comprehensively validated the effectiveness of the proposed method using both quantitative and qualitative evaluations.",
"However, since none of a labeled dataset for the atomic importance estimation and a systematic way to quantitatively evaluate the estimation accuracy, we devised a systematic quantitative evaluation method and validated the effectiveness of the proposed method using it.",
"This paper first exploited machine learning approach to estimate the atomic importance in molecules.",
"To this end, the reverse self-attention was proposed and integrated with graph attention network.",
"The proposed method is efficient and fully-automated.",
"Furthermore, it can estimate the atomic importance in terms of the given target molecular property without human experts.",
"However, the proposed method can estimate the importance of the group of atoms that consists of k-hop neighbor atoms only, even though some important group of atoms may have an arbitrary shape such as ring and bar.",
"As the future work, it is necessary to modify the proposed method to estimate the importance of the group of atoms with arbitrary shape.",
"Fig.",
"7 shows the original molecules and their selected sub-molecules with the extremely small error.",
"As shown in the results, even though the molecules have carbon rings, the important group of atoms was correctly captured because the molecules are characterized by nitrogen or oxygen.",
"On the other hand, Fig. 8 shows the original molecules and their sub-molecules with the extremely large error.",
"Chemically, double bond plays an important role in determining HOMO-LUMO gap.",
"However, as shown in the results of Fig. 8 , sub-structures that do not contain the double bonds are selected as an important group of atoms.",
"Thus, we need to develop a descriptor for the molecules that can emphasis the bond-features more strongly."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.24390242993831635,
0.21276594698429108,
0.11764705181121826,
0.19512194395065308,
0.2222222238779068,
0.41025641560554504,
0.2926829159259796,
0.23255813121795654,
0.1304347813129425,
0.26923075318336487,
0.1395348757505417,
0.1875,
0.10526315122842789,
0.23529411852359772,
0.19999998807907104,
0.08163265138864517,
0.1355932205915451,
0.21621620655059814,
0.07547169178724289,
0,
0.039215680211782455,
0.19512194395065308,
0.10810810327529907,
0.10810810327529907,
0.21739129722118378,
0.21621620655059814,
0.6000000238418579,
0.07843136787414551,
0.1666666567325592,
0.29999998211860657,
0.23255813121795654,
0.25,
0.21052631735801697,
0.31578946113586426,
0.052631575614213943,
0.3478260934352875,
0.1463414579629898,
0.06896550953388214,
0.2702702581882477,
0.17391303181648254,
0.3888888955116272,
0.2222222238779068,
0.23999999463558197,
0.22857142984867096,
0.2857142686843872,
0.1428571343421936,
0.21052631735801697,
0.1599999964237213,
0.14999999105930328,
0.11764705181121826,
0.08695651590824127,
0.10810810327529907,
0,
0.09090908616781235,
0.10810810327529907
] | rJxilTNtDB | true | [
"We first propose a fully-automated and target-directed atomic importance estimator based on the graph neural networks and a new concept of reverse self-attention."
] |
[
"Spectral clustering is a leading and popular technique in unsupervised data analysis. ",
"Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension).",
"In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings.",
"Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them.",
"We train SpectralNet using a procedure that involves constrained stochastic optimization.",
"Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special purpose output layer, allow us to keep the network output orthogonal.",
"Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.",
"To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities leaned from unlabeled data using a Siamese network. ",
"Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders.",
"Our end-to-end learning procedure is fully unsupervised.",
"In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. ",
"State-of-the-art clustering results are reported for both the MNIST and Reuters datasets.\n",
"Discovering clusters in unlabeled data is a task of significant scientific and practical value.",
"With technological progress images, texts, and other types of data are acquired in large numbers.",
"Their labeling, however, is often expensive, tedious, or requires expert knowledge.",
"Clustering techniques provide useful tools to analyze such data and to reveal its underlying structure.Spectral Clustering BID20 BID16 BID22 ) is a leading and highly popular clustering algorithm.",
"It works by embedding the data in the eigenspace of the Laplacian matrix, derived from the pairwise similarities between data points, and applying k-means to this representation to obtain the clusters.",
"Several properties make spectral clustering appealing: First, its embedding optimizes a natural cost function, minimizing pairwise distances between similar data points; moreover, this optimal embedding can be found analytically.",
"Second, spectral clustering variants arise as relaxations of graph balanced-cut problems BID22 .",
"Third, spectral clustering was shown to outperform other popular clustering algorithms such as k-means DCN, VaDE, DEPICT and IMSAT (bottom) on simulated datasets in 2D and 3D.",
"Our approach successfully finds these non-convex clusters, whereas the competing algorithms fail on all five examples.",
"(The full set of results for these algorithms is shown in FIG4 in Appendix A.)",
"BID22 , arguably due to its ability to handle non-convex clusters.",
"Finally, it has a solid probabilistic interpretation, since the Euclidean distance in the embedding space is equal to a diffusion distance, which, informally, measures the time it takes probability mass to transfer between points, via all the other points in the dataset BID15 BID5 .While",
"spectral embedding of data points can be achieved by a simple eigen-decomposition of their graph Laplacian matrix, with large datasets direct computation of eigenvectors may be prohibitive. Moreover",
", generalizing a spectral embedding to unseen data points, a task commonly referred to as out-of-sample-extension (OOSE), is a non-trivial task; see, for example, BID1 BID2 BID9 BID6 ).In this",
"work we introduce SpectralNet, a deep learning approach to spectral clustering, which addresses the scalability and OOSE problems pointed above. Specifically",
", SpectralNet is trained in a stochastic fashion, which allows it to scale. Moreover, once",
"trained, it provides a function, implemented as a feed-forward network, that maps each input data point to its spectral embedding coordinates. This map can easily",
"be applied to new test data. Unlike optimization",
"of standard deep learning models, SpectralNet is trained using constrained optimization, where the constraint (orthogonality of the net outputs) is enforced by adding a linear layer, whose weights are set by the QR decomposition of its inputs. In addition, as good",
"affinity functions are crucial for the success of spectral clustering, rather than using the common Euclidean distance to compute Gaussian affinity, we show how Siamese networks can be trained from the given unlabeled data to learn more informative pairwise distances and consequently significantly improve the quality of the clustering. Further improvement",
"can be achieved if our network is applied to transformed data obtained by an autoencoder (AE). On the theoretical",
"front, we utilize VC-dimension theory to derive a lower bound on the size of neural networks that compute spectral clustering. Our experiments indicate",
"that our network indeed approximates the Laplacian eigenvectors well, allowing the network to cluster challenging non-convex point sets, which recent deep network based methods fail to handle; see examples in Figure 1. Finally, SpetralNet achieves",
"competitive performance on MNIST handwritten digit dataset and state-of-the-art on the Reuters document dataset, whose size makes standard spectral clustering inapplicable.",
"We have introduced SpectralNet, a deep learning approach for approximate spectral clustering.",
"The stochastic training of SpectralNet allows us to scale to larger datasets than what vanilla spectral clustering can handle, and the parametric map obtained from the net enables straightforward out of sample extension.",
"In addition, we propose to use unsupervised Siamese networks to compute distances, and empirically show that this results in better performance, comparing to standard Euclidean distances.",
"Further improvement are achieved by applying our network to code representations produced with a standard stacked autoencoder.",
"We present a novel analysis of the VC dimension of spectral clustering, and derive a lower bound on the size of neural nets that compute it.",
"In addition, we report state of the art results on two benchmark datasets, and show that SpectralNet outperforms existing methods when the clusters cannot be contained in non overlapping convex shapes.",
"We believe the integration of spectral clustering with deep learning provides a useful tool for unsupervised deep learning."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.09999999403953552,
0.08695651590824127,
0.25,
0,
0.1111111044883728,
0.0624999962747097,
0.0952380895614624,
0.06666666269302368,
0,
0,
0,
0.09999999403953552,
0,
0,
0,
0.060606058686971664,
0,
0.11428570747375488,
0.21052631735801697,
0.125,
0,
0,
0,
0,
0.0624999962747097,
0.05714285373687744,
0.1428571343421936,
0,
0.06451612710952759,
0,
0.0952380895614624,
0.1538461446762085,
0,
0.2666666507720947,
0.052631575614213943,
0.14814814925193787,
0.31578946113586426,
0.10810810327529907,
0.06451612710952759,
0,
0.13793103396892548,
0,
0.260869562625885
] | HJ_aoCyRZ | true | [
"Unsupervised spectral clustering using deep neural networks"
] |
[
"Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al, 2017) or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks.\n",
"The mechanism behind their success, however, is poorly understood.\n",
"We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.\n",
"Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks.\n",
"Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model.\n",
"We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.\n",
"Meta-learning or learning to learn is an appealing notion due to its potential in addressing important challenges when applying machine learning to real-world problems.",
"In particular, learning from prior tasks but being able to to adapt quickly to new tasks improves learning efficiency, model robustness, etc.",
"A promising set of techiques, Model-Agnostic Meta-Learning (Finn et al., 2017) or MAML, and its variants, have received a lot of interest (Nichol et al., 2018; Lee & Choi, 2018; Grant et al., 2018) .",
"However, despite several efforts, understanding of how MAML works, either theoretically or in practice, has been lacking Fallah et al., 2019 ).",
"For a model that meta-learns, its parameters need to encode not only the common knowledge extracted from the tasks it has seen, which form a task-general inductive bias, but also the capability to adapt to new test tasks (similar to those it has seen) with task-specific knowledge.",
"This begs the question: how are these two sets of capabilities represented in a single model and how do they work together?",
"In the case of deep learning models, one natural hypothesis is that while knowledge is represented distributedly in parameters, they can be localized -for instance, lower layers encode task-general inductive bias and the higher layers encode adaptable task-specific inductive bias.",
"This hypothesis is consistent with one of deep learning's advantages in learning representations (or feature extractors) using its bottom layers.",
"Then we must ask, in order for a deep learning model to meta-learn, does it need more depth than it needs for solving the target tasks?",
"In other words, is having a large capacity to encode knowledge that is unnecessary post-adaptation the price one has to pay in order to be adaptable?",
"Is there a way to have a smaller (say, less deep) meta-learnable model which still adapts well?",
"This question is of both scientific interest and practical importance -a smaller model has a smaller (memory) footprint, faster inference and consumes less resources.",
"In this work, through empirical studies on both synthetic datasets and benchmarks used in the literature, we investigate these questions by analyzing how well different learning models can meta-learn and adapt.",
"We choose to focus on MAML due to its popularity.",
"Our observations suggest depth is indeed necessary for meta-learning, despite the tasks being solvable using a shallower model.",
"Thus, applying MAML to shallower models does not result in successful meta-learning models that can adapt well.",
"Moreover, our studies also show that higher layers are responsible more for adapting to new tasks while the lower layers are responsible for learning task-general features.",
"Our findings prompt us to propose a new method for meta-learning.",
"The new approach introduces a meta-optimizer which learns to guide the (parameter) optimization process of a small model.",
"The small model is used for solving the tasks while the optimizer bears the burden of extracting the knowledge of how to adapt.",
"Empirical results show that despite using smaller models, the proposed algorithm with small models attains similar performance to larger models which use MAML to meta-learn and adapt.",
"We note that a recent and concurrent work to ours addresses questions in this line of inquiry (Raghu et al., 2019) .",
"They reach similar conclusions through different analysis and likewise, they propose a different approach for improving MAML.",
"We believe our work is complementary to theirs.",
"We introduce our approach by analyzing the success and failure modes of optimization-based metalearning methods.",
"Namely, we find that, when successful, these methods tend to learn task-general features in early layers and adaptable parameters/update functions in the later layers.",
"Moreover, we find that this learning fails when model size is reduced, indicating that optimization-based metalearning methods rely on the ability to encode task-general features and/or adaptable parameters, even when the model itself is adequate for learning on the individual tasks.",
"As such, we introduce our method for decomposing modelling from adaptation using factored meta-optimizers.",
"These meta-optimizers enable the forward model to use more capacity on learning task-specific features, while the expressiveness of their updates allows the forward model to adapt quickly to different tasks.",
"We find that our approach is able to enable successful meta-learning in models that do not work with traditional optimization-based meta-learning methods."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07999999821186066,
0,
0.3214285671710968,
0.17241378128528595,
0.3214285671710968,
0.5238094925880432,
0.0952380895614624,
0.051282044500112534,
0.07999999821186066,
0.09090908616781235,
0.13793103396892548,
0.2380952388048172,
0.145454540848732,
0.09756097197532654,
0.2222222238779068,
0.1818181723356247,
0.21621620655059814,
0.1395348757505417,
0.11764705181121826,
0.20000000298023224,
0.10256409645080566,
0.3243243098258972,
0.1860465109348297,
0.375,
0.15789473056793213,
0.09999999403953552,
0.30434781312942505,
0.3255814015865326,
0.2702702581882477,
0.20689654350280762,
0.1111111044883728,
0.1860465109348297,
0.14814814925193787,
0.11428570747375488,
0.04444443807005882,
0.39024388790130615
] | BkljIlHtvS | true | [
"We find that deep models are crucial for MAML to work and propose a method which enables effective meta-learning in smaller models."
] |
[
"Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification.",
"The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress.",
"Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options.",
"We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling.",
"This method decomposes features into a second level decomposition, and discards the first-level subbands to reduce feature dimensions.",
"This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions.",
"Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparatively with methods like max, mean, mixed, and stochastic pooling.",
"Convolutional Neural Networks (CNNs) have become the standard-bearer in image and object classification BID18 .",
"Due to the layer structures conforming to the shape of the inputs, CNNs consistently classify images, objects, videos, etc. at a higher accuracy rate than vector-based deep learning techniques BID18 .",
"The strength of this algorithm motivates researchers to constantly evaluate and upgrade foundational concepts to continue growth and progress.",
"The key components of CNN, the convolutional layer and pooling layer, consistently undergo modifications and innovations to elevate accuracy and efficiency of CNNs beyond previous benchmarks.Pooling has roots in predecessors to CNN such as Neocognitron, which manual subsampling by the user occurs BID5 , and Cresceptron, which introduces the first max pooling operation in deep learning BID28 .",
"Pooling subsamples the results of the convolutional layers, gradually reducing spatial dimensions of the data throughout the network.",
"The benefits of this operation are to reduce parameters, increase computational efficiency, and regulate overfitting BID1 .Methods",
"of pooling vary, with the most popular form being max pooling, and secondarily, average pooling BID18 BID13 . These forms",
"of pooling are deterministic, efficient, and simple, but have weaknesses hindering the potential for optimal network learning BID13 BID30 . Other pooling",
"operations, notably mixed pooling and stochastic pooling, use probabilistic approaches to correct some of the issues of the prior methods BID30 BID31 .However, one commonality",
"all these pooling operations employ a neighborhood approach to subsampling, reminiscent of nearest neighbor interpolation in image processing. Neighborhood interpolation",
"techniques perform fast, with simplicity and efficiency, but introduce artifacts such as edge halos, blurring, and aliasing BID20 . Minimizing discontinuities",
"in the data are critical to aiding in network regularization, and increasing classification accuracy.We propose a wavelet pooling algorithm that uses a second-level wavelet decomposition to subsample features. Our approach forgoes the nearest",
"neighbor interpolation method in favor of an organic, subband method that more accurately represents the feature contents with less artifacts. We compare our proposed pooling",
"method to max, mean, mixed, and stochastic pooling to verify its validity, and ability to produce near equal or superior results. We test these methods on benchmark",
"image classification datasets such as Mixed National Institute of Standards and Technology (MNIST) BID12 , Canadian Institute for Advanced Research (CIFAR-10) BID11 , Street House View Numbers (SHVN) BID17 , and Karolinska Directed Emotional Faces (KDEF) (Lundqvist et al., 1998) . We perform all simulations in MATLAB",
"R2016b.The rest of this paper organizes as follows: Section 2 gives the background, Section 3 describes the proposed methods, Section 4 discusses the experimental results, and Section 5 gives the summary and conclusion.",
"All CNN experiments use MatConvNet BID26 .",
"All training uses stochastic gradient descent BID0 .",
"For our proposed method, the wavelet basis is the Haar wavelet, mainly for its even, square subbands.",
"All experiments are run on a 64-bit operating system, with an Intel Core i7-6800k CPU @ 3.40 GHz processor, with 64.0 GB of RAM.",
"We utilize two GeForce Titan X Pascal GPUs with 12 GB of video memory for all training.",
"All CNN structures except for MNIST use a network loosely based on Zeilers network BID31 .",
"We repeat the experiments with Dropout BID22 and replace Local Response Normalization BID11 with Batch Normalization BID7 for CIFAR-10 and SHVN (Dropout only) to examine how these regularization techniques change the pooling results.",
"To test the effectiveness of each pooling method on each dataset, we solely pool with that method for all pooling layers in that network.",
"All pooling methods use a 2x2 window for an even comparison to the proposed method.",
"Figure 6 gives a selection of each of the datasets.",
"We prove wavelet pooling has potential to equal or eclipse some of the traditional methods currently utilized in CNNs.",
"Our proposed method outperforms all others in the MNIST dataset, outperforms all but one in the CIFAR-10 and KDEF datasets, and performs within respectable ranges of the pooling methods that outdo it in the SHVN dataset.",
"The addition of dropout and batch normalization show our proposed methods response to network regularization.",
"Like the non-dropout cases, it outperforms all but one in both the CIFAR-10 & KDEF datasets, and performs within respectable ranges of the pooling methods that outdo it in the SHVN dataset.",
"Our results confirm previous studies proving that no one pooling method is superior, but some perform better than others depending on the dataset and network structure BID1 ; BID13 .",
"Furthermore, many networks alternate between different pooling methods to maximize the effectiveness of each method.Future work and improvements in this area could be to vary the wavelet basis to explore which basis performs best for the pooling.",
"Altering the upsampling and downsampling factors in the decomposition and reconstruction can lead to better image feature reductions outside of the 2x2 scale.Retention of the subbands we discard for the backpropagation could lead to higher accuracies and fewer errors.",
"Improving the method of FTW we use could greatly increase computational efficiency.",
"Finally, analyzing the structural similarity (SSIM) of wavelet pooling versus other methods could further prove the vitality of using our approach."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.07407406717538834,
0.06896550953388214,
0,
0.25,
0,
0.05405404791235924,
0,
0,
0.04999999701976776,
0.06666666269302368,
0.06557376682758331,
0.14814814925193787,
0.06666666269302368,
0.06451612710952759,
0.060606054961681366,
0.1111111044883728,
0.1249999925494194,
0,
0,
0.05405404791235924,
0,
0.0363636314868927,
0.05128204822540283,
0,
0,
0.06896550953388214,
0.052631575614213943,
0.06666666269302368,
0,
0,
0.060606054961681366,
0,
0.09090908616781235,
0.1249999925494194,
0.04878048226237297,
0.0714285671710968,
0.04999999701976776,
0.0476190447807312,
0.04444444179534912,
0.045454543083906174,
0.07999999821186066,
0.1249999925494194
] | rkhlb8lCZ | true | [
"Pooling is achieved using wavelets instead of traditional neighborhood approaches (max, average, etc)."
] |
[
"Dynamic ridesharing services (DRS) play a major role in improving the efficiency of urban transportation.",
"User satisfaction in dynamic ridesharing is determined by multiple factors such as travel time, cost, and social compatibility with co-passengers.",
"Existing DRS optimize profit by maximizing the operational value for service providers or minimize travel time for users but they neglect the social experience of riders, which significantly influences the total value of the service to users.",
"We propose DROPS, a dynamic ridesharing framework that factors the riders' social preferences in the matching process so as to improve the quality of the trips formed.",
"Scheduling trips for users is a multi-objective optimization that aims to maximize the operational value for the service provider, while simultaneously maximizing the value of the trip for the users.",
"The user value is estimated based on compatibility between co-passengers and the ride time.",
"We then present a real-time matching algorithm for trip formation.",
"Finally, we evaluate our approach empirically using real-world taxi trips data, and a population model including social preferences based on user surveys.",
"The results demonstrate improvement in riders' social compatibility, without significantly affecting the vehicle miles for the service provider and travel time for users.",
"Dynamic ridesharing services, such as UberPool and LyftLine, are becoming an increasingly popular means of commute, especially in large cities BID6 BID2 .",
"Dynamic ridesharing is characterized by matching multiple requests that arrive in real-time, for a one-way and one-time trip.",
"We consider a setting in which a service provider operates a vehicle fleet and schedules cars to pick up and drop off passengers in response to a stream of requests, which includes matching requests with each other.",
"There are two important factors that explain the growing attractiveness of DRS for customers:",
"(i) cost effectiveness and",
"(ii) ease of finding a ride in large cities where it is comparatively hard to find a taxi otherwise.",
"For a service provider, dynamic ridesharing helps serve customers with possibly fewer vehicles, thus reducing their operational cost.A common objective for optimizing riders' satisfaction in existing ridesharing systems is to minimize travel time BID14 BID0 BID2 .",
"In practice, however, there are many other factors that affect user satisfaction in dynamic ridesharing, apart from travel time.",
"Since a user could be traveling with strangers in the ride, their compatibility plays a major role in the user's satisfaction.",
"In fact, there is growing evidence that desire for personal space and security when riding with strangers pose a major barrier to using ridesharing for many users (Tao and Wu 2008; BID0 . For example, a female passenger may prefer to ride only with female co-passengers. The user may have a different set of preferences depending on the time of day and the location -preferences are tripspecific and not necessarily user-specific. Consider a scenario with three requests where r 1 and r 2 are male and r 3 is a female passenger. Let these requests arrive at the same time FIG0 ), such that optimizing the operational value for the service provider forms a trip with these requests (1(a)).",
"However, this may violate the users' social preferences and the trip may need to be altered to satisfy the preferences, such as the following:• If the passengers prefer riding with co-passengers of the same gender but are indifferent to riding with copassengers of a different gender, then it is desirable to minimize their ride time overlap in the vehicle by altering the pick up and drop off order FIG0 ); and If the service does not provide a mechanism to express such social preferences and forms trips that violate these preferences (as in 1(a)), the customers may not use the service.",
"Current DRS, however, do not account for social preferences in their optimization, despite being indicated as a major concern for users in several surveys BID0 BID15 BID10 BID21 .",
"We present DROPS (Dynamic Ridesharing Optimization using social PreferenceS), a dynamic ridesharing framework that facilitates incorporating social preferences of the users in the trip formation process.",
"A weight vector over preferences indicates the importance of each factor in determining the trip value to the user.",
"The goal is to form trips that optimize both operational value for the service provider and value of the trip to the passengers, which incentivizes them to continue using the service and benefits the service provider.",
"The value of a trip to a user is calculated based on their social compatibility with other co-passengers, the ride time, and ride cost.",
"We solve this bi-objective optimization problem using scalarization BID18 , which solves a linear combination of the multiple objectives.",
"The relative importance of each objective can be controlled using the weight vector for the objectives.",
"Given a set of riders, we evaluate their potential shared trip using an optimal trajectory planning algorithm.",
"Candidate trips are formed using our real-time greedy algorithm that adds customers to a trip only if the trip's value is above a certain threshold.We consider three basic social factors -age, gender, and user rating-along with a time preference indicating if the user is in a rush.",
"The viability of factoring social preferences into the trips scheduling process is evaluated empirically.",
"The experiments examine the impact of matching with social preferences (social matching) on users and the service provider.",
"We test our approach on a real-world taxi trips dataset and compare the results with that of three baselines, each focusing on optimizing different components of the objective for trip formation.",
"The population model and preferences used in our experiments were acquired using webbased user surveys, which was conducted in two phases and had 489 responses.",
"The survey was conducted specifically to determine how different potential riders evaluate social ridesharing.",
"Our results show that incorporating social preferences of users in the trip formation improves the overall user satisfaction, without significantly affecting the operational cost for the service provider.Our primary contributions are:",
"(i) presenting DROPS, a system for dynamic ridesharing with social preferences;",
"(ii) proposing a real-time greedy algorithm for trip formation; and",
"(iii) extensive empirical evaluation showing the benefits of social matching in dynamic ridesharing using real-world taxi data and a population model based on user surveys.",
"Dynamic ridesharing is an increasingly appealing commuter option.",
"However, numerous surveys have indicated that users' concerns, primarily about the social characteristics of co-passengers, pose a major barrier to using ridesharing for a segment of the population.",
"We present the DROPS system for optimizing dynamic ridesharing with social preferences and present an efficient real-time matching algorithm that can handle effectively high density zones.",
"Our results demonstrate that factoring social preferences into the matching process helps improve the user value, without significantly affecting the operational value to the service provider.",
"Furthermore, survey results indicate that services that perform social matching are likely to incentivize more individuals to use the service.",
"We conclude that while social matching is beneficial overall, it is not always guaranteed to result in improved performance.",
"Factoring social preferences into the matching process is most beneficial in zones with a high request density per decision cycle and greater compatibility among ridesharing users.In the future, we aim to examine ways to extend the matching model to consider nearby trips that have already been dispatched and are currently en-route.",
"We will also consider more complex ways to factor the competing objectives using more general multi-objective planning algorithms BID23 .",
"Additionally, based on the performance analysis of our approach with that of a hindsight trip formation, we aim to employ a predictive model for future requests to improve the user value.",
"While we anticipate some performance gains, we do not expect the relative benefits of social matching to diminish."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.19999998807907104,
0.2711864411830902,
0.4814814627170563,
0.3461538553237915,
0.1818181723356247,
0.14999999105930328,
0.23076923191547394,
0.23529411852359772,
0.07692307233810425,
0.25,
0.23333333432674408,
0.13636362552642822,
0.05882352590560913,
0.0833333283662796,
0.21212120354175568,
0.12244897335767746,
0.1249999925494194,
0.22807016968727112,
0.24242423474788666,
0.1428571343421936,
0.37037035822868347,
0.21276594698429108,
0.4363636374473572,
0.26923075318336487,
0.12244897335767746,
0.08888888359069824,
0.04255318641662598,
0.28169015049934387,
0.3181818127632141,
0.25531914830207825,
0.24137930572032928,
0.11320754140615463,
0.13636362552642822,
0.3103448152542114,
0.24390242993831635,
0.14999999105930328,
0.2545454502105713,
0.052631575614213943,
0.290909081697464,
0.3272727131843567,
0.49056604504585266,
0.2083333283662796,
0.1666666567325592,
0.28947368264198303,
0.1249999925494194,
0.24561403691768646,
0.12765957415103912
] | SJeGE-cqFE | true | [
"We propose a novel dynamic ridesharing framework to form trips that optimizes both operational value for the service provider and user value to the passengers by factoring the users' social preferences into the decision-making process."
] |
[
"Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.",
"To better understand such attacks, a characterization is needed of the properties of regions (the so-called `adversarial subspaces') in which adversarial examples lie.",
"We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID).",
"LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors.",
"We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks.",
"As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets.",
"Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.",
"Deep Neural Networks (DNNs) are highly expressive models that have achieved state-of-the-art performance on a wide range of complex problems, such as speech recognition and image classification BID18 .",
"However, recent studies have found that DNNs can be compromised by adversarial examples (Szegedy et al., 2013; BID8 BID27 .",
"These intentionally-perturbed inputs can induce the network to make incorrect predictions at test time with high confidence, even when the examples are generated using different networks BID24 BID3 BID29 .",
"The amount of perturbation required is often small, and (in the case of images) imperceptible to human observers.",
"This undesirable property of deep networks has become a major security concern in real-world applications of DNNs, such as self-driving cars and identity recognition BID5 BID34 .",
"In this paper, we aim to further understand adversarial attacks by characterizing the regions within which adversarial examples reside.Each adversarial example can be regarded as being surrounded by a connected region of the domain (the 'adversarial region' or 'adversarial subspace') within which all points subvert the classifier in a similar way.",
"Adversarial regions can be defined not only in the input space, but also with respect to the activation space of different DNN layers (Szegedy et al., 2013) .",
"Developing an understanding of the properties of adversarial regions is a key requirement for adversarial defense.",
"Under the assumption that data can be modeled in terms of collections of manifolds, several works have attempted to characterize the properties of adversarial subspaces, but no definitive method yet exists which can reliably discriminate adversarial regions from those in which normal data can be found.",
"Szegedy et al. (2013) argued that adversarial subspaces are low probability regions (not naturally occurring) that are densely scattered in the high dimensional representation space of DNNs.",
"However, a linear formulation argues that adversarial subspaces span a contiguous multidimensional space, rather than being scattered randomly in small pockets BID8 Warde-Farley et al., 2016) .",
"Tanay & Griffin (2016) further emphasize that adversarial subspaces lie close to (but not on) the data submanifold.",
"Similarly, it has also been found that the boundaries of adversarial subspaces are close to legitimate data points in adversarial directions, and that the higher the number of orthogonal adversarial directions of these subspaces, the more transferable they are to other models (Tramèr et al., 2017) .",
"To summarize, with respect to the manifold model of data, the known properties of adversarial subspaces are: (1) they are of low probability, (2) they span a contiguous multidimensional space, (3) they lie off (but are close to) the data submanifold, and (4) they have class distributions that differ from that of their closest data submanifold.Among adversarial defense/detection techniques, Kernel Density (KD) estimation has been proposed as a measure to identify adversarial subspaces BID7 .",
"BID2 demonstrated the usefulness of KD-based detection, taking advantage of the low probability density generally associated with adversarial subspaces.",
"However, in this paper we will show that kernel density is not effective for the detection of some forms of attack.",
"In addition to kernel density, there are other density-based measures, such as the number of nearest neighbors within a fixed distance, and the mean distance to the k nearest neighbors (k-mean distance).",
"Again, these measures have limitations for the characterization of local adversarial regions.",
"For example, in FIG0 the three density measures fail to differentiate an adversarial example (red star) from a normal example (black cross), as the two examples are locally surrounded by the same number of neighbors FORMULA8 , and have the same k-mean distance (KM=0.19) and kernel density (KD=0.92).As",
"an alternative to density measures, FIG0 leads us to consider expansion-based measures of intrinsic dimensionality as a potentially effective method of characterizing adversarial examples. Expansion",
"models of dimensionality assess the local dimensional structure of the data -such models have been successfully employed in a wide range of applications, such as manifold learning, dimension reduction, similarity search and anomaly detection BID0 BID13 . Although",
"earlier expansion models characterize intrinsic dimensionality as a property of data sets, the Local Intrinsic Dimensionality (LID) fully generalizes this concept to the local distance distribution from a reference point to its neighbors BID13 BID7 -the dimensionality of the local data submanifold in the vicinity of the reference point is revealed by the growth characteristics of the cumulative distribution function. In this",
"paper, we use LID to characterize the intrinsic dimensionality of adversarial regions, and attempt to test how well the estimates of LID can be used to distinguish adversarial examples. Note that",
"the main goal of LID is to characterize properties of adversarial examples, instead of being applied as a pure defense method, which requires stronger assumptions on the current threat model. In FIG0 ,",
"the estimated LID of the adversarial example (LID ≈ 4.36) is much higher than that of the referenced normal data sample (LID ≈ 1.53), illustrating that the estimated LID can efficiently capture the intrinsic dimensional properties of adversarial regions. In this paper",
", we aim to study the LID properties of adversarial examples generated using state-of-the-art attack methods. In particular",
", our contributions are:• We propose LID for the characterization of adversarial regions of deep networks. We discuss how",
"adversarial perturbation can affect the LID characteristics of an adversarial region, and empirically show that the characteristics of test examples can be estimated effectively using a minibatch of training data.• Our study reveals",
"that the estimated LID of adversarial examples considered in this paper 1 is significantly higher than that of normal data examples, and that this difference becomes more pronounced in deeper layers of DNNs.• We empirically demonstrate",
"that the LID characteristics of adversarial examples generated using five state-of-the-art attack methods can be easily discriminated from those of normal examples, and provide a baseline classifier with features based on LID estimates that generally outperforms several existing detection measures on five attacks across three benchmark datasets. Though the adversarial examples",
"considered here are not guaranteed to be the strongest with careful parameter tuning, these preliminary results firmly demonstrate the usefulness of LID measurement.• We show that the adversarial regions",
"generated by different attacks share similar dimensional properties, in that LID characteristics of a simple attack can potentially be used to detect other more complex attacks. We also show that a naive LID-based detector",
"is robust to the normal low confidence Optimization-based attack of BID2 .",
"In this paper, we have addressed the challenge of understanding the properties of adversarial regions, particularly with a view to detecting adversarial examples.",
"We characterized the dimensional properties of adversarial regions via the use of Local Intrinsic Dimensionality (LID), and showed how these could be used as features in an adversarial example detection process.",
"Our empirical results suggest that LID is a highly promising measure for the characterization of adversarial examples, one that can be used to deliver state-of-the-art discrimination performance.",
"From a theoretical perspective, we have provided an initial intuition as to how LID is an effective method for characterizing adversarial attack, one which complements the recent theoretical analysis showing how increases in LID effectively diminish the amount of perturbation required to move a normal example into an adversarial region (with respect to 1-NN classification) BID1 .",
"Further investigation in this direction may lead to new techniques for both adversarial attack and defense.In the learning process, the activation values at each layer of the LID-based detector can be regarded as a transformation of the input to a space in which the LID values have themselves been transformed.",
"A full understanding of LID characteristics should take into account the effect of DNN transformations on these characteristics.",
"This is a challenging question, since it requires a better understanding of the DNN learning processes themselves.",
"One possible avenue for future research may be to model the dimensional characteristics of the DNN itself, and to empirically verify how they influence the robustness of DNNs to adversarial attacks.Another open issue for future research is the empirical investigation of the effect of LID estimation quality on the performance of adversarial detection.",
"As evidenced by the improvement in perfor-mance observed when increasing the minibatch size from 100 to 1000 ( Figure 5 in Appendix A.3), it stands to reason that improvements in estimator quality or sampling strategies could both be beneficial in practice.",
"FIG3 illustrates LID characteristics of the most effective attack strategy known to date, Opt, on the MNIST and SVHN datasets.",
"On both datasets, the LID scores of adversarial examples are significantly higher than those of normal or noisy examples.",
"In the right-hand plot, the LID scores of normal examples and its noisy counterparts appear superimposed due to their similarities.",
"Figure 5 shows the discrimination power (detection AUC) of LID characteristics estimated using two different minibatch sizes: the default setting of 100, and a larger size of 1000.",
"The horizontal axis represents different choices of the neighborhood size k, from 10% to 90% percent to the batch size.",
"We note that the peak AUC is higher for the larger minibatch size."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.045454539358615875,
0.307692289352417,
0.6857143044471741,
0.1111111044883728,
0.2222222238779068,
0.13793103396892548,
0.1304347813129425,
0.04444443807005882,
0.10810810327529907,
0.08888888359069824,
0.11764705181121826,
0.0952380895614624,
0.1666666567325592,
0.13636362552642822,
0.25806450843811035,
0.22641508281230927,
0.2857142686843872,
0.1395348757505417,
0.17142856121063232,
0.18518517911434174,
0.1315789371728897,
0.23529411852359772,
0.1621621549129486,
0.09090908616781235,
0.20689654350280762,
0.16393442451953888,
0.14999999105930328,
0.1599999964237213,
0.26229506731033325,
0.2857142686843872,
0.21276594698429108,
0.20408162474632263,
0.2857142686843872,
0.23529411852359772,
0.17777776718139648,
0.25,
0.13333332538604736,
0.17777776718139648,
0.1666666567325592,
0.1428571343421936,
0.2702702581882477,
0.5333333015441895,
0.1395348757505417,
0.1269841194152832,
0.1355932205915451,
0.12121211737394333,
0.12121211737394333,
0.1428571343421936,
0.07407406717538834,
0.1111111044883728,
0.23529411852359772,
0.1666666567325592,
0.0952380895614624,
0.1764705777168274,
0.13793103396892548
] | B1gJ1L2aW | true | [
"We characterize the dimensional properties of adversarial subspaces in the neighborhood of adversarial examples via the use of Local Intrinsic Dimensionality (LID)."
] |
[
"In this paper, we design and analyze a new zeroth-order (ZO) stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD.",
"The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms.",
"Our study shows that ZO signSGD requires $\\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\\sqrt{d}/\\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.",
"In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that at least achieve $O(\\sqrt{d}/\\sqrt{T})$ convergence rate.",
"On the application side we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. ",
"Our empirical evaluations on image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks.",
"Zeroth-order (gradient-free) optimization has attracted an increasing amount of attention for solving machine learning (ML) problems in scenarios where explicit expressions for the gradients are difficult or infeasible to obtain.",
"One recent application of great interest is to generate prediction-evasive adversarial examples,",
"e.g., crafted images with imperceptible perturbations to deceive a well-trained image classifier into misclassification.",
"However, the black-box optimization nature limits the practical design of adversarial examples, where internal configurations and operating mechanism of public ML systems (e.g., Google Cloud Vision API) are not revealed to practitioners and the only mode of interaction with the system is via submitting inputs and receiving the corresponding predicted outputs BID31 BID27 BID36 BID17 BID3 .",
"It was observed in both white-box and black-box settings 1 that simply leveraging the sign information of gradient estimates of an attacking loss can achieve superior empirical performance in generating adversarial examples BID13 BID28 BID16 .",
"Spurred by that, this paper proposes a zeroth-order (ZO) sign-based descent algorithm (we call it 'ZO-signSGD') for solving black-box optimization problems,",
"e.g. design of black-box adversarial examples.",
"The convergence behavior and algorithmic stability of the proposed ZO-signSGD algorithm are carefully studied in both theory and practice.In the first-order setting, a sign-based stochastic gradient descent method, known as signSGD, was analyzed by BID2 BID1 .",
"It was shown in BID2 that signSGD not only reduces the per iteration cost of communicating gradients, but also could yield a faster empirical convergence speed than SGD BID19 .",
"That is because although the sign operation compresses the gradient using a single bit, it could mitigate the negative effect of extremely components of gradient noise.",
"Theoretically, signSGD achieves O(1/ √ T ) convergence rate under the condition of a sufficiently large mini-batch size, where T denotes the total number of iterations.",
"The work in BID1 established a connection between signSGD and Adam with restrictive convex analysis.",
"Prior to BID2 BID1 , although signSGD was not formally defined, the fast gradient sign method BID13 to generate white-box adversarial examples actually obeys the algorithmic protocol of signSGD.",
"The effectiveness of signSGD has been witnessed by robust adversarial training of deep neural networks (DNNs) BID28 .",
"Given the advantages of signSGD, one may wonder if it can be generalized for ZO optimization and what the corresponding convergence rate is.",
"In this paper, we answer these questions affirmatively.Contributions We summarize our key contributions as follows.•",
"We propose a new ZO algorithm, 'ZO-signSGD', and rigorously prove its convergence rate of O( √ d/ √ T ) under mild conditions. •",
"Our established convergence analysis applies to both mini-batch sampling schemes with and without replacement. In",
"particular, the ZO sign-based gradient descent algorithm can be treated as a special case in our proposed ZO-signSGD algorithm.• We",
"carefully study the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose three variants of ZO-signSGD for both centralized and distributed ZO optimization.• We",
"conduct extensive synthetic experiments to thoroughly benchmark the performance of ZO-signSGD and to investigate its parameter sensitivity. We also",
"demonstrate the superior performance of ZO-signSGD for generating adversarial examples from black-box DNNs.Related work Other types of ZO algorithms have been developed for convex and nonconvex optimization, where the full gradient is approximated via a random or deterministic gradient estimate BID18 BID29 BID11 BID9 BID10 BID34 BID15 BID12 BID22 BID25 . Examples",
"include ZO-SGD BID11 , ZO stochastic coordinate descent (ZO-SCD) BID22 , and ZO stochastic variance reduced gradient descent (ZO-SVRG) BID26 a; BID14 . Both ZO-SGD",
"and ZO-SCD can achieve O( DISPLAYFORM0 And ZO-SVRG can further improve the iteration complexity to O(d/T ) but suffers from an increase of function query complexity due to the additional variance reduced step, known as 'gradient blending' BID26 ), compared to ZO-SGD. The existing",
"work showed that ZO algorithms align with the iteration complexity of their first-order counterparts up to a slowdown effect in terms of a small-degree polynomial of the problem size d.",
"Motivated by the impressive convergence behavior of (first-order) signSGD and the empirical success in crafting adversarial examples from black-box ML models, in this paper we rigorously prove the O( √ d/ √ T ) convergence rate of ZO-signSGD and its variants under mild conditions.",
"Compared to signSGD, ZO-signSGD suffers a slowdown (proportional to the problem size d) in convergence rate, however, it enjoys the gradient-free advantages.",
"Compared to other ZO algorithms, we corroborate the superior performance of ZO-signSGD on both synthetic and real-word datasets, particularly for its application to black-box adversarial attacks.",
"In the future, we would like to generalize our analysis to nonsmooth and nonconvex constrained optimization problems.",
"BID2 FIG1 , we assume that the ZO gradient estimate of f (x) and its first-order gradient ∇f (x) = x suffer from a sparse noise vector v, where v1 ∈ N (0, 1002 ), and vi = 0 for i ≥ 2.",
"As a result, the used descent direction at iteration t is given bŷ ∇f (xt) + v or ∇f (xt) + v. FIG1 presents the convergence performance of 5 algorithms: SGD, signSGD, ZO-SGD, ZO-signSGD and its variant using the central difference based gradient estimator (10).",
"Here we tune a constant learning rate finding 0.001 best for SGD and ZO-SGD and 0.01 best for signSGD and its ZO variants.",
"As we can see, sign-based first-order and ZO algorithms converge much faster than the stochastic gradient-based descent algorithms.",
"This is not surprising since the presence of extremely noisy component v1 leads to an inaccurate gradient value, and thus degrades the convergence of SGD and ZO-SGD.",
"By contrast, the sign information is more robust to outliers and thus leads to better convergence performance of sign SGD and its variants.",
"We also note that the convergence trajectory of ZO-signSGD using the gradient estimator FORMULA1 coincides with that using the gradient estimator FORMULA6 given by the forward difference of two function values.",
"FIG1 : Comparison of different gradient-based and gradient sign-based first-order and ZO algorithms in the example of sparse noise perturbation.",
"The solid line represents the loss averaged over 10 independent trials with random initialization, and the shaded region indicates the standard deviation of results over random trials.",
"Left: Loss value against iterations for SGD, signSGD, ZO-SGD, ZO-signSGD and ZO-signSGD using the central difference based gradient estimator (10).",
"Right: Local regions to highlight the effect of the gradient estimators (3) and (10) on the convergence of ZO-signSGD."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.43478259444236755,
0.0833333283662796,
0.145454540848732,
0.1249999925494194,
0.4878048598766327,
0.17391303181648254,
0.1538461446762085,
0.17142856121063232,
0.10256409645080566,
0.16438356041908264,
0.1428571343421936,
0.1818181723356247,
0.20000000298023224,
0.13793103396892548,
0.07692307233810425,
0.04444443807005882,
0.04347825422883034,
0.21052631735801697,
0.08163265138864517,
0.1538461446762085,
0.08888888359069824,
0.04999999329447746,
0.260869562625885,
0.10526315122842789,
0.1395348757505417,
0.1666666567325592,
0.19512194395065308,
0.14084506034851074,
0.0952380895614624,
0.06451612710952759,
0.12244897335767746,
0.1666666567325592,
0.1395348757505417,
0.2916666567325592,
0.1538461446762085,
0.09677419066429138,
0.09677419066429138,
0.1860465109348297,
0.09999999403953552,
0.08510638028383255,
0.1860465109348297,
0.04347825422883034,
0.09756097197532654,
0.04444443807005882,
0.0476190410554409,
0.10256409645080566
] | BJe-DsC5Fm | true | [
"We design and analyze a new zeroth-order stochastic optimization algorithm, ZO-signSGD, and demonstrate its connection and application to black-box adversarial attacks in robust deep learning"
] |
[
"The non-stationarity characteristic of the solar power renders traditional point forecasting methods to be less useful due to large prediction errors.",
"This results in increased uncertainties in the grid operation, thereby negatively affecting the reliability and resulting in increased cost of operation.",
"This research paper proposes a unified architecture for multi-time-horizon solar forecasting for short and long-term predictions using Recurrent Neural Networks (RNN).",
"The paper describes an end-to-end pipeline to implement the architecture along with methods to test and validate the performance of the prediction model.",
"The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error compared to the previous best performing methods which use one model for each time-horizon.",
"The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid.",
"Today's power grid has become dynamic in nature mainly because of three changes in the modern grid:",
"1. Higher penetration level of renewables,",
"2. Introduction (and rapidly increasing deployment) of storage devices, and",
"3. Loads becoming active (by participating in demand response).",
"This dynamic modern grid faces the challenge of strong fluctuations due to uncertainty.",
"There is a critical need of gaining real time observability, control, and improving renewable generation forecast accuracy to enhance the resiliency and keep the operational costs sustainable.",
"Independent system operators (ISOs) with higher renewable penetration on the grid have already been facing challenges with the uncertainties associated with short-term forecasting errors.",
"In year 2016, California ISO doubled its frequency regulation service requirements (causing a sharp rise in the cost of requirements) to manage the recurring short-term forecasting errors in renewable generation BID0 .",
"The Western Electricity Coordinating Council (WECC) could achieve $5 billion savings per year by integrating wind and solar forecasts into unit commitment, according to the study conducted by Lew et al BID1 .",
"Thus, it is clear that the increased grid penetration levels of solar with its inherent variability (a combination of intermittence, high-frequency and non-stationarity) poses problems with grid reliability and cost of operating the grid on various time-scales.",
"For example, day-ahead solar forecast accuracy plays a significant role in the effectiveness of Unit Commitment (UC); very-short-term solar forecasts errors due to fluctuations caused by the passing clouds lead to sudden changes in PV plant outputs that can cause strain to the grid by inducing voltage-flickers and real-time balancing issues.",
"Thus, solar power generation forecast becomes an area of paramount research, as the need for robust forecast for all timescales (weekly, day-ahead, hourly and intra-hour) is critical for effectively incorporating increasing amount of solar energy resources at a global level and contributing to the evolution of the smart grid.",
"Moreover, improving the accuracy of solar forecast is one of the lowest cost methods of efficiently integrating solar energy into the grid.The rest of the paper is organized as follows.",
"The literature is reviewed and the significant shortcomings of the current forecasting approaches are recognized in Section II.",
"Section II further introduces the capabilities of the proposed unified architecture and the novel algorithm to fill in the gap between the need to improve the forecasting techniques and the existing approaches.",
"Section III introduces the proposed unified architecture based on RNN and the training algorithms utilized for implementing the neural network.",
"Exploratory data analysis, evaluation metric and structure of input data, and the proposed algorithm are presented in Section IV.",
"Section V discusses the results and their interpretation.",
"The paper is concluded with Section VI, which also identifies the future avenue of research in this method of solar forecasting..",
"The algorithm is trained using the data for the year 2010 and 2011 from the SURFRAD observations sites in Boulder, CO; Desert Rock, NV; Fort Peck, MT; Sioux Falls, SD; Bondville, IL; Goodwin Creek, MS; and Penn State, PA.",
"The test year for each respective site was chosen to be 2009 for the purpose of benchmarking against BID28 and other previously reported results in the literature.",
"Results from the two methods proposed in this paper are presented below:",
"Short-term solar forecasting is of great importance for optimizing the operational efficiencies of smart grids, as the uncertainties in the power systems are ever-increasing, spanning from the generation arena to the demand-side domain.",
"A number of methods and applications have been developed for solar forecasting, with some level of predictive success.",
"The main limitation of the approaches developed so far is their specificity with a given temporal and/or spatial resolution.",
"For predictive analysis problems, the field of AI has become promising with the recent advances in optimization techniques, parallelism, and GPUs.",
"AI (especially deep neural networks) thrives on data, and with decreasing cost of sensor and measurement equipment, plethora of solar data is getting available.",
"Data availability is only going to keep increasing in the coming years.",
"The proposed novel Unified Recurrent Neural Network Architecture harnesses the power of AI to form a high-fidelity solar forecasting engine.",
"This architecture has the potential to be implemented as a complete forecasting system, which spans the entire spectrum of spatial and temporal horizons with a capability to take real-time data as input to produce multi-time-scale (intra-hour, hourly and day-ahead scales) predictions.",
"In addition, the proposed algorithm outperforms traditional Machine Learning methods in terms of quality of the forecast and its low forward inference time makes it a robust real-time solar forecasting engine.Although a deeper neural network will have more capacity, we experimentally observed that it leads to high variance in the model and therefore a reduced generalization power for the particular problem dealt in this paper.",
"The performance of the proposed method can be further improved in several ways including hyper-parameter tuning and architectural changes like the activation functions used or the type of layers.",
"Extension of the proposed architecture with LSTM cells and intra-hour forecasting horizons are potential future research avenues in this domain."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.1463414579629898,
0.5,
0.22727271914482117,
0.23333333432674408,
0.1304347813129425,
0.04999999701976776,
0,
0.05882352590560913,
0,
0.10810810327529907,
0.16326530277729034,
0.13333332538604736,
0.15094339847564697,
0.1090909019112587,
0.1111111044883728,
0.14705881476402283,
0.1875,
0.21276594698429108,
0.1463414579629898,
0.1249999925494194,
0.1428571343421936,
0.0952380895614624,
0.1249999925494194,
0.1818181723356247,
0.09999999403953552,
0.20408162474632263,
0.1666666567325592,
0.1538461446762085,
0.19512194395065308,
0.09302324801683426,
0.09090908616781235,
0.08695651590824127,
0.0555555522441864,
0.40909090638160706,
0.16949151456356049,
0.22499999403953552,
0.11999999731779099,
0.13636362552642822
] | ryZzs4HqM | true | [
"This paper proposes a Unified Recurrent Neural Network Architecture for short-term multi-time-horizon solar forecasting and validates the forecast performance gains over the previously reported methods"
] |
[
"The ResNet and the batch-normalization (BN) achieved high performance even when only a few labeled data are available.",
"However, the reasons for its high performance are unclear.",
"To clear the reasons, we analyzed the effect of the skip-connection in ResNet and the BN on the data separation ability, which is an important ability for the classification problem.",
"Our results show that, in the multilayer perceptron with randomly initialized weights, the angle between two input vectors converges to zero in an exponential order of its depth, that the skip-connection makes this exponential decrease into a sub-exponential decrease, and that the BN relaxes this sub-exponential decrease into a reciprocal decrease.",
"Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes.",
"These imply that the skip-connection and the BN improve the data separation ability and achieve high performance even when only a few labeled data are available.",
"The architecture of a neural network heavily affects its performance especially when only a few labeled data are available.",
"The most famous example of one such architecture is the convolutional neural network (CNN) BID6 .",
"Even when convolutional layers of CNN were randomly initialized and kept fixed and only the last fully-connected layer was trained, it achieved a competitive performance compared with the traditional CNN BID5 BID14 .",
"Recent other examples are the ResNet BID3 and the batch-normalization (BN) BID4 .",
"The ResNet and the BN are widely used in few-shot learning problems and achieved high performance BID8 BID9 .One",
"reason for the success of neural networks is that their architectures enable its feature vector to capture prior knowledge about the problem. The",
"convolutional layer of CNN enable its feature vector to capture statistical properties of data such as the shift invariance and the compositionality through local features, which present in images BID13 . However",
", effects of the skip-connection in ResNet and the BN on its feature vector are still unclear.To clear the effects of the skip-connection and the BN, we analyzed the transformations of input vectors by the multilayer perceptron, the ResNet, and the ResNet with BN. Our results",
"show that the skip-connection and the BN preserve the angle between input vectors. This preservation",
"of the angle is a desirable ability for the classification problem because the last output layer should separate points from different classes and input vectors in different classes have a large angle BID11 BID10 . Moreover, our analysis",
"shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes. These imply that the skip-connection",
"and the BN improve the data separation ability and achieve high performance even when only a few labeled data are available.",
"The ResNet and the BN achieved high performance even when only a few labeled data are available.",
"To clear the reasons for its high performance, we analyzed effects of the skip-connection in ResNet and the BN on the transformation of input vectors through layers.",
"Our results show that the skip-connection and the BN preserve the angle between input vectors, which is a desirable ability for the classification problem.",
"Moreover, our analysis shows that the preservation of the angle at initialization encourages trained neural networks to separate points from different classes.",
"These results imply that the skip-connection and the BN improve the data separation ability and achieve high performance even when only a few labeled data are available."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3888888955116272,
0.07407406717538834,
0.3255814015865326,
0.17543859779834747,
0.1538461446762085,
0.3499999940395355,
0.277777761220932,
0.24242423474788666,
0.12765957415103912,
0.27586206793785095,
0.277777761220932,
0.19999998807907104,
0.21276594698429108,
0.16326530277729034,
0.12903225421905518,
0.20408162474632263,
0.1538461446762085,
0.3888888955116272,
0.34285715222358704,
0.19512194395065308,
0.20512819290161133,
0.1538461446762085,
0.3414634168148041
] | S1xU74med4 | true | [
"The Skip-connection in ResNet and the batch-normalization improve the data separation ability and help to train a deep neural network."
] |
[
"Learning representations of data is an important issue in machine learning.",
"Though GAN has led to significant improvements in the data representations, it still has several problems such as unstable training, hidden manifold of data, and huge computational overhead.",
"GAN tends to produce the data simply without any information about the manifold of the data, which hinders from controlling desired features to generate.",
"Moreover, most of GAN’s have a large size of manifold, resulting in poor scalability.",
"In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce desired data to generate and learns a representation of the data efficiently.",
"Unlike the conventional GAN models with hidden distribution of latent space, we define the distributions explicitly in advance that are trained to generate the data based on the corresponding features by inputting the latent variables that follow the distribution.",
"As the larger scale of latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of latent space, we need to separate the process of defining the distributions explicitly and operation of generation.",
"We prove that a VAE is proper for the former and modify a loss function of VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close to the input data according to its characteristics.",
"Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process.",
"The decoder of VAE, which generates the data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN.",
"Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method to generate desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of latent space.",
"Besides, our model learns the reverse of features such as not laughing (rather frowning) only with data of ordinary and smiling facial expression.",
"Developing generative model is a crucial issue in artificial intelligence.",
"Creativity was a human proprietary, but many recent studies have attempted to make machines to mimic it.",
"There has been an extensive research on generating data and one of them, generative adversarial network (GAN), has led to significant achievements, which might be helpful to deep learning model because, in general, lots of data result in good performance BID12 .",
"Many approaches to creating data as better quality as possible have been studied: for example, variational auto-encoder (VAE) BID9 and GAN BID4 .",
"The former constructs an explicit density, resulting in an explicit likelihood which can be maximized, and the latter constructs an implicit density BID3 .",
"Both can generate data from manifold which is hidden to us so that we cannot control the kind of data that we generate.Because it is costly to structure data manually, we need not only data generation but also automatically structuring data.",
"Generative models produce only data from latent variable without any other information so that we cannot control what we want to generate.",
"To cope with this problem, the previous research generated data first and found distributions of features on latent space by investigating the model with data, since the manifold of data is hidden in generative models.",
"This latent space is deceptive for finding an area which represents a specific feature of our interest; it would Figure 1 : Examples of the manifold.",
"Left: a complex manifold which can be seen in general models, Right: a relatively simple manifold in the proposed model.",
"The midpoint M of A and B can be easily calculated in the right manifold, but not in the left one.",
"The midpoint of A and B is computed as N in the left manifold, which is incorrect.",
"take a long time even if we can find that area.",
"Besides, in the most of research, generative models had a large latent space, resulting in a low compression rate which leads to poor scalability.",
"To work out these problems, we propose a model which can generate the data whose type is what we want and learn a representation of data with a higher compression rate, as well.",
"Our model is based on VAE and GAN.",
"We pre-define distributions corresponding to each feature and modify the loss function of VAE so as to generate the data from the latent variable which follows the specific distribution according to its features.",
"However, this method makes the latent space to become a more complex multimodal distribution which contains many distributions, resulting in an instability in training the LSC-GAN.",
"We prove that this problem can be solved and even made more efficiently by using an auto-encoder model with the theorem in Section 3.",
"Although the proposed model compresses the data into small manifold, it is well-defined with Euclidean distance as shown in Fig. 1 , which compares the manifolds in general models and in our model.",
"The distance can be calculated with Euclidean distance in adjacent points but not in far points at the left manifold in Fig. 1 .",
"However, in the right manifold, we can calculate the distance between points regardless of the distance of them, where we can recognize the manifold more easily as shown in the left side.",
"Thanks to a relatively simple manifold, it can produce neutral features regardless of their location in latent space, so that all features can be said as independent to each other.",
"Our main contribution is summarized as follows.•",
"We propose a method to improve the stability of a LSC-GAN with LSC-VAE by performing the weight initialization, and prove it theoretically.•",
"We achieve conditional generation without additional parameters by controlling the latent space itself, rather than adding additional inputs like the existing model for condition generation.•",
"We propose a novel model that automatically learns the ability to process data continuously through latent space control.•",
"Finally, we achieve an efficient compression rate with LSC-GAN based on weight initialization of LSC-VAE.The rest of the paper is organized as follows. Section",
"2 reviews the related works and the proposed LSC-GAN model is illustrated in Section 3. In Section",
"4, we evaluate the performance of the proposed method with some generated data. The conclusion",
"and discussion are presented in Section 5.",
"In this paper, we address some of significant issues in generative models: unstable training, hidden manifold of data, and extensive hardware resource.",
"To generate a data whose type is what we want, we propose a novel model LSC-GAN which can control a latent space to generate the data that we want.",
"To deal with a larger scale of latent space cause by deploying various distributions in one latent space, we use the LSC-VAE and theoretically prove that it is a proper method.",
"Also, we confirm that the proposed model can generate data which we want by controlling the latent space.",
"Unlike the existing generative model, the proposed model deals with features continuously, not discretely and compresses the data efficiently.Based on the present findings, we hope to extend LSC-GAN to more various datasets such as ImageNet or voice dataset.",
"In future work, we plan to conduct more experiments with various parameters to confirm the stability of model.",
"We will also experiment by reducing the dimension of the latent space to verify that the proposed model is efficient.",
"Besides, since the encoder can project the data to the latent space according to the features inherent in data, it could be used as a classifier."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14999999105930328,
0.178571417927742,
0.2800000011920929,
0.1428571343421936,
0.28070175647735596,
0.3333333134651184,
0.19999998807907104,
0.36666667461395264,
0.1395348757505417,
0.3265306055545807,
0.3636363446712494,
0.31372547149658203,
0.20512820780277252,
0.13333332538604736,
0.1846153736114502,
0.07999999821186066,
0.0833333283662796,
0.32786884903907776,
0.23999999463558197,
0.33898305892944336,
0.18518517911434174,
0.17391303181648254,
0.2083333283662796,
0.13333332538604736,
0.09999999403953552,
0.2745097875595093,
0.24137930572032928,
0.054054051637649536,
0.28070175647735596,
0.22641508281230927,
0.22641508281230927,
0.17543859779834747,
0.2083333283662796,
0.11538460850715637,
0.25,
0,
0.2800000011920929,
0.19230768084526062,
0.4166666567325592,
0.11320754140615463,
0.13636362552642822,
0.1860465109348297,
0.1111111044883728,
0.11999999731779099,
0.3461538553237915,
0.27586206793785095,
0.2666666507720947,
0.28125,
0.21739129722118378,
0.3829787075519562,
0.31372547149658203
] | Hyg1Ls0cKQ | true | [
"We propose a generative model that not only produces data with desired features from the pre-defined latent space but also fully understands the features of the data to create characteristics that are not in the dataset."
] |
[
"With the ever increasing demand and the resultant reduced quality of services, the focus has shifted towards easing network congestion to enable more efficient flow in systems like traffic, supply chains and electrical grids.",
"A step in this direction is to re-imagine the traditional heuristics based training of systems as this approach is incapable of modelling the involved dynamics.",
"While one can apply Multi-Agent Reinforcement Learning (MARL) to such problems by considering each vertex in the network as an agent, most MARL-based models assume the agents to be independent.",
"In many real-world tasks, agents need to behave as a group, rather than as a collection of individuals.",
"In this paper, we propose a framework that induces cooperation and coordination amongst agents, connected via an underlying network, using emergent communication in a MARL-based setup.",
"We formulate the problem in a general network setting and demonstrate the utility of communication in networks with the help of a case study on traffic systems.",
"Furthermore, we study the emergent communication protocol and show the formation of distinct communities with grounded vocabulary.",
"To the best of our knowledge, this is the only work that studies emergent language in a networked MARL setting.",
"Co-existing intelligent agents affect each other in non-trivial ways.",
"Consider for example, two agents playing a modified variant of archery in two dimensions.",
"Agent A controls the speed at which the arrow is released but it can only shoot along y-axis.",
"Agent B controls wind speed along x-axis.",
"The arrow drifts along the direction of wind with a magnitude proportional to wind speed.",
"A target is specified by (x, y) coordinates and the agents must act cooperatively to shoot the target.",
"In this setup, the optimal action for agent A depends on the current policy of agent B and vice versa.",
"Any change in one agent's policy modifies the other agent's perception about the environment dynamics.",
"Formally, this issue is referred to as non-stationarity of environment in a multi-agent setup.",
"This non-stationarity makes the learning problem hard and approaches that try to independently learn optimal behavior for agents do not perform well in practice (Tan, 1993) .",
"Thus, it is important to develop models that have been tailored towards training multiple agents simultaneously.",
"In this paper, we focus on a specific multi-agent setup where the agents are connected to each other via an underlying fixed network topology.",
"We cast the problem in the multi-agent reinforcement learning (MARL) framework and assume that agents are rewarded by the environment based on their actions and their goal is to cooperatively maximize their rewards.",
"We further assume that the agents have been endowed with the ability to communicate with one another along the network edges to achieve cooperation.",
"However, the communication protocol is not fixed and the agents must learn a protocol to communicate with each other in order to maximize their rewards.",
"Communication is essential in a multi-agent setup.",
"In many practical scenarios, agents may only observe a small portion of the global environment state and they must take actions based on their local observations.",
"As discussed above, agents affect each other in non-trivial ways through their actions.",
"Thus, for achieving long term cooperation, it is essential for agents to be able to share their intents to complement the information provided by the local observation of each agent.",
"Communication provides the ability to do so to the agents.",
"Many real world problems can be cast in this framework and we provide a number of concrete examples after formally defining the problem setup in Section 2.",
"For clarity of exposition and to be more concrete, in this paper, we focus on a particular real world problem as a case study, the problem of intelligently managing traffic.",
"We present the traffic management problem as a particular instantiation of the abstract multi-agent reinforcement learning problem that we have informally defined above (see Section 2 for a formal definition).",
"In this context, the agents correspond to traffic lights and the underlying network is the network of roads.",
"Agents receive rewards from the environment based on factors like queue length at the traffic junction and must communicate with each other to cooperatively maximize their rewards, thereby ensuring a smooth flow of traffic.",
"We propose a MARL-based traffic system that allows coordination between traffic signals (agents) via:",
"(i) inter-agent communication; and",
"(ii) a cooperative reward structure.",
"At each time-step, the agents communicate with their immediate neighbours in the underlying network by broadcasting a message in the form of a discrete symbol (each message corresponds to a word, represented by a binary vector in our experiments).",
"Over time, the agents are trained to exploit this broadcasted message to coordinate with each other and maximize their rewards.",
"As the agents are trained, a language emerges between pairs of agents.",
"Since the agents learn the communication protocol themselves, our approach is different from methods that use a fixed protocol for communication, like smart-grids.",
"We empirically demonstrate the utility of communication in this setup and also investigate a cooperative reward structure over the network of traffic junctions.",
"Our model uses a query-based soft attention mechanism to help the agents come up with more complex cooperative strategies.",
"We perform extensive experimental evaluation to demonstrate that",
"(i) our method outperforms baseline approaches;",
"(ii) communication is useful,",
"(iii) communication is grounded in actions taken by the agents, and",
"(iv) the cooperative reward structure promotes communication and hence coordination.",
"In this paper, we proposed an approach to mitigate network congestion with the help of traffic networks.",
"Though extensive experiments, we demonstrated the benefit of emergent communication to optimize traffic flow over existing MARL approaches.",
"Additionally, we performed qualitative studies on the emergent language and showed that it is grounded in actions.",
"Human communication is discrete in nature and can, in general, be represented by categorical variables.",
"Additionally, discrete variables are more interpretable which makes it well suited for real life problems like traffic management, where one needs transparency.",
"However, the use of discrete latent variables render the neural network non-differentiable.",
"The Gumbel Softmax gives a differentiable sample from a discrete distribution by approximating the hard one-hot vector into a soft version.",
"The Gumbel distribution has two parameters: µ and β.",
"The standard Gumbel distribution where µ and β are 0,1 respectively has probability density function: G(0, 1) = e −(z+e −z ) .",
"Suppose, for a given agent, our model (Communicator) outputs a Multinomial distribution of message bits with logits : p = (p 1 , . . . , p d ) where d = 8.",
"These logits are functions of inputs and weights which need to be trained.",
"A simple way to draw samples from a discrete distribution would be to use the Gumbel-max trick (Jang et al., 2016) as shown,",
"Here, the noise in form of z i is independently sampled from standard Gumbel distribution and are obtained as z i = − log(− log(u i )), u i being i.",
"β is the temperature parameter, which, in our experiments, is set to 0.5.",
"As β > 0, we obtain welldefined gradients ∂m i /∂p i with respect to parameters p i .",
"Gumbel-Softmax can produce a 0.9999-hot vector instead of a hard one-hot vector depending on β.",
"Since we want the communication to be discrete, we employ the Straight-Through version of the Gumbel-Softmax estimator with a simple reformulation of the form,",
"Here,m i is the one-hot vector ofm i i.e. such thatm i = I{arg max im i = k, k ≤ k }.",
"We use binary 8-bit messages, therefore k = 1 and k is fixed during the training process.",
"Now, detach prevents the gradients from flowing through that node, hence, ∇ p i m i = ∇ p imi .",
"This makes the communication discrete in the forward pass even as the gradients flow is smooth during the backward pass.",
"The final output message of agent is given as m = (m 1 , . . . , m d ).",
"A.3",
"COMMUNICATION .",
"Action embeddings from matrix U (orthonormal matrix) of agent 0 corresponding to messages from all agents.",
"As highlighted by the red circles, the action embeddings of agent i in response to messages from neighbours U i (as highlighted in gray at the centres of the plots) are overlapped.",
"The color bar represents different agents.",
"Here, we plots are for agents 0, 1, 8, 9.",
"Similar trends are observed for rest of the agents as well."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.045454543083906174,
0.11764705181121826,
0.04878048226237297,
0.06896550953388214,
0.31578946113586426,
0.17142856121063232,
0.13793103396892548,
0.25,
0.09090908616781235,
0.23076923191547394,
0.06666666269302368,
0,
0.07407406717538834,
0.06896550953388214,
0.12903225421905518,
0.07692307233810425,
0.29629629850387573,
0.1538461446762085,
0,
0.1621621549129486,
0.24390242993831635,
0,
0.17142856121063232,
0.4000000059604645,
0.05128204822540283,
0.07692307233810425,
0.05128204822540283,
0,
0.20512820780277252,
0.09999999403953552,
0.25,
0,
0.04444444179534912,
0.07692307233810425,
0,
0.1111111044883728,
0.09302324801683426,
0,
0.0833333283662796,
0.1764705777168274,
0.23529411852359772,
0.0624999962747097,
0,
0,
0.11764705181121826,
0.1666666567325592,
0.08695651590824127,
0,
0.12903225421905518,
0.13333332538604736,
0.14814814925193787,
0.05714285373687744,
0,
0.0624999962747097,
0,
0,
0.09999999403953552,
0,
0.1111111044883728,
0.05128204822540283,
0.07692307233810425,
0,
0.07407406717538834,
0.1249999925494194,
0,
0,
0,
0.13793103396892548,
0,
0,
0.052631575614213943,
0,
0.08695651590824127,
0.0833333283662796
] | H1lUp1BYDH | true | [
"A framework for studying emergent communication in a networked multi-agent reinforcement learning setup."
] |
[
"We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.",
"Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images.",
"The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces.",
"To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set.",
"We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs.",
"We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.",
"Neural Processes (NPs; Garnelo et al., 2018b; a) are a rich class of models that define a conditional distribution p(y|x, Z, θ) over output variables y given input variables x, parameters θ, and a set of observed data points in a context set Z = {x m , y m } M m=1 .",
"A key component of NPs is the embedding of context sets Z into a representation space through an encoder Z → E(Z), which is achieved using a DEEPSETS function approximator (Zaheer et al., 2017 ).",
"This simple model specification allows NPs to be used for",
"(i) meta-learning (Thrun & Pratt, 2012; Schmidhuber, 1987) , since predictions can be generated on the fly from new context sets at test time; and",
"(ii) multi-task or transfer learning (Requeima et al., 2019) , since they provide a natural way of sharing information between data sets.",
"Moreover, conditional NPs (CNPs; Garnelo et al., 2018a) , a deterministic variant of NPs, can be trained in a particularly simple way with maximum likelihood learning of the parameters θ, which mimics how the system is used at test time, leading to strong performance (Gordon et al., 2019) .",
"Natural application areas of NPs include time series, spatial data, and images with missing values.",
"Consequently, such domains have been used extensively to benchmark current NPs (Garnelo et al., 2018a; b; Kim et al., 2019) .",
"Often, ideal solutions to prediction problems in such domains should be translation equivariant: if the data are translated in time or space, then the predictions should be translated correspondingly (Kondor & Trivedi, 2018; Cohen & Welling, 2016) .",
"This relates to the notion of stationarity.",
"As such, NPs would ideally have translation equivariance built directly into the modelling assumptions as an inductive bias.",
"Unfortunately, current NP models must learn this structure from the data set instead, which is sample and parameter inefficient as well as impacting the ability of the models to generalize.",
"The goal of this paper is to build translation equivariance into NPs.",
"Famously, convolutional neural networks (CNNs) added translation equivariance to standard multilayer perceptrons (LeCun et al., 1998; Cohen & Welling, 2016) .",
"However, it is not straightforward to generalize NPs in an analogous way:",
"(i) CNNs require data to live \"on the grid\" (e.g. image pixels form a regularly spaced grid), while many of the above domains have data that live \"off the grid\" (e.g. time series data may be observed irregularly at any time t ∈ R).",
"(ii) NPs operate on partially observed context sets whereas CNNs typically do not.",
"(iii) NPs rely on embedding sets into a finite-dimensional vector space for which the notion of equivariance with respect to input translations is not natural, as we detail in Section 3.",
"In this work, we introduce the CONVCNP, a new member of the NP family that accounts for translation equivariance.",
"1 This is achieved by extending the theory of learning on sets to include functional representations, which in turn can be used to express any translation-equivariant NP model.",
"Our key contributions can be summarized as follows.",
"(i) We provide a representation theorem for translation-equivariant functions on sets, extending a key result of Zaheer et al. (2017) to functional embeddings, including sets of varying size.",
"(ii) We extend the NP family of models to include translation equivariance.",
"(iii) We evaluate the CONVCNP and demonstrate that it exhibits excellent performance on several synthetic and real-world benchmarks.",
"We have introduced CONVCNP, a new member of the CNP family that leverages embedding sets into function space to achieve translation equivariance.",
"The relationship to",
"(i) the NP family, and",
"(ii) representing functions on sets, each imply extensions and avenues for future work.",
"Deep sets.",
"Two key issues in the existing theory on learning with sets (Zaheer et al., 2017; Qi et al., 2017a; Wagstaff et al., 2019) are",
"(i) the restriction to fixed-size sets, and",
"(ii) that the dimensionality of the embedding space must be no less than the cardinality of the embedded sets.",
"Our work implies that by considering appropriate embeddings into a function space, both issues are alleviated.",
"In future work, we aim to further this analysis and formalize it in a more general context.",
"Point-cloud models.",
"Another line of related research focuses on 3D point-cloud modelling (Qi et al., 2017a; b) .",
"While original work focused on permutation invariance (Qi et al., 2017a; Zaheer et al., 2017) , more recent work has considered translation equivariance as well (Wu et al., 2019) , leading to a model closely resembling CONVDEEPSETS.",
"The key differences with our work are the following: (i) Wu et al. (2019) Correlated samples and consistency under marginalization.",
"In the predictive distribution of CON-VCNP (Equation (2)), predicted ys are conditionally independent given the context set.",
"Consequently, samples from the predictive distribution lack correlations and appear noisy.",
"One solution is to instead define the predictive distribution in an autoregressive way, like e.g. PixelCNN++ (Salimans et al., 2017) .",
"Although samples are now correlated, the quality of the samples depends on the order in which the points are sampled.",
"Moreover, the predicted ys are then not consistent under marginalization (Garnelo et al., 2018b; Kim et al., 2019) .",
"Consistency under marginalization is more generally an issue for neural autoregressive models (Salimans et al., 2017; Parmar et al., 2018) , although consistent variants have been devised (Louizos et al., 2019) .",
"To overcome the consistency issue for CONVCNP, exchangeable neural process models (e.g. Korshunova et al., 2018; Louizos et al., 2019) may provide an interesting avenue.",
"Another way to introduce dependencies between ys is to employ latent variables as is done in neural processes (Garnelo et al., 2018b) .",
"However, such an approach only achieves conditional consistency: given a context set, the predicted ys will be dependent and consistent under marginalization, but this does not lead to a consistent joint model that also includes the context set itself.",
"For each dataset, an image is randomly sampled, the first row shows the given context points while the second is the mean of the estimated conditional distribution.",
"From left to right the first seven columns correspond to a context set with 3, 1%, 5%, 10%, 20%, 30%, 50%, 100% randomly sampled context points.",
"In the last two columns, the context sets respectively contain all the pixels in the left and top half of the image.",
"CONVCNPXL is shown for all datasets besides ZSMM, for which we show the fully translation equivariant CONVCNP."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.060606054961681366,
0.13333332538604736,
0.2790697515010834,
0.13333332538604736,
0.2142857164144516,
0.10169491171836853,
0.043478257954120636,
0.0833333283662796,
0.10256409645080566,
0.0555555522441864,
0.035087715834379196,
0.13793103396892548,
0.060606054961681366,
0.08888888359069824,
0.0952380895614624,
0.0624999962747097,
0.09999999403953552,
0.1538461446762085,
0.11428570747375488,
0.07692307233810425,
0.039215683937072754,
0.07407406717538834,
0.08888888359069824,
0.0624999962747097,
0.19512194395065308,
0,
0.19999998807907104,
0.38461539149284363,
0.12903225421905518,
0.2222222238779068,
0.11764705926179886,
0.10526315122842789,
0.07407406717538834,
0.05714285373687744,
0.1904761791229248,
0.06896550953388214,
0.06666666269302368,
0.12903225421905518,
0,
0.08888888359069824,
0.05882352590560913,
0,
0.07999999821186066,
0.0555555522441864,
0,
0,
0,
0,
0.05714285373687744,
0.08163265138864517,
0,
0.052631575614213943,
0.1249999925494194,
0.13333332538604736
] | Skey4eBYPS | true | [
"We extend deep sets to functional embeddings and Neural Processes to include translation equivariant members"
] |
[
"Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately.",
"Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance.",
"Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations.",
"We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations.",
"We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging.",
"We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity.",
"Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex.",
"The mammalian retina processes image information using a number of distinct parallel channels consisting of functionally, anatomically, and transcriptomically defined distinct cell types.",
"In the mouse, there are 14 types of bipolar cells BID8 , which provide input to 30-50 types of ganglion cells BID2 BID23 .",
"In visual cortex, in contrast, it is currently unknown whether excitatory neurons are similarly organized into functionally distinct cell types.",
"A functional classification of V1 neurons would greatly facilitate understanding its computations just like it has for the retina, because we could focus our efforts on identifying the function of a small number of cell types instead of characterizing thousands of anonymous neurons.Recent work proposed a framework for learning functional cell types from data in an unsupervised fashion while optimizing predictive performance of a model that employs a common feature space shared among many neurons BID16 .",
"The key insight in this work is that all neurons that perform the same computation but have their receptive fields at different locations, can be represented by a feature map in a convolutional network.",
"Unfortunately, this approach cannot be applied directly to neocortical areas.",
"Neurons in area V1 extract local oriented features such as edges at different orientations, and most image features can appear at arbitrary orientations -just like they can appear at arbitrary locations.",
"Thus, to define functional cell types in V1, we would like to treat orientation as a nuisance parameter (like receptive field location) and learn features independent of orientation.In the present paper, we work towards this goal.",
"While we do not answer the biological question whether there are indeed well-defined clusters of functional cell types in V1, we provide the technical foundation by extending the work of Klindt and colleagues BID16 and introducing a rotation-equivariant convolutional neural network model of V1.",
"We train this model directly on the responses of 6000 mouse V1 neurons to learn a shared feature space, whose features are independent of orientation.",
"We show that this model outperforms state-of-the-art CNNs for system identification and allows predicting V1 responses of thousands of neurons with only 16 learned features.",
"Moreover, for most neurons, pooling from only a small number of features is sufficient for accurate predictions.",
"We developed a rotation-equivariant convolutional neural network model of V1 that allows us to characterize and study V1 computation independent of orientation preference.",
"Although the visual system is not equivariant to rotation -there are known biases in the distribution of preferred orientations -, enforcing weight sharing across orientations allowed us to fit larger, more expressive models given a limited dataset.",
"While our work lays out the technical foundation, we only scratched the surface of the many biological questions that can now be addressed.",
"Future work will have to investigate the learned features in much more detail, test to what extent they generalize across recording sessions and animals, whether they are consistent across changes in the architecture andmost importantly -whether neurons in V1 indeed cluster into distinct, well-defined functional types and this organization finds any resemblance in anatomical or genetic properties BID27 of the neurons recorded."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.09302324801683426,
0.1666666567325592,
0.19354838132858276,
0.09090908616781235,
0.21052631735801697,
0.3265306055545807,
0.11764705181121826,
0.1111111044883728,
0.05714285373687744,
0.05714285373687744,
0.1818181723356247,
0.08695651590824127,
0,
0.09999999403953552,
0.12244897335767746,
0.22641509771347046,
0.20512820780277252,
0.3589743673801422,
0.06451612710952759,
0.3333333432674408,
0.08163265138864517,
0.1111111044883728,
0.1492537260055542
] | H1fU8iAqKX | true | [
"A rotation-equivariant CNN model of V1 that outperforms previous models and suggest functional groupings of V1 neurons."
] |
[
"In classic papers, Zellner (1988, 2002) demonstrated that Bayesian inference could be derived as the solution to an information theoretic functional. ",
"Below we derive a generalized form of this functional as a variational lower bound of a predictive information bottleneck objective. ",
"This generalized functional encompasses most modern inference procedures and suggests novel ones.",
"Consider a data generating process φ ∼ p(φ) from which we have some N draws that constitute our training set, x P = {x 1 , x 2 , . . . , x N } ∼ p(x|φ).",
"We can also imagine (potentially infinitely many) future draws from this same process x F = {x N +1 , . . . } ∼ p(x|φ).",
"The predictive information I(x P ; x F ) 1 gives a unique measure of the complexity of a data generating process (Bialek et al., 2001 ).",
"The goal of learning is to capture this complexity.",
"To perform learning, we form a global representation of the dataset p(θ|x P ).",
"This can be thought of as a learning algorithm, that, given a set of observations, produces a summary statistic of the dataset that we hope is useful for predicting future draws from the same process.",
"This algorithm could be deterministic or more generally, stochastic.",
"For example, imagine training a neural network on some data with stochastic gradient descent.",
"Here the training data would be x P , the test data x F and the neural network parameters would be θ.",
"Our training procedure implicitly samples from the distribution p(θ|x P ).",
"How do we judge the utility of this learned global representation?",
"The mutual information I(θ; x F ) quantifies the amount of information our representation captures about future draws.",
"2 To maximize learning we therefore aim to maximize this quantity.",
"1. We use I(x; y) for the mutual information between two random variables:",
"2. It is interesting to note that in the limit of an infinite number of future draws, I(θ; x F ) approaches I(θ; φ).",
"Therefore, the amount of information we have about an infinite number of future draws from This is, of course, only interesting if we constrain how expressive our global representation is, for otherwise we could simply retain the full dataset.",
"The amount of information retained about the observed data: I(θ; x P ) is a direct measure of our representation's complexity.",
"The bits a learner extracts from data provides upper bounds on generalization (Bassily et al., 2017) .",
"We have shown that a wide range of existing inference techniques are variational lower bounds on a single predictive information bottleneck objective.",
"This connection highlights the drawbacks of these traditional forms of inference.",
"In all cases considered in the previous section, we made two choices that loosened our variational bounds.",
"First, we approximated p(x P |θ), with a factorized approximation q(x P |θ) = i q(x i |θ).",
"Second, we approximated the future conditional marginal p(θ|x F ) = dx P p(θ|x P )p(x P |x F ) as an unconditional \"prior\".",
"Neither of these approximations is necessary.",
"For example, consider the following tighter \"prior\":",
"q(θ|x F ) ∼ dx P p(θ|x P )q(x P |x F ).",
"Here we reuse a tractable global representation p(θ|x P ) and instead create a variational approximation to the density of alternative datasets drawn from the same process: q(x P |x F ).",
"We believe this information-theoretic, representation-first perspective on learning has the potential to motivate new and better forms of inference.",
"7"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.17142856121063232,
0.32258063554763794,
0.1599999964237213,
0.09302324801683426,
0.05405404791235924,
0.15789473056793213,
0.09090908616781235,
0.2222222238779068,
0.1395348757505417,
0,
0.07407406717538834,
0,
0.0833333283662796,
0.1666666567325592,
0.13333332538604736,
0,
0.07692307233810425,
0.11428570747375488,
0.21739129722118378,
0.1818181723356247,
0.13333332538604736,
0.4117647111415863,
0.17391303181648254,
0,
0.07407406717538834,
0.0624999962747097,
0.10526315122842789,
0,
0,
0.19512194395065308,
0.1249999925494194
] | SkewFJnEtH | true | [
"Rederive a wide class of inference procedures from an global information bottleneck objective."
] |
[
"In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant.\n",
"The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information), and predicting the target.",
"In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input.",
"This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent.",
"In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication).",
"Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access.",
"In this work, we propose the variational bandwidth bottleneck, which decides for each example on the estimated value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically, whether to access the privileged input or not.",
"We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.",
"A model that generalizes effectively should be able to pick up on relevant cues in the input while ignoring irrelevant distractors.",
"For example, if one want to cross the street, one should only pay attention to the positions and velocities of the cars, disregarding their color.",
"The information bottleneck (Tishby et al., 2000) formalizes this in terms of minimizing the mutual information between the bottleneck representation layer with the input, while maximizing its mutual information with the correct output.",
"This type of input compression can improve generalization (Tishby et al., 2000) , and has recently been extended to deep parametric models, such as neural networks where it has been shown to improve generalization (Achille & Soatto, 2016; Alemi et al., 2016) .",
"The information bottleneck is generally intractable, but can be approximated using variational inference (Alemi et al., 2016) .",
"This variational approach parameterizes the information bottleneck model using a neural network (i.e., an encoder).",
"While the variational bound makes it feasible to train (approximate) information bottleneck layers with deep neural networks, the encoder in these networks -the layer that predicts the bottleneck variable distribution conditioned on the input -must still process the full input, before it is compressed and irrelevant information is removed.",
"The encoder itself can therefore fail to generalize, and although the information bottleneck minimizes mutual information with the input on the training data, it might not compress successfully on new inputs.",
"To",
"We demonstrated how the proposed variational bandwidth bottleneck (VBB) helps in generalization over the standard variational information bottleneck, in the case where the input is divided into a standard and privileged component.",
"Unlike the VIB, the VBB does not actually access the privileged input before deciding how much information about it is needed.",
"Our experiments show that the VBB improves generalization and can achieve similar or better performance while accessing the privileged input less often.",
"Hence, the VBB provides a framework for adaptive computation in deep network models, and further study applying it to domains where reasoning about access to data and computation is an exciting direction for future work.",
"Current limitation of the proposed method is that it assumes independence between standard input and the privileged input but we observe in practice assuming independence does not seem to hurt the results.",
"Future work would be to investigate how we can remove this assumption."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.10526315122842789,
0.10256409645080566,
0,
0.04255318641662598,
0.1395348757505417,
0.1818181723356247,
0.1111111044883728,
0.1538461446762085,
0.05882352590560913,
0,
0.1538461446762085,
0.0833333283662796,
0.19354838132858276,
0.13333332538604736,
0.14814814925193787,
0.25,
0.1538461446762085,
0.0624999962747097,
0.11764705181121826,
0.09090908616781235,
0,
0.07999999821186066
] | Hye1kTVFDS | true | [
"Training agents with adaptive computation based on information bottleneck can promote generalization. "
] |
[
"Breathing exercises are an accessible way to manage stress and many mental illness symptoms.",
"Traditionally, learning breathing exercises involved in-person guidance or audio recordings.",
"The shift to mobile devices has led to a new way of learning and engaging in breathing exercises as seen in the rise of multiple mobile applications with different breathing representations.",
"However, limited work has been done to investigate the effectiveness of these visual representations in supporting breathing pace as measured by synchronization.",
"We utilized a within-subjects study to evaluate four common breathing visuals to understand which is most effective in providing breathing exercise guidance.",
"Through controlled lab studies and interviews, we identified two representations with clear advantages over the others.",
"In addition, we found that auditory guidance was not preferred by all users.",
"We identify potential usability issues with the representations and suggest design guidelines for future development of app-supported breathing training."
] | [
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.05405404791235924,
0.12121211737394333,
0.20408162474632263,
0.13333332538604736,
0.9302325248718262,
0,
0.0555555522441864,
0.0952380895614624
] | Trnjb5ANow | false | [
"We utilized a within-subjects study to evaluate four paced breathing visuals common in mobile apps to understand which is most effective in providing breathing exercise guidance."
] |
[
"Current work on neural code synthesis consists of increasingly sophisticated architectures being trained on highly simplified domain-specific languages, using uniform sampling across program space of those languages for training.",
"By comparison, program space for a C-like language is vast, and extremely sparsely populated in terms of `useful' functionalities; this requires a far more intelligent approach to corpus generation for effective training.",
"We use a genetic programming approach using an iteratively retrained discriminator to produce a population suitable as labelled training data for a neural code synthesis architecture.",
"We demonstrate that use of a discriminator-based training corpus generator, trained using only unlabelled problem specifications in classic Programming-by-Example format, greatly improves network performance compared to current uniform sampling techniques.",
"Automated code synthesis is increasingly being studied as a way to lower the entry bar for nonexperts to create computer software, and to aid in generally taming the complexity of large-scale systems by allowing engineers to specify their intentions at a higher level of abstraction.",
"The approach of neural code synthesis in particular has recently gained a lot of attention, applying advances in neural networks to the problem of automated synthesis.",
"We specifically study the approach of programming by example, in which a small set of input-output examples are presented to the system to serve as a guide to the desired functionality of a program.",
"Based on an analysis of these examples the synthesis system returns a source-code program able to replicate that functionality.",
"Recent research in this field demonstrates promising results, including DeepCoder Balog et al. (2017) and Zohar & Wolf (2018) .",
"However, research to date is limited to using domain-specific languages and often linear sequential programs without conditions or loops.",
"We also take a neural-network-based approach to this problem in an attempt to gain inter-program inference across the training examples given to our system, potentially allowing the system to learn general aspects of programming to help synthesize new programs from unseen input/output examples.",
"Unlike existing recent work, however, we target a general-purpose low-level programming language for code synthesis with a much larger search space of possible programs.",
"This presents a major challenge in generating a training corpus for the neural network.",
"Where related research has used uniform sampling methods through program search space (Sun et al. (2018) Chen et al. (2017) ), or even enumerative approaches (Balog et al. (2017) ), such approaches are wholly inadequate over larger search volumes -with sparse sampling producing very poor inference results.",
"To solve this training corpus generation problem we propose a novel discriminator-based system, in which new sub-corpora are iteratively created, continually measuring their functional properties against those of the problems it is attempting to solve.",
"This process works by learning how similar the I/O mappings of generated programs are to I/O problems requested by users; by selecting programs which result in increasingly similar I/O mappings we simultaneously choose programs with similar underlying source code features, until we are able to solve I/O problems requested by users.",
"We demonstrate that the resultant training corpus is greatly superior to a conventionally generated corpus via uniform sampling, when using a more generalised programming language for synthesis.",
"We measure the performance of our approach by comparing against similar research on neural code synthesis which uses uniform or enumerative sampling for training, demonstrating that our discriminator-informed corpus generation approach far exceeds uniform sampling, by a factor of 2, in terms of find-rates.",
"We also compare against a general baseline using genetic programming (GP); this baseline produces a surprising result that GP has a broader range of programs found, although its probability of resolving any given user-provided problem is worse.",
"Our approach offers an effective way to generate a training corpus for a high-dimensional program search space, capable of finding a wide range of unseen useful programs based only on input/output examples, without any labelled training data.",
"At a high level our research also demonstrates that the structure of the training corpus provided to a neural network greatly affects its performance on general purpose code generation tasks, and we argue that it should therefore represent a core focus of the code synthesis community's efforts alongside work on neural network and language structures.",
"In the remainder of this paper we firstly assess the literature in the field, focusing on neural code synthesis and specifically its corpus generation techniques.",
"In Sec. 3 we then present the methodology we use to build our system, based on both a synthesis network and a discriminator network for corpus generation.",
"We then evaluate our approach in Sec. 4 by comparing it against traditional training corpus generation approaches for neural code synthesis.",
"[code to reproduce our results will be made open-source should this paper be accepted, and this line will be changed to the link to the repository]",
"This paper has presented a discriminator-based corpus generation technique, which iteratively seeks to generate training programs drawn from the same distribution as the programs it is attempting to solve.",
"It works without the need for labelled training data, generating its own based purely on supplied features of I/O examples and the underlying properties of the language itself.",
"We show that it is greatly superior to one which does not use a discriminator for selecting training examples.",
"Once generation has completed, our framework can also return a collated training corpus, allowing training of a single large neural network.",
"We show that this collated network is also significantly stronger, in terms of quality of trained network, to one trained using random sampling techniques.",
"Based on our results, we argue that the way in which training corpora are generated for neural program synthesis deserves significant further study -and may be of equal importance to the design of the neural network used for synthesis itself.",
"In future work we will further explore the ability of discriminator-style networks to identify specific features of code likely to be involved in solving a particular problem, as well as more advanced architectures for synthesis and discriminator networks."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3636363446712494,
0.1702127605676651,
0.4878048598766327,
0.25531914830207825,
0.2142857164144516,
0.2631579041481018,
0.09302324801683426,
0.2222222238779068,
0,
0.11428570747375488,
0.1111111044883728,
0.19999998807907104,
0.2666666507720947,
0,
0.11764705181121826,
0.07692307233810425,
0.2857142686843872,
0.2181818187236786,
0.07999999821186066,
0.3199999928474426,
0.2295081913471222,
0.19999998807907104,
0.2926829159259796,
0.2631579041481018,
0.0555555522441864,
0.1860465109348297,
0.1428571343421936,
0.277777761220932,
0.1666666567325592,
0.1538461446762085,
0.31372547149658203,
0.23529411852359772
] | rkxDon4Yvr | true | [
"A way to generate training corpora for neural code synthesis using a discriminator trained on unlabelled data"
] |
[
"Neural reading comprehension models have recently achieved impressive gener- alisation results, yet still perform poorly when given adversarially selected input.",
"Most prior work has studied semantically invariant text perturbations which cause a model’s prediction to change when it should not.",
"In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the model’s prediction does not change when it should.",
"We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously pro- duces the same answer as the original question – and with an even higher prob- ability.",
"We show that – despite comprising unanswerable questions – SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial frac- tion of errors on adversarially generated questions.",
"This indicates that current models—even where they can correctly predict the answer—rely on spurious sur- face patterns and are not necessarily aware of all information provided in a given comprehension question.",
"Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to sub- stantially decrease a model’s vulnerability to undersensitivity attacks on held out evaluation data.",
"Finally, we demonstrate that adversarially robust models gener- alise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1.",
"Neural networks can be vulnerable to adversarial input perturbations (Szegedy et al., 2013; Kurakin et al., 2016) .",
"In Natural Language Processing (NLP), which operates on discrete symbol sequences, adversarial attacks can take a variety of forms (Ettinger et al., 2017; Alzantot et al., 2018) including character perturbations (Ebrahimi et al., 2018) , semantically invariant reformulations (Ribeiro et al., 2018b; Iyyer et al., 2018b) or-specifically in Reading Comprehension (RC)-adversarial text insertions (Jia & Liang, 2017; Wang & Bansal, 2018) .",
"A model's inability to handle adversarially chosen input text puts into perspective otherwise impressive generalisation results for in-distribution test sets (Seo et al. (2017) ; Yu et al. (2018) ; ; inter alia) and constitutes an important caveat to conclusions drawn regarding a model's language understanding abilities.",
"While semantically invariant text transformations can remarkably alter a model's predictions, the converse problem of model undersensitivity is equally troublesome: a model's text input can often be drastically changed in meaning while retaining the original prediction.",
"In particular, previous works (Feng et al., 2018; Ribeiro et al., 2018a; Sugawara et al., 2018) show that even after deletion of all but a small fraction of input words, models often produce the same output.",
"However, such reduced inputs are usually unnatural to a human reader, and it is both unclear what behaviour we should expect from natural language models evaluated on unnatural text, and how to use such unnatural inputs to improve models.",
"In this work, we show that in RC undersensitivity can be probed with automatically generated natural language questions.",
"In turn, we use these to both make RC models more sensitive when they should be, and more robust in the presence of biased training data.",
"Fig.",
"1 shows an examples for a BERT LARGE model ) trained on SQuAD2.0 (Rajpurkar et al., 2018) that is given a text and a comprehension question, i.e. \"What was Fort Caroline renamed to after the Spanish attack?\" which it correctly answers as \"San Mateo\" with 98% confidence.",
"Altering this question, however, can increase model confidence for this same prediction to 99%, even though the new question is unanswerable given the same context.",
"That is, we observe an increase in model probability, despite removing relevant question information and replacing it with irrelevant content.",
"We formalise the process of finding such questions as an adversarial search in a discrete input space arising from perturbations of the original question.",
"There are two types of discrete perturbations that we consider, based on part-of-speech and named entities, with the aim of obtaining grammatical and semantically consistent alternative questions that do not accidentally have the same correct answer.",
"We find that SQuAD2.0 and NewsQA (Trischler et al., 2017 ) models can be attacked on a substantial proportion of samples, even with a limited computational adversarial search budget.",
"The observed undersensitivity correlates negatively with standard performance metrics (EM/F 1 ), suggesting that this phenomenon -where present -is a reflection of a model's lack of question comprehension.",
"When training models to defend against undersensitivity attacks with data augmentation and adversarial training, we observe that they can generalise their robustness to held out evaluation data without sacrificing standard performance.",
"Furthermore, we notice they are also more robust in a learning scenario that has dataset bias with a train/evaluation distribution mismatch, increasing their performance by up to 11%F 1 .",
"In summary, our contributions are as follows:",
"• We propose a new type of adversarial attack targeting the undersensitivity of neural RC models, and show that current models are vulnerable to it.",
"• We compare two defence strategies, data augmentation and adversarial training, and show their effectiveness at reducing undersensitivity errors on held-out data, without sacrificing standard performance.",
"• We demonstrate that robust models generalise better in a biased data scenario, improving their ability to answer questions with many possible answers when trained on questions with only one.",
"We have investigated a problematic behaviour of RC models -being overly stable in their predictions when given semantically altered questions.",
"This undersensitivity can be drastically reduced with appropriate defences, such as adversarial training, and results in more robust models without sacrificing standard performance.",
"Future work should study in more detail the causes and better defences to model undersensitivity, which we believe provides an alternative viewpoint on evaluating a model's RC capabilities.",
"5 approximate as we stratify by article 6 We also include an experiment with the setup used in (Lewis & Fan, 2019) Table 7 : Breakdown of undersensitivity error rate on NewsQA with a held-out attack space (lower is better).",
"A APPENDIX: POS PERTURBATION DETAILS.",
"We exclude these PoS-tags when computing perturbations:"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11999999731779099,
0.03999999538064003,
0.1428571343421936,
0.25,
0.24137930572032928,
0.16393442451953888,
0.23333333432674408,
0.2222222238779068,
0.043478257954120636,
0.05063290521502495,
0.11267605423927307,
0.13114753365516663,
0.06451612710952759,
0.06557376682758331,
0.1249999925494194,
0.145454540848732,
0.23076923191547394,
0.19230768084526062,
0.1599999964237213,
0.11538460850715637,
0.16129031777381897,
0.19999998807907104,
0.1071428507566452,
0.16949151456356049,
0.10344827175140381,
0,
0.25925925374031067,
0.1818181723356247,
0.20689654350280762,
0.07999999821186066,
0.15094339847564697,
0.17241378128528595,
0.17391303181648254,
0,
0.054054051637649536
] | HkgxheBFDS | true | [
"We demonstrate vulnerability to undersensitivity attacks in SQuAD2.0 and NewsQA neural reading comprehension models, where the model predicts the same answer with increased confidence to adversarially chosen questions, and compare defence strategies."
] |
[
"This paper puts forward a new text to tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters.",
"This representation is language-independent with no need of pretraining and produces an encoding with no information loss.",
"It provides an adequate description of the morphology of text, as it is able to represent prefixes, declensions, and inflections with similar vectors and are able to represent even unseen words on the training dataset.",
"Similarly, as it is compact yet sparse, is ideal for speed up training times using tensor processing libraries.",
"As part of this paper, we show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for text classification at character-level.",
"We apply two variants of CNN coupled with it.",
"Experimental results show that it drastically reduces the number of parameters to be optimized, resulting in competitive classification accuracy values in only a fraction of the time spent by one-hot encoding representations, thus enabling training in commodity hardware.",
"Document classification is one of the principal tasks addressed in the context of natural language processing BID22 .",
"It implies associating a document -or any text fragment, for that matter-with a category or label relying on their content.",
"The increasing availability of texts in digital form, especially through the Internet, has called for the development of statistical and artificial intelligence tools for automating this process.",
"Spam detectors, sentiment analysis, news archiving, among many others, demand high-quality text classifiers.There is a broad range of approaches to document classification (see BID22 BID1 BID11 BID15 ).",
"An important portion of them relies on a representation that handles words as the atomic element of text.",
"Consequently, those methods carry out their analysis through statistics of words occurrence .",
"However, the variability of words and structures belonging to a language hinders the viability of this method.",
"That is why, these models have a superior performance in specific domains and applications, where the vocabulary is or can be restricted to a relatively small number of words, possibly chosen by a specialist.",
"Furthermore, such modeling becomes specific to a language, causing the replication process in another language to be carried out from scratch .In",
"recent years, we have experienced a revolution in the machine learning with the advent of deep learning methods BID8 . The",
"development of convolutional neural networks (CNNs) BID17 coupled with the popularization of parallel computing libraries (e. g. Theano BID2 , Tensorflow BID0 , Keras BID7 , etc.) that simplify general-purpose computing on graphics processing units (GPGPU) BID21 has been successful in tackling image classification problem BID16 quickly becoming the state of the art of the field.As it could be expected, the success of deep learning and CNNs in the image classification domain has prompted the interest to extend the deep learning principles to the document classification domain. Some",
"existing methods have been updated but the clear majority are still based on the to-kenization of words and the inference of their statistics. Bag",
"of Words (BoW) BID13 and Word2vec BID20 are some of the most popular strategies.It can be argued that the replication of image classification success in the documents domain faces as main challenge the difficulty of representing text as numerical tensors.To address this issue, suggested a groundbreaking approach that considers the characters as the atomic elements of a text. In",
"particular, they represented the text as a sequence of one-hot encoded characters. This",
"encoding provides a robust, language-independent representation of texts as matrices, that are then used as inputs of different CNNs. Their",
"experimental results showed that this approach was able to attain and, in some cases, improve the state of the art results in complex text classification problems. More",
"recently, BID25 improved those results by combining CNNs with Long Short-Term Memories (LSTMs) BID10 . In spite",
"of that, the impact of this idea is hampered by the large computational demands of the approach, since its training can take days per epoch in relatively complex problems.Character-level representations have the potential of being more robust than word-level ones. On the other",
"hand, they are computationally more expensive because detecting syntactic and semantic relationships at the character-level is more expensive (Blunsom et al., 2017) . One possible",
"solution could be a word representation that incorporates the character-level information.In this paper, we propose an efficient character-level encoding of word to represent texts derived from the Tagged Huffman BID23 information compression technique. This encoding",
"takes into account the character appearance frequency in the texts in order to assign shorter codes to the most frequently used ones. This novel text",
"encoding makes the idea put forward by more computationally accessible by reducing its training requirements in terms of time and memory.The proposed encoding makes possible to represent larger portions of texts in a less sparse form, without any loss of information, while preserving the ability to encode any word, even those not present in the training dataset ones. In order to study",
"the impact of this encoding, we coupled it with two CNN architectures. The experimental",
"studies performed showed that we managed to achieve a performance similar or in some cases better than the state of the art at a fraction of the training time even if we employed a simpler hardware setup.Our main contribution is to show that this novel character-level text encoding produces a reduced input matrix, leading to a substantial reduction in training times while producing comparable or better results in terms of accuracy than the original approach by . This opens the door",
"to more complex applications, the use of devices with lower computational power and the exploration of other approaches that can be coupled with input representation.The rest of the paper is structured as follows. In the next section",
", we deal with the theoretical foundations and motivation that are required for the ensuing discussions. There we also analyze",
"the alternatives to character-level text compression that were taken into account for producing our proposal. After that, in Section",
"3, we describe the encoding procedure and the neural network architectures that will take part of the experiments. Subsequently, in Section",
"4, we replicate the experiments of in order to contrast our proposal with theirs under comparable conditions. Finally, in Section 5, we",
"provide some final remarks, conclusive comments and outline our future work directions."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1428571343421936,
0.24242423474788666,
0.21276594698429108,
0.17142856121063232,
0.1428571343421936,
0.14814814925193787,
0.1538461446762085,
0.12121211737394333,
0.10810810327529907,
0.1428571343421936,
0.1702127605676651,
0.17142856121063232,
0.06666666269302368,
0.24242423474788666,
0.20408162474632263,
0.10256409645080566,
0.1111111044883728,
0.0714285671710968,
0.10256409645080566,
0.1230769157409668,
0.12903225421905518,
0.1666666567325592,
0.0952380895614624,
0,
0.1090909019112587,
0.09756097197532654,
0.16326530277729034,
0.051282044500112534,
0.14705881476402283,
0.1249999925494194,
0.15189872682094574,
0.19999998807907104,
0.1111111044883728,
0.10810810327529907,
0.10810810327529907,
0.10526315122842789,
0.06666666269302368
] | SkYXvCR6W | true | [
"Using Compressing tecniques to Encoding of Words is a possibility for faster training of CNN and dimensionality reduction of representation"
] |
[
"Sequence prediction models can be learned from example sequences with a variety of training algorithms.",
"Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. \n",
"Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency.",
"A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. \n",
"In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation.",
"The difference between them is characterized by the reward function and two weight hyperparameters.\n",
"The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design.\n",
"The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way.",
"Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach.",
"Sequence prediction problem is ubiquitous in many applications, such as generating a sequence of words for machine translation Sutskever et al., 2014 ), text summarization (Hovy & Lin, 1998; Rush et al., 2015) , and image captioning Karpathy & Fei-Fei, 2015) , or taking a sequence of actions to complete a task.",
"In these problems (e.g., Mnih et al., 2015; Ho & Ermon, 2016) , we are often given a set of sequence examples, from which we want to learn a model that sequentially makes the next prediction (e.g., generating the next token) given the current state (e.g., the previous tokens).",
"A standard training algorithm is based on supervised learning which seeks to maximize the loglikelihood of example sequences (i.e., maximum likelihood estimation, MLE).",
"Despite the computational simplicity and efficiency, MLE training can suffer from compounding error (Ranzato et al., 2016; Ross & Bagnell, 2010) in that mistakes at test time accumulate along the way and lead to states far from the training data.",
"Another line of approaches overcome the training/test discrepancy issue by resorting to the reinforcement learning (RL) techniques (Ranzato et al., 2016; Rennie et al., 2017) .",
"For example, Ranzato et al. (2016) used policy gradient (Sutton et al., 2000) to train a text generation model with the task metric (e.g., BLEU) as reward.",
"However, RL-based approaches can face challenges of prohibitively poor sample efficiency and high variance.",
"To this end, a diverse set of methods has been developed that is in a middle ground between the two paradigms of MLE and RL.",
"For example, RAML adds reward-aware perturbation to the MLE data examples; SPG (Ding & Soricut, 2017) leverages reward distribution for effective sampling of policy gradient.",
"Other approaches such as data noising (Xie et al., 2017 ) also show improved results.",
"In this paper, we establish a unifying perspective of the above distinct learning algorithms.",
"Specifically, we present a generalized entropy regularized policy optimization framework, and show that the diverse algorithms, such as MLE, RAML, data noising, and SPG, can all be re-formulated as special cases of the framework, with the only difference being the choice of reward and the values of two weight hyperparameters (Figure 1 ).",
"In particular, we show MLE is equivalent to using a Delta-function reward which returns 1 to model samples that match training examples exactly, and −∞ to any other samples.",
"Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding brittle prediction behaviors.",
"Other algorithms essentially use various locally-relaxed rewards, joint with the model distribution, for broader (and more costly) exploration during training.",
"Besides the new views of the existing algorithms, the unifying perspective also leads to new algorithms for improved learning.",
"We develop interpolation algorithm, which, as training proceeds, gradually expands the exploration space by annealing both the reward function and the weight hyperparameters.",
"The annealing in effect dynamically interpolates among the existing algorithms from left to right in Figure 1 .",
"We conduct experiments on the tasks of text generation including machine translation and text summarization, and game imitation learning.",
"The interpolation algorithm shows superior performance over various previous methods.",
"We have presented a unifying perspective of a variety of learning algorithms for sequence prediction problems.",
"The framework is based on a generalized entropy regularized policy optimization formulation, and we show the distinct algorithms are equivalent to specifying the reward and weight hyperparameters.",
"The new consistent treatment provides systematic understanding and comparison across the algorithms, and inspires further improved learning.",
"The proposed interpolation algorithm shows consistent improvement in machine translation, text summarization, and game imitation learning.",
"Ranzato et al. (2016) made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm (Sutton et al., 2000) .",
"Policy gradient aims to maximizes the expected reward:",
"where RP G is usually a common reward function (e.g., BLEU).",
"Taking gradient w.r.t θ gives:",
"We now reveal the relation between the ERPO framework we present and the policy gradient algorithm.",
"Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (section ??), we use p θ n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity):",
"Eq [∇ θ log p θ (y)] = Ep θ q(y) p θ (y) ∇ θ log p θ (y) = 1/Z θ · Ep θ exp{R(y|y * )} · ∇ θ log p θ (y) ,",
"where Z θ = y exp{log p θ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent.",
"We can see that Eq.(11) recovers Eq.(10) if we further set R = log RP G, and omit the scaling factor Z θ . In",
"other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log RP G, α = 1, β = 0) and with Z θ omitted.",
"The MIXER algorithm (Ranzato et al., 2016) incorporates an annealing strategy that mixes between MLE and policy gradient training.",
"Specifically, given a ground-truth example y * , the first m tokens y * 1:m are used for evaluating MLE loss, and starting from step m + 1, policy gradient objective is used.",
"The m value decreases as training proceeds.",
"With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for token-level hyperparameters (λ1, λ2, λ3).",
"That is, for t < m in Eq.4 (i.e.,the first m steps), (λ1, λ2, λ3) is set to (0, 0, 1) and c = 1, namely the MLE training; while for t > m, (λ1, λ2, λ3) is set to (0.5, 0.5, 0) and c = 2."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2380952388048172,
0.09090908616781235,
0.09302324801683426,
0.19999998807907104,
0.2545454502105713,
0.0476190447807312,
0.2083333283662796,
0.19607841968536377,
0.3255814015865326,
0.17142856121063232,
0.14705881476402283,
0.19230768084526062,
0.0317460261285305,
0.07999999821186066,
0.18518517911434174,
0.09756097197532654,
0.1599999964237213,
0.07692307233810425,
0.09302324801683426,
0.19512194395065308,
0.22857142984867096,
0.07547169178724289,
0.08695651590824127,
0.08510638028383255,
0.23255813121795654,
0.0833333283662796,
0.04651162400841713,
0.3636363446712494,
0.10810810327529907,
0.2926829159259796,
0.307692289352417,
0.1860465109348297,
0.3255814015865326,
0.08163265138864517,
0,
0.04999999701976776,
0,
0.1463414579629898,
0.0624999962747097,
0,
0.038461532443761826,
0.07692307233810425,
0.17543859779834747,
0.12765957415103912,
0.1071428507566452,
0,
0.19354838132858276,
0.060606054961681366
] | B1gX8JrYPr | true | [
"An entropy regularized policy optimization formalism subsumes a set of sequence prediction learning algorithms. A new interpolation algorithm with improved results on text generation and game imitation learning."
] |
[
"Origin-Destination (OD) flow data is an important instrument in transportation studies.",
"Precise prediction of customer demands from each original location to a destination given a series of previous snapshots helps ride-sharing platforms to better understand their market mechanism.",
"However, most existing prediction methods ignore the network structure of OD flow data and fail to utilize the topological dependencies among related OD pairs.",
"In this paper, we propose a latent spatial-temporal origin-destination (LSTOD) model, with a novel convolutional neural network (CNN) filter to learn the spatial features of OD pairs from a graph perspective and an attention structure to capture their long-term periodicity.",
"Experiments on a real customer request dataset with available OD information from a ride-sharing platform demonstrate the advantage of LSTOD in achieving at least 6.5% improvement in prediction accuracy over the second best model.",
"Spatial-temporal prediction of large-scale network-based OD flow data plays an important role in traffic flow control, urban routes planning, infrastructure construction, and the policy design of ridesharing platforms, among others.",
"On ride-sharing platforms, customers keep sending requests with origins and destinations at each moment.",
"Knowing the exact original location and destination of each future trip allows platforms to prepare sufficient supplies in advance to optimize resource utilization and improve users' experience.",
"Given the destinations of prospective demands, platforms can predict the number of drivers transferring from busy to idle status.",
"Prediction of dynamic demand flow data helps ride-sharing platforms to design better order dispatch and fleet management policies for achieving the demand-supply equilibrium as well as decreased passenger waiting times and increased driver serving rates.",
"Many efforts have been devoted to developing traffic flow prediction models in the past few decades.",
"Before the rise of deep learning, traditional statistical and machine learning approaches dominate this field (Li et al., 2012; Lippi et al., 2013; Moreira-Matias et al., 2013; Shekhar & Williams, 2007; Idé & Sugiyama, 2011; Zheng & Ni, 2013) .",
"Most of these models are linear and thus ignore some important non-linear correlations among the OD flows.",
"Some other methods (Kwon & Murphy, 2000; Yang et al., 2013) further use additional manually extracted external features, but they fail to automatically extract the spatial representation of OD data.",
"Moreover, they roughly combine the spatial and temporal features when fitting the prediction model instead of dynamically modelling their interactions.",
"The development of deep learning technologies brings a significant improvement of OD flow prediction by extracting non-linear latent structures that cannot be easily covered by feature engineering.",
"(Xingjian et al., 2015; Ke et al., 2017; Zhou et al., 2018) .",
"Zhang et al. (2016; modeled the whole city are as an entire image and employed residual neural network to capture temporal closeness.",
"and also learned traffic as images but they used LSTM instead to obtain the temporal dependency.",
"Yao et al. (2018b) proposed a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relationships.",
"However, using standard convolution filters suffers from the problem that some OD flows covered by a receptive field of regular CNNs are not spatially important.",
"Graph-based neural net-works (GNN) (Kipf & Welling, 2016; Defferrard et al., 2016; Veličković et al., 2017) are proved to be powerful tools in modelling spatial-temporal network structures Li et al., 2017) .",
"However, none of these frameworks are directly applicable here since both the historical observations and responses to predict are vertex-level variables.",
"On the contrary, the OD flows we discuss in this paper are generated in the edge space by our definition.",
"The aim of this paper is to introduce a hierarchical Latent Spatial-Temporal Origin-Destination (LSTOD) prediction model to jointly extract the complex spatial-temporal features of OD data by using some well-designed CNN-based architectures.",
"Instead of modelling the dynamic OD networks as a sequence of images and applying standard convolution filters to capture their spatial information, we introduce a novel Vertex Adjacent Convolution Network (VACN) that uses an irregular convolution filter to cover the most related OD flows that share common vertecies with the target one.",
"The OD flows connected by common starting and/or ending vertexes, which may fall into different regions of the flow map, can be spatially correlated and topologically connected.",
"Moreover, for most ride-sharing platforms, a passenger is more likely to send a new request from the location where his/her last trip ends in.",
"To learn such sequential dependency, we introduce a temporal gated CNN (TGCNN) and integrate it with VACN by using the sandwich-structured STconv block in order to collectively catch the evolutionary mechanism of dynamic OD flow systems.",
"A periodically shifted attention mechanism is also used to capture the shift in the long-term periodicity.",
"Finally, the combined short-term and long-term representations are fed into the final prediction layer to complete the architecture.",
"Our contributions are summarized as follow:",
"• To the best of our knowledge, it is the first time that we propose purely convolutional structures to learn both short-term and long-term spatio-temporal features simultaneously from dynamic origin-destination flow data.",
"• We propose a novel VACN architecture to capture the graph-based semantic connections and functional similarities among correlated OD flows by modeling each OD flow map as an adjacency matrix.",
"• We design a periodically shift attention mechanism to model the long-term periodicity when using convolutional architecture TGCNN in learning temporal features.",
"• Experimental results on two real customer demand data sets obtained from a ride-sharing platform demonstrate that LSTOD outperforms many state-of-the-art methods in OD flow prediction, with 7.94% to 15.14% improvement of testing RMSE."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0,
0.14999999105930328,
0.052631575614213943,
0.30188679695129395,
0.1249999925494194,
0,
0.06666666269302368,
0.04878048226237297,
0.12121211737394333,
0.04081632196903229,
0.0624999962747097,
0,
0.060606054961681366,
0.04255318641662598,
0.05714285373687744,
0.04878048226237297,
0,
0.052631575614213943,
0.0624999962747097,
0.17142856121063232,
0.09756097197532654,
0.0952380895614624,
0.1111111044883728,
0.060606054961681366,
0.17391303181648254,
0.13333332538604736,
0.0476190410554409,
0.10256409645080566,
0.19607841968536377,
0.19354838132858276,
0.0624999962747097,
0,
0.21276594698429108,
0.2222222238779068,
0.3684210479259491,
0.11538460850715637
] | BkgZxpVFvH | true | [
"We propose a purely convolutional CNN model with attention mechanism to predict spatial-temporal origin-destination flows. "
] |
[
"Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples.",
"However, adversarially trained models often lack adversarially robust generalization on unseen testing data.",
"Recent works show that adversarially trained models are more biased towards global structure features.",
"Instead, in this work, we would like to investigate the relationship between the generalization of adversarial training and the robust local features, as the robust local features generalize well for unseen shape variation.",
"To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features on normal adversarial examples.",
"We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples.",
"To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks.",
"Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training.",
"Additionally, we demonstrate that our models capture more local features of the object on the images, aligning better with human perception.",
"Deep learning has achieved a remarkable performance breakthrough on various challenging benchmarks in machine learning fields, such as image classification (Krizhevsky et al., 2012) and speech recognition .",
"However, recent studies (Szegedy et al., 2014; Goodfellow et al., 2015) have revealed that deep neural network models are strikingly susceptible to adversarial examples, in which small perturbations around the input are sufficient to mislead the predictions of the target model.",
"Moreover, such perturbations are almost imperceptible to humans and often transfer across diverse models to achieve black-box attacks (Papernot et al., 2017; Liu et al., 2017; Lin et al., 2020) .",
"Though the emergence of adversarial examples has received significant attention and led to various defend approaches for developing robust models Dhillon et al., 2018; Wang & Yu, 2019; Zhang et al., 2019a) , many proposed defense methods provide few benefits for the true robustness but mask the gradients on which most attacks rely (Carlini & Wagner, 2017a; Athalye et al., 2018; Uesato et al., 2018; Li et al., 2019) .",
"Currently, one of the best techniques to defend against adversarial attacks (Athalye et al., 2018; Li et al., 2019 ) is adversarial training Zhang et al., 2019a) , which improves the adversarial robustness by injecting adversarial examples into the training data.",
"Among substantial works of adversarial training, there still remains a big robust generalization gap between the training data and the testing data Zhang et al., 2019b; Ding et al., 2019; Zhai et al., 2019) .",
"The robustness of adversarial training fails to generalize on unseen testing data.",
"Recent works (Geirhos et al., 2019; Zhang & Zhu, 2019) further show that adversarially trained models capture more on global structure features but normally trained models are more biased towards local features.",
"In intuition, global structure features tend to be robust against adversarial perturbations but hard to generalize for unseen shape variations, instead, local features generalize well for unseen shape variations but are hard to generalize on adversarial perturbation.",
"It naturally raises an intriguing question for adversarial training:",
"For adversarial training, is it possible to learn the robust local features , which have better adversarially robust generalization and better standard generalization?",
"To address this question, we investigate the relationship between the generalization of adversarial training and the robust local features, and advocate for learning robust local features for adversarial training.",
"Our main contributions are as follows:",
"• To our knowledge, this is the first work that sheds light on the relationship between adversarial training and robust local features.",
"Specifically, we develop a Random Block Shuffle (RBS) transformation to study such relationship by breaking up the global structure features on normal adversarial examples.",
"• We propose a novel method called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features, and then transfers the information of robust local features into the training on normal adversarial examples.",
"• To demonstrate the generality of our argument, we implement RLFAT in two currently stateof-the-art adversarial training frameworks, PGD Adversarial Training (PGDAT) and TRADES (Zhang et al., 2019a) .",
"Empirical results show consistent and substantial improvements for both adversarial robustness and standard accuracy on several standard datasets.",
"Moreover, the salience maps of our models on images tend to align better with human perception.",
"Differs to existing adversarially trained models that are more biased towards the global structure features of the images, in this work, we hypothesize that robust local features can improve the generalization of adversarial training.",
"To validate this hypothesis, we propose a new stream of adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) and implement it in currently state-of-the-art adversarial training frameworks, PGDAT and TRADES.",
"We provide strong empirical support for our hypothesis and show that the proposed methods based on RLFAT not only yield better standard generalization but also promote the adversarially robust generalization.",
"Furthermore, we show that the salience maps of our models on images tend to align better with human perception, uncovering certain unexpected benefit of the robust local features for adversarial training.",
"Our findings open a new avenue for improving adversarial training, whereas there are still a lot to explore along this avenue.",
"First, is it possible to explicitly disentangle the robust local features from the perspective of feature disentanglement?",
"What is the best way to leverage the robust local features?",
"Second, from a methodological standpoint, the discovered relationship may also serve as an inspiration for new adversarial defenses, where not only the robust local features but also the global information is taken into account, as the global information is useful for some tasks.",
"These questions are worth investigation in future work, and we hope that our observations on the benefit of robust local features will inspire more future development."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.2978723347187042,
0.1538461446762085,
0.09756097197532654,
0.290909081697464,
0.15686273574829102,
0.5806451439857483,
0.1860465109348297,
0.5416666865348816,
0.12765957415103912,
0.07407406717538834,
0.1269841194152832,
0.038461532443761826,
0.1463414579629898,
0.17241378128528595,
0.2857142686843872,
0.1538461446762085,
0.0714285671710968,
0.11320754140615463,
0.1111111044883728,
0.2916666567325592,
0.3333333432674408,
0,
0.25,
0.11764705181121826,
0.5333333015441895,
0.25,
0.23255813121795654,
0.09302324801683426,
0.2857142686843872,
0.5964912176132202,
0.3272727131843567,
0.25,
0.17391303181648254,
0.1395348757505417,
0.10810810327529907,
0.19672130048274994,
0.19230768084526062
] | H1lZJpVFvr | true | [
"We propose a new stream of adversarial training approach called Robust Local Features for Adversarial Training (RLFAT) that significantly improves both the adversarially robust generalization and the standard generalization."
] |
[
"The verification of planning domain models is crucial to ensure the safety, integrity and correctness of planning-based automated systems.",
"This task is usually performed using model checking techniques. ",
"However, directly applying model checkers to verify planning domain models can result in false positives, i.e. counterexamples that are unreachable by a sound planner when using the domain under verification during a planning task.",
"In this paper, we discuss the downside of unconstrained planning domain model verification.",
"We then propose a fail-safe practice for designing planning domain models that can inherently guarantee the safety of the produced plans in case of undetected errors in domain models. ",
"In addition, we demonstrate how model checkers, as well as state trajectory constraints planning techniques, should be used to verify planning domain models so that unreachable counterexamples are not returned.",
"Planning and task scheduling techniques are increasingly applied to real-world problems such as activity sequencing, constraint solving and resource management.",
"These processes are implemented in planning-based automated systems which are already used in space missions BID14 BID3 BID0 , search and rescue BID12 , logistics BID19 and many other domains.",
"Since the failure of such systems could have catastrophic consequences, these applications are regarded as safety-critical.",
"Therefore, verification methods that are robust, trustworthy and systematic are crucial to gain confidence in the safety, integrity and correctness of these systems.The literature is rich with studies on verification of planning systems.",
"For instance, BID17 carried out scenario-based testing and model-based validation of the remote agent that controlled the Deep Space 1 mission.",
"Another example is the verification of the safety of the autonomous science agent design that was deployed on the Earth Orbiter 1 spacecraft .A",
"typical planning system consists of a planning domain model, planning problem, planner, plan, executive, and mon-itor. Planners",
"take as an input a domain model which describes application-specific states and actions, and a problem that specifies the goal and the initial state. From these",
"inputs, a sequence of actions that can achieve the goal starting from the initial state is returned as plan. The plan is",
"then executed by an executive to change the world state to match the desired goal.Our research focuses on the verification of planning domain models wrt. safety properties",
". Domain models provide",
"the foundations for planning. They describe real-world",
"actions by capturing their pre-conditions and effects. Due to modelling errors,",
"a domain model might be inconsistent, incomplete, or inaccurate. This could cause the planner",
"to fail in finding a plan or to generate unrealistic plans that will fail to execute in the real world. Moreover, erroneous domain models",
"could lead planners to produce unsafe plans that, when executed, could cause catastrophic consequences in the real world.This paper addresses the fact that the state-of-the-art verification methods for planning domain models are vulnerable to false positive counterexamples. In particular, unconstrained verification",
"tasks might return counterexamples that are unreachable by planners. Such counterexamples can mislead designers",
"to unnecessarily restrict domain models, thereby potentially blocking valid and possibly necessary behaviours. In addition, false positive counterexamples",
"can lead verification engineers to overlook counterexamples that are reachable by planners.To overcome these deficiencies, we propose to employ planning goals as constraints during verification. Thus, we introduce goal-constrained planning",
"domain model verification, a novel concept that eliminates unreachable counterexamples per se. We formally prove that goal-constrained planning",
"domain model verification of safety properties is guaranteed to return reachable counterexamples if and only if any exist. We also demonstrate two different ways to perform",
"goal-constrained planning domain model verification, one using model checkers and the other using state trajectory constraints planning techniques. To the best of our knowledge, this work is the first",
"to recommend fail-safe planning domain model design practice; introduce the concept of goal-constrained planning domain model verification. and demonstrate how model checkers, as well as state",
"trajectory constraints planning techniques, can be used to perform goal-constrained planning domain model verification The rest of this paper is organised as follows. First, Section 2, contrasts the concepts presented here",
"with related work. Second, Section 3 discusses the problem of unreachable",
"counterexamples in planning domain model verification. Third, Section 4 proposes a design practice for planning",
"domain models that can inherently guarantee domain model safety even in the case of undetected modelling errors. A verification concept of planning domain models that avoids",
"returning unreachable counterexamples is presented in Section 5. Then, Section 6 discusses the implementation of this concept",
"on the Cave Diving planning domain using Spin and MIPS-XXL. Finally, Section 7 concludes the paper and suggests future work",
".",
"The verification of planning domain models is essential to guarantee the safety of planning-based automated systems.Unreachable counterexamples returned by unconstrained planning domain model verification techniques undermine the verification results.In this paper, we have discussed the potential deficiencies of this problem and provided an example of an unreachable counterexample form the literature.",
"We then introduced goal-constrained verification, a new concept to address this problem, which restricts the verification task to a specific goal and initial state pair.",
"This limits counterexamples to those practically reachable by a planner that is tasked with achieving the goal given the initial state.",
"Consequently, our method verifies the domain model only wrt.",
"a specific goal and initial state.",
"This is an acceptable limitation, given that planners also operate on this basis.We have demonstrated how model checkers and planning techniques can be used to perform goal-constrained planning domain model verification.",
"In addition, we have recommended an inherently safe practice for domain model design that guarantees the safety of domain models \"by construction\" in case of undetected modelling errors.",
"Goalconstrained domain model verification ensures accurate verification results and complements the inherently safe domain model design practice to generate safe and error-free planning domain models.In conclusion, the main message of this paper is that the direct application of verification algorithms to the planning domain model verification problem can return counterexamples that would never be reached by planners in real planning tasks.",
"These unreachable counterexamples can mislead the designers to perform unnecessary remediations that can be prone to errors.",
"The proposed solution is simple which makes it readily usable in practice.",
"It is also effective as formally proven in the paper.Currently, we are investigating the use of Temporally Extended Goals (TEGs) translators BID20 to perform goal-constrained domain model verification.",
"As future work, we intend to automate the proposed methods, so that they can be applied to real-world sized planning domain models.",
"Finally, we would like to perform an empirical comparison of the proposed methods to assess their scalability and performance."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.07407406717538834,
0.2857142686843872,
0.2666666507720947,
0.0952380895614624,
0.31111109256744385,
0.1111111044883728,
0.04651162400841713,
0,
0.21739129722118378,
0.05405404791235924,
0.05405404791235924,
0.1875,
0.1538461446762085,
0,
0.1904761791229248,
0,
0.0833333283662796,
0.1428571343421936,
0.12903225421905518,
0.10810810327529907,
0.18518517911434174,
0.13333332538604736,
0.22857142984867096,
0.22727271914482117,
0.29411762952804565,
0.2926829159259796,
0.19512194395065308,
0.3684210479259491,
0.21276594698429108,
0.1428571343421936,
0.32258063554763794,
0.21052631735801697,
0.12121211737394333,
0.17142856121063232,
0.27586206793785095,
0.14999999105930328,
0.1621621549129486,
0.1538461446762085,
0.08695651590824127,
0.2978723347187042,
0.09302324801683426,
0.2295081913471222,
0.1875,
0,
0.17777776718139648,
0.15789473056793213,
0.11428570747375488
] | SkxEtOGIqE | true | [
"Why and how to constrain planning domain model verification with planning goals to avoid unreachable counterexamples (false positives verification outcomes)."
] |
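The goal-constrained verification idea summarised in the record above can be sketched as a small search problem: only trajectories that start from the given initial state and still achieve the goal are admitted, so every counterexample returned is one a planner tasked with that goal could actually produce. The sketch below is a minimal illustration under assumed names (the toy `edges` domain, the `unsafe` predicate, and the state labels are all invented for this example); it is not the Spin or MIPS-XXL encoding used in the paper.

```python
from collections import deque

def goal_constrained_counterexample(initial, goal, successors, unsafe):
    """Search only among trajectories that start in `initial` and eventually
    achieve `goal`; return one that passes through an unsafe state, if any.
    Counterexamples that no planner tasked with this goal could produce are
    excluded by construction."""
    queue = deque([(initial, unsafe(initial), [initial])])
    seen = set()
    while queue:
        state, touched_unsafe, path = queue.popleft()
        if (state, touched_unsafe) in seen:
            continue
        seen.add((state, touched_unsafe))
        if state == goal and touched_unsafe:
            return path  # a reachable counterexample
        for nxt in successors(state):
            queue.append((nxt, touched_unsafe or unsafe(nxt), path + [nxt]))
    return None  # safety holds for this particular goal / initial-state pair

# Toy domain (purely illustrative): diving without preparation is the unsafe state.
edges = {"start": ["prepared", "unprepared_dive"], "prepared": ["dive"],
         "unprepared_dive": ["surface"], "dive": ["surface"], "surface": []}
print(goal_constrained_counterexample(
    "start", "surface",
    successors=lambda s: edges[s],
    unsafe=lambda s: s == "unprepared_dive"))
# ['start', 'unprepared_dive', 'surface']
```

In the planning-based variant mentioned in the record, the same effect is obtained by asking a planner for a plan that achieves the goal while satisfying a state trajectory constraint that visits the unsafe condition; if no such plan exists, the property holds for that goal and initial state.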
[
"We've seen tremendous success of image generating models these years.",
"Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes.",
"To imitate human drawing, interactions between the environment and the agent is required to allow trials.",
"However, the environment is usually non-differentiable, leading to slow convergence and massive computation.",
"In this paper we try to address the discrete nature of software environment with an intermediate, differentiable simulation.",
"We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment.",
"With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner.",
"Our primary contribution is the neural simulation of a real-world environment.",
"Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.",
"To learn drawing or writing, a person first observes (encodes) the target image visually and uses a pen or a brush to scribble (decode), to reconstruct the original image.",
"For an experienced painter, he or she foresees the consequences before taking any move, and could choose the optimal action.Stroke-based image generation is fairly different from traditional image generation problems due to the intermediate rendering program.",
"Raster-based deep learning approaches for image generation allow effective optimization using back-propagation.",
"While for stroke-based approaches, rather than learning to generate the image, it is more of learning to manipulate the painting program.An intuitive yet potentially effective way to tackle the problem is to first learn this mapping from \"stroke data\" to the resulting image with a neural network, which is analogous to learning painting experience.",
"An advantage of such a mapping over software is that it provides a continuous transformation.",
"For any painting program, the pixel values of an image are calcuated based on the coordinate points along the trajectory of an action.",
"Specific pixels are indexed by the discrete pixel coordinates, which cuts the gradient flow with respect to the action.",
"In our implementation, the indexing is done by an MLP described in Section 3.We further define \"drawing\" by giving a formal definition of \"stroke\".",
"In our context, a \"stroke\" consists of color, brush radius, and a sequence of tuples containing the coordinate and pressure of each point along the trajectory.",
"We will later describe this in detail in Section 3.Based on these ideas, we train a differentiable approximator of our painting software, which we call a \"generator\".",
"We then tested the generator by training a vanilla CNN as an agent that encodes the image into \"stroke\" data as an input for the environment.",
"Our proposed architecture, StrokeNet, basically comprises the two components, a generator and an agent.Finally, an agent is trained to write and draw pictures of several popular datasets upon the generator.",
"For the MNIST (LeCun & Cortes, 2010 ) digits, we evaluated the quality of the agent with a classifier trained solely on the original MNIST dataset, and tested the classifier on generated images.",
"We also compared our method with others to show the efficiency.",
"We explored the latent space of the agent as well.",
"For future work, there are several major improvements we want to make both to the network structure and to the algorithm.The recurrent structure adopted here is of the simplest form.",
"We use this setup because we consider drawing as a Markov process, where the current action only depends on what the agent sees, the target image and the previous frame.",
"More advanced structures like LSTM BID10 or GRU BID3 may boost the performance.",
"A stop sign can be also introduced to determine when to stop drawing, which can be useful in character reconstruction.",
"For the agent, various attention mechanism could be incorporated to help the agent focus on undrawn regions, so that smear and blurry scribbles might be prevented.Secondly, The generator and the agent were trained as two separate parts throughout the experiment.",
"We can somehow train them as a whole: during the training of the agent, store all the intermediate stroke data.",
"After a period of training, sample images from the real environment with the stroke data just collected, and train the generator with the data.",
"By doing so in an iterative manner, the generator could fit better to the current agent and provide more reliable reconstructions, while a changing generator may potentially provide more valuable overall gradients.It is also found useful to add a bit of randomness to the learning rate.",
"Since different decoders of the agent learn at different rates, stochasticity results in more appealing results.",
"For example, the agent usually fails to generalize to color images because it always sticks with one global average color (as shown in FIG0 ).",
"However, it sometimes generates appealing results with some randomness added during the training.",
"As a result of this immobility, the way agent writes is dull compared to humans and reinforcement learning agents like SPIRAL.",
"For instance, when writing the digit \"8\", the agent is simply writing \"3\" with endpoints closed.",
"Also, the agent avoids to make intersecting strokes over all datasets, although such actions are harmless and should be totally encouraged and explored!",
"Thus, random sampling techniques could be added to the decision making process to encourage bolder moves.",
"Finally, for the evaluation metrics, the naive l 2 loss can be combined with adversarial learning.",
"If paired sequential data is available, we believe adding it to training will also improve the results.",
"In this paper we bring a proof-of-concept that an agent is able to learn from its neural simulation of an environment.",
"Especially when the environment is deterministic given the action, or contains a huge action space, the proposed approach could be useful.",
"Our primary contribution is that we devised a model-based method to approximate non-differentiable environment with neural network, and the agent trained with our method converges quickly on several datasets.",
"It is able to adapt its skills to real world.",
"Hopefully such approaches can be useful when dealing with more difficult reinforcement learning problems."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.05882352590560913,
0.1395348757505417,
0.20512820780277252,
0.1621621549129486,
0.2380952388048172,
0.380952388048172,
0.0833333283662796,
0.2857142686843872,
0.25,
0.12765957415103912,
0.14035087823867798,
0.0555555522441864,
0.1818181723356247,
0.15789473056793213,
0.1395348757505417,
0.19512194395065308,
0.2083333283662796,
0.13333332538604736,
0.20408162474632263,
0.17391303181648254,
0.3199999928474426,
0.23999999463558197,
0.11428570747375488,
0.1818181723356247,
0.1599999964237213,
0.19607841968536377,
0.05405404791235924,
0.09999999403953552,
0.20338982343673706,
0.1428571343421936,
0.1395348757505417,
0.2222222238779068,
0.15789473056793213,
0.12765957415103912,
0.05405404791235924,
0.2666666507720947,
0.15789473056793213,
0.17391303181648254,
0.1538461446762085,
0.05128204822540283,
0.1463414579629898,
0.27272728085517883,
0.1860465109348297,
0.2745097875595093,
0.12121211737394333,
0
] | HJxwDiActX | true | [
"StrokeNet is a novel architecture where the agent is trained to draw by strokes on a differentiable simulation of the environment, which could effectively exploit the power of back-propagation."
] |
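The two-stage training that the StrokeNet record above describes, first fitting a differentiable "generator" to imitate the painting software and then training the agent through the frozen generator, can be sketched roughly as follows. This is a simplified illustration, not the released StrokeNet code: the canvas size, stroke encoding, network sizes, and the `render_with_software` stub are all assumptions.

```python
import torch
import torch.nn as nn

IMG, STROKE = 64 * 64, 3 * 16 + 2   # flattened canvas; 16 (x, y, pressure) points plus color/radius (assumed sizes)

generator = nn.Sequential(nn.Linear(STROKE, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())
agent = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, STROKE), nn.Tanh())

def render_with_software(strokes):            # stand-in for the real, non-differentiable painting program
    return torch.rand(strokes.shape[0], IMG)  # placeholder output

# Stage 1: supervise the generator on (stroke, rendered image) pairs sampled from the software.
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for _ in range(100):
    strokes = torch.rand(32, STROKE) * 2 - 1
    target = render_with_software(strokes)
    loss = nn.functional.mse_loss(generator(strokes), target)
    gen_opt.zero_grad(); loss.backward(); gen_opt.step()

# Stage 2: freeze the generator and train the agent to reconstruct images through it.
for p in generator.parameters():
    p.requires_grad_(False)
agent_opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
for _ in range(100):
    images = torch.rand(32, IMG)              # would be MNIST digits in the actual experiments
    recon = generator(agent(images))
    loss = nn.functional.mse_loss(recon, images)
    agent_opt.zero_grad(); loss.backward(); agent_opt.step()
```

Because the generator is differentiable, the pixel-level reconstruction loss in the second stage back-propagates through it into the agent, which is what lets the agent later transfer its strokes to the real, non-differentiable software.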
[
"We demonstrate how machine learning is able to model experiments in quantum physics.",
"Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography.",
"Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels.",
"Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it.",
"To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states.",
"In this work, we show that machine learning models can provide significant improvement over random search.",
"We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves.",
"This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.",
"In the past decade, artificial neural networks have been applied to a plethora of scientific disciplines, commercial applications, and every-day tasks with outstanding performance in, e.g., medical diagnosis, self-driving, and board games (Esteva et al., 2017; Silver et al., 2017) .",
"In contrast to standard feedforward neural networks, long short-term memory (LSTM) (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997) architectures have recurrent connections, which allow them to process sequential data such as text and speech (Sutskever et al., 2014) .",
"Such sequence-processing capabilities can be particularly useful for designing complex quantum experiments, since the final state of quantum particles depends on the sequence of elements, i.e. the experimental setup, these particles pass through.",
"For instance, in quantum optical experiments, photons may traverse a sequence of wave plates, beam splitters, and holographic plates.",
"Highdimensional quantum states are important for multiparticle and multisetting violations of local realist models as well as for applications in emerging quantum technologies such as quantum communication and error correction in quantum computers (Shor, 2000; Kaszlikowski et al., 2000) .",
"Already for three photons and only a few quantum levels, it becomes in general infeasible for humans to determine the required setup for a desired final quantum state, which makes automated design procedures for this inverse problem necessary.",
"One example of such an automated procedure is the algorithm MELVIN , which uses a toolbox of optical elements, randomly generates sequences of these elements, calculates the resulting quantum state, and then checks whether the state is interesting, i.e. maximally entangled and involving many quantum levels.",
"The setups proposed by MELVIN have been realized in laboratory experiments Erhard et al., 2018b) .",
"Recently, also a reinforcement learning approach has been applied to design new experiments (Melnikov et al., 2018) .",
"Inspired by these advances, we investigate how LSTM networks can learn quantum optical setups and predict the characteristics of the resulting quantum states.",
"We train the neural networks using millions of setups generated by MELVIN.",
"The huge amount of data makes deep learning approaches the first choice.",
"We use cluster cross validation (Mayr et al., 2016) to evaluate the models.",
"We have shown that an LSTM-based neural network can be trained to successfully predict certain characteristics of high-dimensional multiparticle quantum states from the experimental setup without any explicit knowledge of quantum mechanics.",
"The network performs well even on unseen data beyond the training distribution, proving its extrapolation capabilities.",
"This paves the way to automated design of complex quantum experiments using generative machine learning models."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
1,
0.14814814925193787,
0.0624999962747097,
0.19354838132858276,
0.0555555522441864,
0.13793103396892548,
0.260869562625885,
0.25,
0.038461536169052124,
0.03999999538064003,
0.0476190447807312,
0.1249999925494194,
0.08888888359069824,
0.1304347813129425,
0.07692307233810425,
0.13793103396892548,
0.19354838132858276,
0.11764705181121826,
0.07999999821186066,
0.07999999821186066,
0.14814814925193787,
0.1395348757505417,
0,
0.3448275923728943
] | ryxtWgSKPB | true | [
"We demonstrate how machine learning is able to model experiments in quantum physics."
] |
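As a rough illustration of the setup described in the record above, an LSTM can read a setup as a sequence of optical-element tokens and predict a characteristic of the resulting state without ever simulating it. The sketch below is an assumption-laden toy: the vocabulary size, sequence length, and the binary "interesting state" label stand in for the MELVIN-generated training data.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 40, 32, 64          # number of distinct optical elements is an assumption

class SetupClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB, padding_idx=0)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, 1)  # e.g. probability that the output state is high-dimensionally entangled

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h, _) = self.lstm(emb)      # use the last hidden state as a summary of the setup
        return torch.sigmoid(self.head(h[-1]))

model = SetupClassifier()
setups = torch.randint(1, VOCAB, (8, 12))     # 8 random setups of 12 elements (placeholder for MELVIN data)
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.functional.binary_cross_entropy(model(setups), labels)
loss.backward()
```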
[
"In this paper, we propose a nonlinear unsupervised metric learning framework to boost of the performance of clustering algorithms.",
"Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples.",
"The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD).",
"Driven by CPD, data points can get to a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering.",
"Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.\n",
"Cluster analysis has broad applications in various disciplines.",
"Grouping data samples into categories with similar features is an efficient way to summarize the data for further processing.",
"In measuring the similarities among data samples, the Euclidean distance is the most common choice in clustering algorithms.",
"Under Euclidean distance, feature components are assigned with the same weight, which essentially assumes all features are equally important across the entire data space.",
"In practice, such setup is often not optimal.",
"Learning a customized metric function from the data samples can usually boost the performance of various machine learning algorithms BID1 .",
"While metric learning has been extensively researched under supervised BID19 BID18 BID17 BID14 and semi-supervised settings BID15 BID3 BID23 BID13 , unsupervised metric learning (UML) remains a challenge, in part due to the absence of ground-truth label information to define a learning optimality.",
"In this paper, we focus on the problem of UML for clustering.As the goal of clustering is to capture the natural separations among data samples, one common practice in the existing UML solutions is to increase the data separability and make the separations more identifiable for the ensuing clustering algorithm.",
"Such separability gain can be achieved by projecting data samples onto a carefully chosen low-dimensional manifold, where geometric relationships, such as the pairwise distances, are preserved.",
"The projections can be carried out linearly, as through the Principle Component Analysis, or nonlinearly, as via manifold learning solutions.",
"Under the dimension-reduced space, clustering algorithms, such as K-means, can then be applied.Recent years have seen the developments of UML solutions exploring different setups for the lowdimensional manifolds.",
"FME ) relies on the learning of an optimum linear regression function to specify the target low-dimensional space.",
"BID0 model local sample densities of the data to estimate a new metric space, and use the learned metric as the basis to construct graphs for manifold learning.",
"Application-specific manifolds, such as Grassmann space BID6 and Wasserstein geometry BID16 , have also been studied.",
"When utilized as a separate preprocessing step, dimensionality reduction UML solutions are commonly designed without considering the ensuing clustering algorithm and therefore cannot be fine-tuned accordingly.AML takes a different approach, performing clustering and distance metric learning simultaneously.",
"The joint learning under AML is formulated as a trace maximization problem, and numerically solved through an EM-like iterative procedure, where each iteration consists of a data projection step, followed by a clustering step via kernel K-means.",
"The projection is parameterized by an orthogonal, dimension-reducing matrix.",
"A kernelized extension of AML was proposed in BID2 .",
"As the projection models are built on linear transformations, their capabilities to deal with complex nonlinear structures are limited.",
"UML solutions performing under the original input space have also been proposed.",
"SSO BID7 learns a global similarity metric through a diffusion procedure that propagates smooth metrics through the data space.",
"CPCM BID4 relies on the ratio of within cluster variance over the total data variance to obtain a linear transformation, aiming to improved data separability.",
"As the original spaces are usually high-dimensional, UML solutions in this category tend to suffer from the local minima problem.In light of the aforementioned limitations and drawbacks, we propose a new nonlinear UML framework in this paper.",
"Our solution integrates nonlinear feature transformation and manifold embedding together to improve the data separability for K-means clustering.",
"Our model can be regarded as a fully nonlinear generalization of AML, in which the transformation model is upgraded to a geometric model called Coherent Point Drifting (CPD) BID8 .",
"Data points are driven by CPD to reach a higher level of linear separability, which will be subsequently picked up by the manifold embedding component to generate well-separable sample projections.",
"At the end, K-means is applied on the transformed, dimension-reduced embeddings to produce label predictions.",
"The choice of CPD is with the consideration of its capability of generating high-order yet smooth transformations.",
"The main contributions of this paper include the following.•",
"Our proposed fully nonlinear UML solution enhances data separability through the combination of CPD-driven deformation and spectral embeddings.•",
"To the best of our knowledge, this is the first work that utilizes dense, spatial varying deformations in unsupervised metric learning.•",
"The CPD optimization has a closed-form solution, therefore can be efficiently computed.•",
"Our model outperforms state-of-the-art UML methods on six benchmark databases, indicating promising performance in many real-world applications.The rest of this paper is organized as follows. Section",
"2 describes our proposed method in detail. It includes",
"the description of CPD model, formulation of our CPD based UML, optimization strategy and the approach to kernelize our model. Experimental",
"results are presented in Section 3 to validate our solutions with both synthetic and real-world datasets. Section 4 concludes",
"this paper.",
"The proposed CPD-UML model learns a nonlinear metric and the clusters for the given data simultaneously.",
"The nonlinear metric is achieved by a globally smooth nonlinear transformation, which improves the separability of given data during clustering.",
"CPD is used as the transformation model because of its capability in deforming feature space in sophisticated yet smooth manner.",
"Evaluations on synthetic and benchmark datasets demonstrate the effectiveness of our approach.",
"Applying the proposed approach to other computer vision and machine learning problems are in the direction of our future research."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.8387096524238586,
0.2857142686843872,
0.23529411852359772,
0.2380952388048172,
0.2857142686843872,
0,
0.12903225421905518,
0.20689654350280762,
0.05714285373687744,
0,
0.5,
0.2745097875595093,
0.16326530277729034,
0.10256409645080566,
0.1249999925494194,
0.14999999105930328,
0.2666666507720947,
0.3243243098258972,
0,
0.2083333283662796,
0.1666666567325592,
0,
0.09090908616781235,
0.19354838132858276,
0.07999999821186066,
0.19999998807907104,
0.23529411852359772,
0.260869562625885,
0.25806450843811035,
0.25641024112701416,
0.19512194395065308,
0.14814814925193787,
0.1428571343421936,
0.17391303181648254,
0.1875,
0.29411762952804565,
0.07692307233810425,
0.09999999403953552,
0,
0.19999998807907104,
0.06451612710952759,
0.2857142686843872,
0.375,
0.1249999925494194,
0.1599999964237213,
0.25
] | SJu63o10b | true | [
" a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms."
] |
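A highly simplified sketch of the pipeline outlined in the record above: deform the feature space with a CPD-style smooth displacement field, spectrally embed the deformed points, and run K-means on the embedding. In the paper the displacement weights are learned jointly with the clustering; here `W` is random and the kernel width, embedding dimension, and cluster count are arbitrary assumptions, so this only illustrates the data flow.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

def cpd_deform(X, W, beta=1.0):
    """Coherent-Point-Drift-style transform: each point is displaced by a
    Gaussian-kernel-weighted combination of the rows of W, giving a smooth,
    nonlinear deformation of the whole feature space."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq / (2 * beta ** 2))           # n x n Gaussian kernel
    return X + G @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W = 0.01 * rng.normal(size=(200, 5))            # learned in the paper; random here for illustration
Z = SpectralEmbedding(n_components=2).fit_transform(cpd_deform(X, W))
labels = KMeans(n_clusters=3, n_init=10).fit_predict(Z)
```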
[
"As machine learning becomes ubiquitous, deployed systems need to be as accu- rate as they can.",
"As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program.",
"At the same time, data owners would like to trade their data for its value, without having to first give away the data itself be- fore receiving compensation.",
"It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side.",
"Escrow systems only complicate this further, adding an additional layer of trust required of both parties.",
"Currently, data owners and model owners don’t have a fair pricing system that eliminates the need to trust a third party and training the model on the data, which",
"1) takes a long time to complete,",
"2) does not guarantee that useful data is paid valuably and that useless data isn’t, without trusting in the third party with both the model and the data.",
"Existing improve- ments to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning.",
"As powerful as the methods appear to be, we show them to be impractical in our use case with real world assumptions for preserving privacy for the data owners when facing black-box models.",
"Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data.",
"This pa- per proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions.",
"We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model.",
"Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that will reveal details about the model or the data to the other party.",
"Encrypting the data or approximating the data is a lost cause for the data owner whose privacy is not guaranteed.",
"Since the model owner has greater context on similar data distribution, they can infer much information about the data without actually seeing it.",
"Because data cannot be practically secured without losing its value before being handed over, pricing and the transactional form relevant.",
"In this scheme, no data is given up until money is paid.The suggested methods for Model-Data Efficacy include influence, which explores the change in parameter with training data, model extractions, which approximate a trained network with a decision tree, and model compression techniques that are learned.",
"They all work to approximate the effect of data to the model owners without showing the exact makeup of the model.The crux of the usability of the solution lies in whether the approximation technique preserves model details, but combining secure transaction techniques is sufficient to make the approximated pricing model entirely private (beyond its output) without further approximating the effect of these pricing models, thus keeping them as accurate as the previous results in the last section.Despite the potential accuracy loss, usability is much better.",
"For any transaction reached through model approximation, we still maintain usable privacy guarantee.",
"Securing a pricing function, which is very small, is easy.",
"Enforcing by ways of contract to guarantee that a money-data transaction happens after agreeing on a price is much easier to enforce than contracts that bind within the large model owners?",
"organization, such as trusting a security audit."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1111111044883728,
0.1249999925494194,
0.13333332538604736,
0.3720930218696594,
0,
0.27272728085517883,
0.1428571343421936,
0.2790697515010834,
0.3181818127632141,
0.19607841968536377,
0.2857142686843872,
0.2448979616165161,
0.20338982343673706,
0.2083333283662796,
0.3333333432674408,
0.1904761791229248,
0.1463414579629898,
0.1904761791229248,
0.2469135820865631,
0.1764705777168274,
0.13333332538604736,
0.2857142686843872,
0.1428571343421936
] | r1ayG7WRZ | true | [
"Facing complex, black-box models, encrypting the data is not as usable as approximating the model and using it to price a potential transaction."
] |
[
"We present Line-Storm, an interactive computer system for creative performance.",
"The context we investigated was writing on paper using Line-Storm.",
"We used self-report questionnaires as part of research involving human participants, to evaluate Line-Storm.",
"Line-Storm consisted of a writing stylus and writing pad, augmented with electronics.",
"The writing pad was connected to a contact microphone, and the writing stylus had a small micro-controller board and peripherals attached to it.",
"The signals from these electronic augmentations were fed into the audio-synthesis environment Max/MSP to produce an interactive soundscape.",
"We attempted to discover whether Line-Storm enhanced a self-reported sense of being present and engaged during a writing task, and we compared Line-Storm to a non-interactive control condition.",
"After performing statistical analysis in SPSS, we were unable to support our research hypothesis, that presence and engagement were enhanced by Line-Storm.",
"Participants reported they were, on average, no more present and engaged during the experimental condition than during the control condition.",
"As creativity is subtle, and varies with person, time, context, space and so many other factors, this result was somewhat expected by us.",
"A statistically significant result of our study is that some participants responded to Line-Storm more positively than others.",
"These Preservers of Line-Storm were a group, distinct from other participants, who reported greater presence and engagement and who wrote more words with Line-Storm and during the control condition.",
"We discuss the results of our research and place Line-Storm in an artistic-technological context, drawing upon writings by Martin Heidegger when considering the nature of Line-Storm.",
"Future work includes modifying interactive components, improving aesthetics and using more miniaturized electronics, experimenting with a drawing task instead of a writing task, and collaborating with a composer of electronic music to make a more interesting, immersive, and engaging interactive soundscape for writing or drawing performance.",
"Our philosophy is that people have become frugal regarding \"joy\"!",
"How we all are becoming increasingly suspicious of all joy!",
"The desire for joy already calls itself a \"need to recuperate\" and is beginning to be ashamed of itself.",
"-Nietzsche [51] Tod Machover [47] has emphasized the need to augment existing, traditional musical instruments while ensuring these augmentations act as stimuli to the creative process, not simply as additional features.",
"One focus of this paper is to find a way to enhance human creativity.",
"Another is to observe the emergence of the work when the system is used.",
"A third, is our attempt to make something that is fun to use.",
"We have conceived, designed, constructed, evaluated, our system called Line-Storm 1 , attempting to enhance a sense of both presence and engagement in the user.",
"Only through performance with Line-Storm, does Line-Storm come into being.",
"The method of experience sampling-interrupting a person as they go through their daily activities and asking questions about their experience-has been used to find that when peoples minds are wandering, they are less happy [43] .",
"\"Be Here Now,\" a mantra popularized in the United States by, for example, Dr. Richard Alpert [18] , who became Baba Ram Dass.",
"This mantra now occurs in a leading business publication urging middle managers everywhere to \"be present\" to be a \"great leader\" [35] and presumably to reap the rewards of \"success.\"",
"Even the LSD experimentation Dass describes in Be Here Now, carried out on a small, socially acceptable scale in Silicon Valley, where tech workers \"microdose\" themselves with LSD, to enhance their creativity and improve interpersonal interactions [45] .",
"Some esoteric practices leading to creative work may conjure images of the lone painter or poet, or of a sculptor in her studio.",
"It is not only Silicon Valley technocrats, scrambling for millions and billions of dollars, who might benefit from enhancing human creativity.",
"Even now one is ashamed of resting (equated to waste of time in our mind), and prolonged reflection almost gives people a bad conscience.",
"One thinks with a watch in ones hand, while eating meals, and reading the latest news of the stock market; we live today not to miss out on anything.",
"-Nietzsche [51] Note that Nietzsche was writing well over 100 years before \"FOMO,\" or \"fear of missing out,\" became an expression related to early 21st-century smartphone users.",
"Our point is that we recognize that there are different meanings to the phrase creative work.",
"For example, billionaires and poets are not endorsing the same thing when both use the word \"creative\" or the word \"work,\" though both may praise \"creative work.\"",
"Some decry the extreme measures taken by LSD trippers in the 1960s [45] , and want to turn the drug into an effective money-making tool.",
"An irony is that creative work translates into fortunes undreamt of by poets such as Robert Frost.",
"There is a story in which Joseph Heller, author of the novel Catch-22, when told of an investment banker who had made more money last year than he might ever to be expected to make from the novel, replied that he had something the investment banker would never have: enough.",
"So, we argue that it is possible that what was good for Heller, in the anecdote, would probably not have been good for the investment banker, even when the concept of creative work is broadened to include both their endeavors.",
"Enhancing one type of creative work may not enhance the other.",
"The ecstasy of the composer remarked upon by Csikszentmihalyi [15] or of the novelist, may not be found in the same way the \"A-ha!\" of the software developer is found.",
"Our work involving Line-Storm has been an attempt to provide a ludic system for use by the creative worker.",
"Gaver [24] defines a ludic system as one that is used for its own sake, and not for some other end.",
"By attempting to increase a users sense of presence and engagement-their being here now-our hope is to provide an immersive environment in which to do creative work with a writing stylus such as the mechanical pencil we chose to use.",
"Taskscape is a complex term from Ingold's \"The Temporality of the Landscape\" [38] , which we will refer to later, when speaking of the new possibilities of a task that Line-Storm exposes, as affordances in Gibson's sense of the term [22] .",
"One of our committee members, a professor of music, suggested that our work involves the taskscape of the creative worker, working with a writing stylus and paper.",
"This taskscape includes the place, people, and objects surrounding the creative worker doing creative work.",
"The taskscape is social [38] .",
"The experience of the user of our system, and of the research participants who gave of their time to be a part of this thesis, is a social experience, and the writing tasks they performed are tasks that fit into \"an array of activities\"-which include the writing of this sentence [38] .",
"We do not know-as above, because too little work has been done in this area-whether the taskscape of a user of Line-Storm is altered in ways more conducive to writing poetry than to the drafting of microprocessor plans, for example, or vice versa.",
"Rather than devise a completely new tool, we have chosen to augment an otherwise ordinary mechanical pencil 2 .",
"Perhaps by looking 2 We could have similarly augmented a paintbrush or a pen, though the away from our goal, creative enhancement-as we must when looking at faint night-sky objects with the naked eye (Springob, 2015)-and making the use of the system the primary activity, and the work done with it a secondary activity, we think we will find ourselves progressing in that direction, whereas a direct approach would not have succeeded.",
"By giving a chance for play, we have hoped our system, Line-Storm, serves as stimulant and facilitator \"to the creative process itself,\" as Machover [47] advises.",
"We discuss our experimental results.",
"We conceived our work, initially, as an entertainment system, to be used for one's own pleasure while writing in a journal.",
"We followed that by hoping to jolt users out of complacent acquaintance with paper and pencil and present the writing tools and writing situation as if for the first time, to encourage the practice of writing and sending handwritten letters.",
"We finished the work by attempting to enhance human creativity when working with a writing stylus and paper writing pad, by increasing participants' sense of presence and engagement.",
"We found correlations and K-means clustering results that did suggest there was a group of participants who responded favorably to Line-Storm.",
"We expected that a direct approach to enhancing creativity may/would fail; we attempted to construct a system the use of which would be an end and not only a means [24] , and hoped this might lead, indirectly, to enhancing creativity by encouraging play and playfulness.",
"We provided a ludic environment for creative work, in which some users would focus on using the system, not expecting an outcome and will create their own play/outcome and accept what emerges or not-no quest, no winners, no points or gold to deliver outcome-based satisfaction.",
"In a ludic system, therefore, the creative work (outcome is what it is) and the results would be a secondary consideration and may emerge by itself, an indirect result of the use of the system.",
"We hoped participants in our experiments would find themselves \"losing themselves,\" and a group of participants did tend to lose track of time while they used or performed with Line-Storm.",
"We believe these participants became more absorbed while using the experimental system, exactly our intention.",
"Losing oneself while using the system might open one up to creative energies, thoughts, feelings, and actions that would ordinarily not occur, as Nietzsche [51] wrote."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11764705181121826,
0.11764705181121826,
0,
0.2222222238779068,
0.1538461446762085,
0,
0.06666666269302368,
0,
0,
0,
0,
0,
0,
0.0476190447807312,
0,
0,
0,
0,
0,
0.1111111044883728,
0,
0.0624999962747097,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0.05882352590560913,
0,
0,
0,
0,
0,
0,
0,
0,
0.07692307233810425,
0.07407406717538834,
0.09302325546741486,
0,
0.13793103396892548,
0,
0,
0.045454543083906174,
0.04444444179534912,
0,
0.0307692289352417,
0,
0,
0.0714285671710968,
0.052631575614213943,
0.125,
0,
0.04444444179534912,
0,
0.0555555522441864,
0,
0,
0.060606058686971664
] | 9uAyXtUuW9 | true | [
"Interactive stylus based sound incorporating writing system"
] |
[
"Language style transfer is the problem of migrating the content of a source sentence to a target style.",
"In many applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles.",
"In this paper, we present an encoder-decoder framework under this problem setting.",
"Each sentence is encoded into its content and style latent representations.",
"By recombining the content with the target style, we can decode a sentence aligned in the target domain.",
"To adequately constrain the encoding and decoding functions, we couple them with two loss functions.",
"The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style.",
"The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style.",
"We validate the effectiveness of our proposed model on two tasks: sentiment modification of restaurant reviews, and dialog response revision with a romantic style.",
"Style transfer is a long-standing research problem that aims at migrating the content of a sample from a source style to a target style.",
"Recently, great progress has been achieved by applying deep neural networks to redraw an image in a particular style BID7 BID10 BID2 BID20 BID12 .",
"However, until now very few approaches have been proposed for style transfer of natural language sentences, i.e., changing the style or genre of a sentence while preserving its semantic content.",
"For example, we would like a system that can convert a given text piece in the language of Shakespeare BID14 ; or rewrite product reviews with a favored sentiment BID17 .One",
"important issue on language style transfer is that parallel data are unavailable. For",
"instance, considering the task of rewriting a negative review of a product to its counterpart with a positive sentiment, we can hardly find paired data that describe the same content. Yet",
", many text generation frameworks require parallel data, such as the popular sequence-to-sequence model in machine translation and document summarization BID16 , and thus are not applicable under this scenario. A",
"few recent approaches have been proposed for style transfer with non-parallel data BID4 BID17 . Their",
"key idea is to learn a latent representation of the content disentangled from the source style, and then recombine it with the target style to generate the corresponding sentence.All the above approaches assume that data have only two styles, and their task is to transfer sentences from one style to the other. However",
", in many practical settings, we may deal with sentences in more than two styles. Taking",
"the review sentiment modification as an example again, some reviews may be neither positive nor negative, but in a neutral style. Moreover",
", even reviews considered negative can be categorized into more fine-grained sentiments, such as anger, sadness, boredom and other negative styles. It may",
"be beneficial if such styles are treated differently. As another",
"example, consider a chatbot with a coherent persona, which has a consistent language behavior and interaction style BID9 . A simple framework",
"for this task is to first use human dialog data to train a chatbot system, such as a retrieval-based dialog model BID11 , and then transfer the output responses with a language style transfer model so that multi-round responses always have a consistent style. Note that the human",
"dialog sentences are collected from different users, and users' expressions of the content and tones may be in different personalized characteristics. Thus the output responses",
"retrieved from the dialog model may have the language style of any user. Simply treating the responses",
"with a single style and employing the existing style transfer models would lead to unsatisfactory results. Hence, in this paper, we study",
"the setting of language style transfer in which the source data to be transferred can have various (and possibly unknown) styles.Another challenging problem in language style transfer is that the transferred sentence should preserve the content of the original sentence disentangled from its style. To tackle this problem, BID17",
"assumed the source domain and the target domain share the same latent content space, and trained their model by aligning these two latent spaces. BID4 constrained that the latent",
"content representation of the original sentence could be inferred from the transferred sentence. However, these attempts considered",
"content modification in the latent content space but not the sentence space.In this work, we develop an encoder-decoder framework that can transfer a sentence from a source domain to its counterpart in a target domain. The training data in the two domains",
"are non-parallel, and sentences in the source domain can have arbitrary language styles but those in the target domain are with a consensus style. We encode each sentence into two latent",
"representations, one for the content disentangled from the style, and the other for the style. Intuitively, if a source sentence is considered",
"having the target style with a high probability, its style representation should be close to the target style representation. To make use of this idea, we enforce that the discrepancy",
"between an arbitrary style representation and the target style representation should be consistent with the closeness of its sentence style to the target style. A cycle consistency loss is further introduced to avoid content",
"change by directly considering the transferred sentence. Its idea is that the generated sentence, when put back into the",
"encoder and recombined with its original style representation, can recover the original sentence. We evaluate the performance of our proposed model on two tasks.",
"The first is the sentiment modification task with its source domain",
"containing more than one sentiments, and the second is to transfer general dialog responses to a romantic style.",
"In this paper, we present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles.",
"Each sentence is encoded into two latent representations, one corresponding to its content disentangled from the style and and the other representing the style only.",
"By recombining the content with the target style, we can decode a sentence aligned in the target domain.",
"Specifically, we propose two loss functions, i.e., the style discrepancy loss and the cycle consistency loss, to adequately constrain the encoding and decoding functions.",
"The style discrepancy loss is to enforce a properly encoded style representation while the cycle consistency loss is used to ensure that the style-transferred sentences can be transferred back to their original sentences.",
"Experimental results on two tasks demonstrate that our proposed method outperforms the state-of-the-art style transfer method BID17 We randomly select 200 test samples from Yelp and perform human evaluations on four aspects of the results: (1) content: estimates if the content of an input sentence is preserved in a transferred sentence; content rating has 0 (changed), 1 (synonym substitution or partially changed), and 2 (unchanged); (2) sentiment: estimates if the sentiment of a transferred sentence is consistent with the target sentiment; sentiment rating has 0 (unchanged and wrong), 1 (changed but wrong), 2 (correct); (3) fluency: estimates the fluency of transferred sentences; fluency is rated from 1 (unreadable) to 4 (perfect); (4) overall: estimates the overall quality of transferred sentences; overall rating ranges from 1 (failed) to 4 (perfect).We",
"hired five annotators and averaged their evaluations. TAB0",
"shows results on Yelp when the source domain contains not only negative sentences but also 150k positive sentences (row 3 in TAB0 ), and TAB0 shows results on Yelp when the target domain contains only 100k positive sentences ( row 1 in TAB3 ). As can",
"be seen, our model is better in terms of sentiment accuracy and overall quality, which is consistent with the automatic evaluation results."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2222222238779068,
0.2380952388048172,
0.24242423474788666,
0.12121211737394333,
0.10810810327529907,
0.1621621549129486,
0.1428571343421936,
0.1818181723356247,
0.2666666507720947,
0.1904761791229248,
0.08695651590824127,
0.19230768084526062,
0.15686273574829102,
0.17142856121063232,
0.16326530277729034,
0.07843136787414551,
0.2702702581882477,
0.21875,
0.10810810327529907,
0.13636362552642822,
0.09090908616781235,
0.0624999962747097,
0.2926829159259796,
0.27586206793785095,
0.13636362552642822,
0.21621620655059814,
0.1860465109348297,
0.29999998211860657,
0.13636362552642822,
0.10810810327529907,
0.2181818187236786,
0.3265306055545807,
0.25,
0.21739129722118378,
0.25,
0.05128204822540283,
0.27272728085517883,
0.1818181723356247,
0.1538461446762085,
0.8936170339584351,
0.1395348757505417,
0.10810810327529907,
0.13636362552642822,
0.0833333283662796,
0.12962962687015533,
0.06666666269302368,
0.1111111044883728,
0.23255813121795654
] | B1NKuC6SG | true | [
"We present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles."
] |
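The two losses described in the record above can be sketched, very loosely, on fixed-size sentence embeddings: a style discrepancy term that ties the distance between a sentence's style code and the target style code to how far a classifier thinks the sentence is from the target style, and a cycle term that re-encodes the transferred sentence and asks it to reconstruct the original when recombined with the original style. The module shapes, the linear encoders and decoder, and the exact functional form of both losses below are assumptions; the actual model is a sequence-to-sequence network.

```python
import torch
import torch.nn as nn

D, C, S = 64, 32, 8                        # sentence-embedding, content, and style sizes (assumed)
enc_c, enc_s = nn.Linear(D, C), nn.Linear(D, S)
dec = nn.Linear(C + S, D)                  # decodes a (content, style) pair back to a sentence embedding
style_star = nn.Parameter(torch.zeros(S))  # learned representation of the target style

def losses(x, p_target):
    """x: batch of sentence embeddings; p_target: classifier prob. that x already has the target style."""
    c, s = enc_c(x), enc_s(x)
    transferred = dec(torch.cat([c, style_star.expand(len(x), -1)], dim=1))
    # Style discrepancy: sentences unlikely to be in the target style should sit far from style_star.
    discrepancy = ((s - style_star) ** 2).sum(dim=1)
    style_loss = ((discrepancy - (1 - p_target)) ** 2).mean()
    # Cycle consistency: re-encode the transferred sentence, recombine with the original style, recover x.
    c_back = enc_c(transferred)
    cycle_loss = ((dec(torch.cat([c_back, s], dim=1)) - x) ** 2).mean()
    return style_loss + cycle_loss

x = torch.randn(16, D)
p_target = torch.rand(16)
losses(x, p_target).backward()
```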
[
"In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs).",
"Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors.",
"This downside originates from an invariance that cancels out in the global map.",
"Here, we prove that our loss function eliminates this issue, i.e. the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.",
"For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss.",
"We report numerical results on both synthetic simulations, and a real-data PCA experiment on MNIST (i.e., a 60,000 x784 matrix), demonstrating our approach to be practically applicable and rectify previous LAEs' downsides.",
"Ranking among the most widely-used and valuable statistical tools, Principal Component Analysis (PCA) represents a given set of data within a new orthogonal coordinate system in which the data are uncorrelated and the variance of the data along each orthogonal axis is successively ordered from the highest to lowest.",
"The projection of data along each axis gives what are called principal components.",
"Theoretically, eigendecomposition of the covariance matrix provides exactly such a transformation.",
"For large data sets, however, classical decomposition techniques are infeasible and other numerical methods, such as least squares approximation schemes, are practically employed.",
"An especially notable instance is the problem of dimensionality reduction, where only the largest principal components-as the best representative of the data-are desired.",
"Linear autoencoders (LAEs) are one such scheme for dimensionality reduction that is applicable to large data sets.",
"An LAE with a single fully-connected and linear hidden layer, and Mean Squared Error (MSE) loss function can discover the linear subspace spanned by the principal components.",
"This subspace is the same as the one spanned by the weights of the decoder.",
"However, it failure to identify the exact principal directions.",
"This is due to the fact that, when the encoder is transformed by some matrix, transforming the decoder by the inverse of that matrix will yield no change in the loss.",
"In other words, the loss possesses a symmetry under the action of a group of invertible matrices, so that directions (and orderings/permutations thereto) will not be discriminated.",
"The early work of Bourlard & Kamp (1988) and Baldi & Hornik (1989) connected LAEs and PCA and demonstrated the lack of identifiability of principal components.",
"Several methods for neural networks compute the exact eigenvectors (Rubner & Tavan, 1989; Xu, 1993; Kung & Diamantaras, 1990; Oja et al., 1992) , but they depend on either particular network structures or special optimization methods.",
"It was recently observed (Plaut, 2018; Kunin et al., 2019 ) that regularization causes the left singular vectors of the decoder to become the exact eigenvectors, but recovering them still requires an extra decomposition step.",
"As Plaut (2018) point out, no existent method recovers the eigenvectors from an LAE in an optimization-independent way on a standard network -this work fills that void.",
"Moreover, analyzing the loss surface for various architectures of linear/non-linear neural networks is a highly active and prominent area of research (e.g. Baldi & Hornik (1989) ; Kunin et al. (2019) ; Pretorius et al. (2018) ; Frye et al. (2019) ).",
"Most of these works extend the results of Baldi & Hornik (1989) for shallow LAEs to more complex networks.",
"However, most retain the original MSE loss, and they prove the same critical point characterization for their specific architecture of interest.",
"Most notably Zhou & Liang (2018) extends the results of Baldi & Hornik (1989) to deep linear networks and shallow RELU networks.",
"In contrast in this work we are going after a loss with better loss surface properties.",
"We propose a new loss function for performing PCA using LAEs.",
"We show that with the proposed loss function, the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.",
"The idea is simple: for identifying p principal directions we build up a total loss function as a sum of p squared error losses, where the i th loss function identifies only the first i principal directions.",
"This approach breaks the symmetry since minimizing the first loss results in the first principal direction, which forces the second loss to find the first and the second.",
"This constraint is propagated through the rest of the losses, resulting in all p principal components being identified.",
"For the new loss we prove that all local minima are global minima.",
"Consequently, the proposed loss function has both theoretical and practical implications.",
"Theoretically, it provides better understanding of the loss surface.",
"Specifically, we show that any critical point of our loss L is a critical point of the original MSE loss but not vice versa, and conclude that L eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance).",
"Given that the set of critical points of L is a subset of critical points of MSE loss, many of the previous work on loss surfaces of more complex networks likely extend.",
"In light of the removal of undesirable global minima through L, examining more complex networks is certainly a very promising direction.",
"As for practical consequences, we show that the loss and its gradients can be compactly vectorized so that their computational complexity is no different from the MSE loss.",
"Therefore, the loss L can be used to perform PCA/SVD on large datasets using any method of optimization such as Stochastic Gradient Descent (SGD).",
"Chief among the compellingly reasons to perform PCA/SVD using this method is that, in recent years, there has been unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images).",
"The loss function we offer is attractive in terms of parallelizability and distributability, and does not prescribe any single specific algorithm or implementation, so stands to continue to benefit from the arms race between SGD and its competitors.",
"More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance).",
"The result is that the loss allows for PCA/SVD computation as single optimization layer, akin to an instance of a fully differentiable building block in a NN pipeline Amos & Kolter (2017) , potentially as part of a much larger network.",
"In this paper, we have introduced a loss function for performing principal component analysis and linear regression using linear autoencoders.",
"We have proved that the optimizing with the given loss results in the decoder matrix converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix.",
"We have also demonstrated the claims on a synthetic data set of random samples drawn from a multivariate normal distribution and on MNIST data set.",
"There are several possible generalizations of this approach we are currently working on.",
"One is improving performance when the corresponding eigenvalues of two principal directions are very close and another is generalization of the loss for tensor decomposition.",
"Before we present the proof for the main theorems, the following two lemmas introduce some notations and basic relations that are required for the proofs.",
"Lemma 2.",
"The constant matrices T p ∈ R p×p and S p ∈ R p×p are defined as",
"Clearly, the diagonal matrix T p is positive definite.",
"Another matrix that will appear in the formula- The following properties of Hadamard product and matrices T p and S p are used throughout:"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3529411852359772,
0.20512820780277252,
0.0714285671710968,
0.307692289352417,
0.13636362552642822,
0.043478257954120636,
0.07407406717538834,
0,
0,
0,
0,
0.1875,
0.20512820780277252,
0,
0.0833333283662796,
0.09999999403953552,
0.10256409645080566,
0.0555555522441864,
0.12244897335767746,
0.08163265138864517,
0.09756097197532654,
0.07999999821186066,
0.060606054961681366,
0.05714285373687744,
0.05714285373687744,
0.13333332538604736,
0.38461539149284363,
0.3529411852359772,
0.13636362552642822,
0.05882352590560913,
0,
0.2222222238779068,
0.1538461446762085,
0.0833333283662796,
0.07999999821186066,
0.10256409645080566,
0,
0.14999999105930328,
0.05128204822540283,
0.0714285671710968,
0.07999999821186066,
0.10256409645080566,
0.11538460850715637,
0.29411762952804565,
0.3243243098258972,
0,
0,
0.10810810327529907,
0.1111111044883728,
0,
0,
0.05405404791235924
] | ByeVWkBYPH | true | [
"A new loss function for PCA with linear autoencoders that provably yields ordered exact eigenvectors "
] |
[
"Users have tremendous potential to aid in the construction and maintenance of knowledges bases (KBs) through the contribution of feedback that identifies incorrect and missing entity attributes and relations.",
"However, as new data is added to the KB, the KB entities, which are constructed by running entity resolution (ER), can change, rendering the intended targets of user feedback unknown–a problem we term identity uncertainty.",
"In this work, we present a framework for integrating user feedback into KBs in the presence of identity uncertainty.",
"Our approach is based on having user feedback participate alongside mentions in ER.",
"We propose a specific representation of user feedback as feedback mentions and introduce a new online algorithm for integrating these mentions into an existing KB.",
"In experiments, we demonstrate that our proposed approach outperforms the baselines in 70% of experimental conditions.",
"Structured knowledge bases (KBs) of entities and relations are often incomplete and noisy, whether constructed by hand or automatically.",
"For example, it has been reported that 71% of people in Freebase are missing a place of birth attribute and 75% have no known nationality BID5 .",
"Similarly, while YAGO2 is estimated to be about 95% accurate on facts extracted from Wikipedia, this translates to roughly 5.7 million incorrect facts involving 2.6 million entities 1 BID11 .",
"The vast research in cleaning and correction of databases is further evidence of the permeation of errors throughout KB construction in multiple domains BID5 ,b, Wang et al., 2015 .As",
"the primary consumers of KBs, human users have significant potential to aid in KB construction and maintenance. From",
"a user's standpoint, a KB contains a set of entities, each entity possessing attributes and optionally participating in relationships with other entities. Thus",
", KB errors manifest as spurious and missing attributes and relationships. However",
", the data that gives rise to a KB is a collection of raw evidence, which can be understood as mentions that require clustering by entity resolution (ER) into a set of inferred entities. The attributes",
"and relations of the inferred KB entities with which the user interacts are drawn from this underlying clustering of the mentions. Therefore, the",
"t = 0 t = 1 t = 2 c a u s e s s p l i t",
"Tables 1a and 1b contain the results of the expertise and title experiments, respectively.",
"Each table reports the paired t-statistic between each baseline method and our proposed Example feedback is shown for a concise target.",
"The packaging and payload for authorship FM would contain the two titles mentioned with positive and negative weights respectively.",
"approach (FM), under detailed and concise feedback generation schemes, with respect to the number of pieces of feedback required to discover the ground-truth partition of the mentions.",
"Each row represents a canopy in which the experiment is performed, and each column corresponds to a baseline method and feedback generation setting.",
"Each cell contains the difference between the mean number of rounds required by the FM approach and a baseline approach to discover the ground-truth partition (higher is better).",
"Positive numbers are bolded; asterisks (*) indicate statistical significance (p < 0.05) and two asterisks (**) indicate statistical significance (p < 0.01).",
"Rows are omitted if the initial partition of the mentions, constructed by the OG algorithm and subject to no user feedback, is correct.The paired-t statistics compare our proposed feedback representation (FM) to the three baseline feedback representations.",
"We find that FM outperforms pack in both the detailed and concise settings of Experiment I on all but two of the canopies.",
"In 7 out of 14 canopies, the results are statistically significant.",
"These results underscore the importance of using only certain attributes (stored in the packaging) during the initial nearest neighbor search.",
"We hypothesize that storing shared attributes in the payload is especially important because otherwise they can interfere with initial routing.",
"When feedback is made with respect to attributes that are not shared, as in Experiment II, separating packaging and payload is less important.",
"This is evidenced by the pack approach slightly outperforming FMs in the detailed setting, but never significantly.",
"FMs generally outperform pack in the concise setting.",
"We hypothesize that this is a result of better initial placement of the feedback in the tree by the OG algorithm.In comparing, FM and assign we find that our proposed approach typically performs better in Experiment II while the baseline performs better in Experiment I.",
"We note that the feedback in Experiment I is more ambiguous than Experiment II (because expertise is a shared attribute).",
"We hypothesize that assign's better performance in Experiment I is due to the baseline's approach of deleting feedback to mitigate errors caused by identity uncertainty with respect to user feedback.",
"We note that this agrees with the observation Table 1 : Paired-t statistic.",
"Each cell represents that difference in mean number of feedback-rounds required to discover the ground-truth entities over 25 runs between a baseline, denoted by the column heading, and our proposed approach (FM).",
"Positive numbers indicate that FM requires fewer rounds of feedback than its competitor (larger numbers are better).",
"Two asterisks (**) indicates that the statistic is significant at a 0.01 significance level; one asterisk indicates statistical significance at the 0.05 level.",
"The mcguire j canopy is excluded from Tables 1a and 1b and the robinson h canopy is excluded from Table 1b since in these canopies, either: 0 or 1 edits are required to discover the ground-truth entities across baselines.that FM generally outperforms assign-m in both experiments, in that assign-m is similar to the assign strategy but never deletes feedback.",
"This work presents a framework for reasoning about user feedback under identity uncertainty during KB construction.",
"We advocate representing user feedback as feedback mentions that participate in ER alongside standard mentions.",
"Our feedback mentions are endowed with a packaging-used to identify similar mentions during ER-and a payload-that is used to add missing attributes to inferred entities, correct mistakes and influence future ER decision.",
"We give a hierarchical model of inferred entities and present the OG algorithm for performing online ER amongst standard and feedback mentions.",
"In experiments, we show that our approach often outperforms baseline approaches in terms of efficiency with respect to recovering the ground-truth partition in ER.",
"Our work is a foundational step in addressing a significant and under-explored problem in automatic KB construction whose solution could improve the accuracy and efficacy of integrating expressive user feedback with KB content."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.1463414579629898,
0.16326530277729034,
0.514285683631897,
0.20689654350280762,
0.2631579041481018,
0.0624999962747097,
0.11764705181121826,
0.09756097197532654,
0,
0.045454539358615875,
0.05882352590560913,
0.10810810327529907,
0,
0.0416666604578495,
0.05714285373687744,
0.06896550953388214,
0,
0.1621621549129486,
0.05882352590560913,
0.10810810327529907,
0.1621621549129486,
0.04999999701976776,
0,
0.08163265138864517,
0.05405404791235924,
0,
0.05882352590560913,
0.0555555522441864,
0.10526315122842789,
0.1249999925494194,
0.0833333283662796,
0.11764705181121826,
0.1764705777168274,
0.23255813121795654,
0,
0.08510638028383255,
0.0624999962747097,
0.0555555522441864,
0.06451612710952759,
0.5625,
0.20689654350280762,
0.09090908616781235,
0.1621621549129486,
0.05128204822540283,
0.2222222238779068
] | SygLHbcapm | true | [
"This paper develops a framework for integrating user feedback under identity uncertainty in knowledge bases. "
] |
[
"Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance.",
"Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem.",
"Solving these problems is computationally demanding and has limited applicability for some models such as deep networks.",
"In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training.",
"We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier.",
"This approach allows us to model naturally the detectability constrains that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning.",
"Our experimental evaluation shows the effectiveness of our attack to compromise machine learning classifiers, including deep networks.",
"Despite the advancements and the benefits of machine learning, it has been shown that learning algorithms are vulnerable and can be the target of attackers, who can gain a significant advantage by exploiting these vulnerabilities (Huang et al., 2011) .",
"At training time, learning algorithms are vulnerable to poisoning attacks, where small fractions of malicious points injected in the training set can subvert the learning process and degrade the performance of the system in an indiscriminate or targeted way.",
"Data poisoning is one of the most relevant and emerging security threats in applications that rely upon the collection of large amounts of data in the wild (Joseph et al., 2013) .",
"Some applications rely on the data from users' feedback or untrusted sources of information that often collude towards the same malicious goal.",
"For example, in IoT environments sensors can be compromised and adversaries can craft coordinated attacks manipulating the measurements of neighbour sensors evading detection (Illiano et al., 2016) .",
"In many applications curation of the whole training dataset is not possible, exposing machine learning systems to poisoning attacks.",
"In the research literature optimal poisoning attack strategies have been proposed against different machine learning algorithms (Biggio et al., 2012; Mei & Zhu, 2015; Muñoz-González et al., 2017; Jagielski et al., 2018) , allowing to assess their performance in worst-case scenarios.",
"These attacks can be modelled as a bi-level optimization problem, where the outer objective represents the attacker's goal and the inner objective corresponds to the training of the learning algorithm with the poisoned dataset.",
"Solving these bi-level optimization problems is challenging and can be computationally demanding, especially for generating poisoning points at scale.",
"This limits its applicability against some learning algorithms such as deep networks or where the training set is large.",
"In many cases, if no detectability constraints are considered, the poisoning points generated are outliers that can be removed with data filtering (Paudice et al., 2018a) .",
"Furthermore, such attacks are not realistic as real attackers would aim to remain undetected in order to be able to continue subverting the system in the future.",
"As shown in (Koh et al., 2018) , detectability constraints for these optimal attack strategies can be modelled, however they further increase the complexity of the attack, limiting even more the application of these techniques.",
"Taking an entirely different and novel approach, in this paper we propose a poisoning attack strategy against machine learning classifiers with Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) .",
"This allows us to craft poisoning points in a more systematic way, looking for regions of the data distribution where the poisoning points are more influential and, at the same time, difficult to detect.",
"Our proposed scheme, pGAN, consists on three components: generator, discriminator and target classifier.",
"The generator aims to generate poisoning points that maximize the error of the target classifier but minimize the discriminator's ability to distinguish them from genuine data points.",
"The classifier aims to minimize some loss function evaluated on a training dataset that contains a fraction of poisoning points.",
"As in a standard GAN, the problem can be formulated as a minimax game.",
"pGAN allows to systematically generate adversarial training examples , which are similar to genuine data points but that can degrade the performance of the system when used for training.",
"The use of a generative model allows us to produce poisoning points at scale, enabling poisoning attacks against learning algorithms where the number of training points is large or in situations where optimal attack strategies with bi-level optimization are intractable or difficult to compute, as it can be the case for deep networks.",
"Additionally, our proposed model also includes a mechanism to control the detectability of the generated poisoning points.",
"For this, the generator maximizes a convex combination of the losses for the discriminator and the classifier evaluated on the poisoning data points.",
"Our model allows to control the aggressiveness of the attack through a parameter that controls the weighted sum of the two losses.",
"This induces a trade-off between effectiveness and detectability of the attack.",
"In this way, pGAN can be applied for systematic testing of machine learning classifiers at different risk levels.",
"Our experimental evaluation in synthetic and real datasets shows that pGAN is capable of compromising different machine learning classifiers bypassing different defence mechanisms, including outlier detection Paudice et al. (2018a) , Sever Diakonikolas et al. (2019) , PCA-based defences Rubinstein et al. (2009) and label sanitization Paudice et al. (2018b) .",
"We analyse the trade-off between detectability and effectiveness of the attack: Too conservative strategies will have a reduced impact on the target classifier but, if the attack is too aggressive, most poisoning points can be detected as outliers.",
"The pGAN approach we introduce in this paper allows to naturally model attackers with different levels of aggressiveness and the effect of different detectability constraints on the robustness of the algorithms.",
"This allows to",
"a) study the characteristics of the attacks and identify regions of the data distributions where poisoning points are more influential, yet more difficult to detect,",
"b) systematically generate in an efficient and scalable way attacks that correspond to different types of threats and",
"c) study the effect of mitigation measures such as improving detectability.",
"In addition to studying the tradeoffs involved in the adversarial model, pGAN also allows to naturally study the tradeoffs between performance and robustness of the system as the fraction of poisoning points increases.",
"Our experimental evaluation shows that pGAN effectively bypasses different strategies to mitigate poisoning attacks, including outlier detection, label sanitization, PCA-based defences and Sever algorithm."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12244897335767746,
0.1904761791229248,
0.0952380895614624,
0.5,
0.14999999105930328,
0.19230768084526062,
0.3333333432674408,
0.09999999403953552,
0.10526315122842789,
0.038461532443761826,
0,
0.07843136787414551,
0.27272728085517883,
0.19354838132858276,
0.18867923319339752,
0.045454539358615875,
0.1818181723356247,
0.19607841968536377,
0.0833333283662796,
0.0714285671710968,
0.4000000059604645,
0.18867923319339752,
0,
0.0833333283662796,
0.13636362552642822,
0.052631575614213943,
0.039215680211782455,
0.30985915660858154,
0.24390242993831635,
0.09090908616781235,
0.1395348757505417,
0.1111111044883728,
0.23255813121795654,
0.0923076868057251,
0.09999999403953552,
0.31372547149658203,
0.0714285671710968,
0.1304347813129425,
0.0952380895614624,
0.0555555522441864,
0.11764705181121826,
0.12244897335767746
] | Bke-6pVKvB | true | [
"In this paper we propose a novel generative model to craft systematic poisoning attacks with detectability constraints against machine learning classifiers, including deep networks. "
] |
[
"The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning.",
"It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables.",
"It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from $k$ different processes, as samples from $k$ multivariate distributions.",
"In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model.",
"Given quantum access to a dataset of $n$ vectors of dimension $d$, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - as the dimension of the feature space, and the number of components in the mixture.",
"We generalize further the algorithm by fitting any mixture model of base distributions in the exponential family.",
"We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that on those cases we can give strong guarantees on the runtime.",
"Over the last few years, the effort to find real world applications of quantum computers has greatly intensified.",
"Along with chemistry, material sciences, finance, one of the fields where quantum computers are expected to be most beneficial is machine learning.",
"A number of different algorithms have been proposed for quantum machine learning (Biamonte et al., 2017; Wiebe et al., 2017; Kerenidis & Prakash, 2018; Harrow et al., 2009; Subaşı et al., 2019; Farhi & Neven, 2018) , both for the supervised and unsupervised setting, and despite the lack of large-scale quantum computers and quantum memory devises, some quantum algorithms have been demonstrated in proof-of-principle experiments (Li et al., 2015; Otterbach et al., 2017; Jiang et al., 2019) .",
"Here, we look at Expectation-Maximization (EM), a fundamental algorithm in unsupervised learning, that can be used to fit different mixture models and give maximum likelihood estimates with the so-called latent variable models.",
"Such generative models are one of the most promising approaches for unsupervised problems.",
"The goal of a generative model is to learn a probability distribution that is most likely to have generated the data collected in a training set V ∈ R n×d of n vectors of d features.",
"Fitting the model consists in learning the parameters of a probability distribution p in a certain parameterized family that best describes our vectors v i .",
"We will see that, thanks to this formulation, we can reduce a statistical problem into an optimization problem using maximum likelihood estimation (ML) estimation.",
"The likelihood is the function that we use to measure how good a model is for explaining a given dataset.",
"For a given machine learning model with parameters γ, the likelihood of our data set V is the probability that the data have been generated by the model with parameters γ, assuming each point is independent and identically distributed.",
"We think the likelihood as a function of γ, holding the dataset V fixed.",
"For p(v i |γ) the probability that a point v i comes from model γ, the likelihood is defined as L(γ; V ) := n i=1 p(v i |γ).",
"From this formula, we can see that in order to find the best parameters γ * of our model we need to solve an optimization problem.",
"For numerical and analytical reasons, instead of maximizing the likelihood L, it is common practice to find the best model by maximizing the log-likelihood function (γ; V ) = log L(γ; V ) = n i=1 log p(v i |γ).",
"In this context, we want to find the model that maximizes the log-likelihood: γ * M L := arg max γ n i=1 log p(v i |γ).",
"The procedure to calculate the log-likelihood depends on the specific model under consideration.",
"A possible solution would be to use a gradient based optimization algorithm on .",
"Unfortunately, due to the indented landscape of the function, gradient based techniques often do not perform well.",
"Therefore, it is common to solve the maximum likelihood estimation (or maximum a priori) problem using the Expectation-Maximization (EM) algorithm.",
"EM is an iterative algorithm which is guaranteed to converge to a (local) optimum of the likelihood.",
"This algorithm has a striking variety of applications, and has been successfully used for medical imaging (Balafar et al., 2010) , image restoration (Lagendijk et al., 1990) , problems in computational biology (Fan et al., 2010) , and so on.",
"EM has been proposed in different works by different authors, but has been formalized as we know it only in 1977 (Dempster et al., 1977) .",
"For more details, we refer to (Lindsay, 1995; Bilmes et al., 1998) .",
"In this work, we introduce Quantum Expectation-Maximization (QEM), a new algorithm for fitting mixture models.",
"We detail its usage in the context of Gaussian Mixture Models, and we extend the result to other distributions in the exponential family.",
"We also generalize the result by showing how to compute the MAP: the Maximum A Posteriori estimate of a mixture model.",
"MAP estimates can be seen as the Bayesian version of maximum likelihood estimation problems.",
"MAP estimates are often preferred over ML estimates, due to a reduced propensity to overfit.",
"Our main result can be stated as: Result (Quantum Expectation-Maximization).",
"(see Theorem 3.9)",
"For a data matrix V ∈ R n×d stored in an appropriate QRAM data structure and for parameters δ θ , δ µ > 0 , Quantum Expectation-Maximization (QEM) fits a Maximum Likelihood (or a Maximum A Posteriori) estimate of a Gaussian Mixture Model with k components, in running time per iteration which is dominated by:",
"where Σ is a covariance matrix of a Gaussian distribution, η is a parameter of the dataset related to the maximum norm of the vectors, δ θ , δ µ are error parameters in the QEM algorithm, µ(< √ d) is a factor appearing in quantum linear algebra and κ is the condition number of a matrix.",
"Here we only kept the term in the running time that dominates for the range of parameters of interest.",
"In Theorem 3.9 we explicate the running time of each step of the algorithm.",
"The QEM algorithm runs for a number of iterations until a stopping condition is met (defined by a parameter τ > 0) which implies a convergence to a (local) optimum.",
"Let's have a first high-level comparison of this result with the standard classical algorithms.",
"The runtime of a single iteration in the standard implementation of the EM algorithm is at least O(knd 2 ) (Pedregosa et al., 2011; Murphy, 2012) .",
"The advantage of the quantum algorithm is an exponential improvement with respect to the number of elements in the training set, albeit with a worsening on other parameters.",
"It is crucial to find datasets where such a quantum algorithm can offer a speedup.",
"For a reasonable range of parameters ( d = 40, k = 10, η = 10, δ = 0.5, κ(V ) = 25, κ(Σ) = 5, µ(Σ) = 4) which is motivated by some experimental evidence reported in Section 4, datasets where the number of samples in the order of O(10 12 ) might be processed faster on a quantum computer.",
"One should expect that some of the parameters of the quantum algorithm can be improved, especially the dependence on the condition numbers and the errors, which can make enlarge the type of datasets where QEM can offer an advantage.",
"Note that we expect the number of iterations of the quantum algorithm to be proportional to the number of iteration of the classical case.",
"This is to be expected since the convergence rate does not change, and it is corroborated by previous experimental evidence in a similar scenario: the number of iterations needed by q-means algorithm for convergence, is proportional to the number of iterations of the classical k-means algorithm (Kerenidis et al., 2018) .",
"Expectation-Maximization is widely used for fitting mixture models in machine learning (Murphy, 2012) .",
"Most mixture models use a base distribution in the exponential family: Poisson (Church & Gale, 1995 ), Binomial, Multinomial, log-normal (Dexter & Tanner, 1972 ), exponential (Ghitany et al., 1994 , Dirichlet multinomial (Yin & Wang, 2014) , and others.",
"EM is also used to fit mixtures of experts, mixtures of the student T distribution (which does not belong to the exponential family, and can be fitted with EM using (Liu & Rubin, 1995) ) and for factor analysis, probit regression, and learning Hidden Markov Models (Murphy, 2012)."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0.04651162400841713,
0.13333332538604736,
0.1111111044883728,
0.3333333432674408,
0.22857142984867096,
0.2222222238779068,
0.1666666567325592,
0.1463414579629898,
0.17142856121063232,
0.11999999731779099,
0.1875,
0.12244897335767746,
0.1463414579629898,
0,
0.1621621549129486,
0.08163265138864517,
0.1875,
0.04651162400841713,
0.1395348757505417,
0.07692307233810425,
0.045454539358615875,
0.19354838132858276,
0.1249999925494194,
0.11428570747375488,
0.10810810327529907,
0.1764705777168274,
0.19999998807907104,
0.09756097197532654,
0,
0.11764705181121826,
0.1538461446762085,
0.10526315122842789,
0.12121211737394333,
0,
0,
0,
0.08955223113298416,
0.20338982343673706,
0.2857142686843872,
0.1875,
0.17777776718139648,
0.12121211737394333,
0.22727271914482117,
0.3720930218696594,
0.12121211737394333,
0.17910447716712952,
0.20408162474632263,
0.2857142686843872,
0.20689654350280762,
0.1249999925494194,
0.07407406717538834,
0.09999999403953552
] | Hkgs3aNYDS | true | [
"It's the quantum algorithm for Expectation Maximization. It's fast: the runtime depends only polylogarithmically on the number of elements in the dataset. "
] |