{"abstract": "This paper presents a new deep learning architecture for Natural Language\nInference (NLI). Firstly, we introduce a new architecture where alignment pairs\nare compared, compressed and then propagated to upper layers for enhanced\nrepresentation learning. Secondly, we adopt factorization layers for efficient\nand expressive compression of alignment vectors into scalar features, which are\nthen used to augment the base word representations. The design of our approach\nis aimed to be conceptually simple, compact and yet powerful. We conduct\nexperiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving\ncompetitive performance on all. A lightweight parameterization of our model\nalso enjoys a $\\approx 3$ times reduction in parameter size compared to the\nexisting state-of-the-art models, e.g., ESIM and DIIN, while maintaining\ncompetitive performance. Additionally, visual analysis shows that our\npropagated features are highly interpretable.", "field": ["Sequence To Sequence Models"], "task": ["Natural Language Inference", "Representation Learning"], "method": ["ESIM", "Enhanced Sequential Inference Model"], "dataset": ["SciTail", "SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy", "Accuracy"], "title": "Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference"} {"abstract": "Deep Convolutional Neural Network (DCNN) and Transformer have achieved remarkable successes in image recognition. However, their performance in fine-grained image recognition is still difficult to meet the requirements of actual needs. This paper proposes a Sequence Random Network (SRN) to enhance the performance of DCNN. The output of DCNN is one-dimensional features. This one-dimensional feature abstractly represents image information, but it does not express well the detailed information of image. To address this issue, we use the proposed SRN which composed of BiLSTM and several Tanh-Dropout blocks (called BiLSTM-TDN), to further process DCNN one-dimensional features for highlighting the detail information of image. After the feature transform by BiLSTM-TDN, the recognition performance has been greatly improved. We conducted the experiments on six fine-grained image datasets. Except for FGVC-Aircraft, the accuracies of the proposed methods on the other datasets exceeded 99%. Experimental results show that BiLSTM-TDN is far superior to the existing state-of-the-art methods. 
In addition to DCNN, BiLSTM-TDN can also be extended to other models, such as Transformer.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Recurrent Neural Networks", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition", "Image Classification"], "method": ["Stable Rank Normalization", "Adam", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Scaled Dot-Product Attention", "Transformer", "Bidirectional LSTM", "Residual Connection", "Dense Connections", "SRN", "Layer Normalization", "Label Smoothing", "Sigmoid Activation", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "LSTM", "Dropout"], "dataset": ["Oxford-IIIT Pets", "CUB-200-2011", "Stanford Cars", "Flowers-102"], "metric": ["Accuracy"], "title": "Sequential Random Network for Fine-grained Image Classification"} {"abstract": "In this paper, we focus on the imbalance issue, which is rarely studied in aspect term extraction and aspect sentiment classification when regarding them as sequence labeling tasks. Besides, previous works usually ignore the interaction between aspect terms when labeling polarities. We propose a GRadient hArmonized and CascadEd labeling model (GRACE) to solve these problems. Specifically, a cascaded labeling module is developed to enhance the interchange between aspect terms and improve the attention of sentiment tokens when labeling sentiment polarities. The polarities sequence is designed to depend on the generated aspect terms labels. To alleviate the imbalance issue, we extend the gradient harmonized mechanism used in object detection to the aspect-based sentiment analysis by adjusting the weight of each label dynamically. The proposed GRACE adopts a post-pretraining BERT as its backbone. Experimental results demonstrate that the proposed model achieves consistency improvement on multiple benchmark datasets and generates state-of-the-art results.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect-Based Sentiment Analysis", "Object Detection", "Sentiment Analysis"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2014 Task 4 Subtask 1+2"], "metric": ["F1"], "title": "GRACE: Gradient Harmonized and Cascaded Labeling for Aspect-based Sentiment Analysis"} {"abstract": "The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. 
Experiments on two machine translation tasks show these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training data.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Constituency Parsing", "Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["CNN / Daily Mail", "GigaWord", "Penn Treebank", "IWSLT2015 English-German", "WMT2014 English-German", "WMT2014 English-French", "IWSLT2014 German-English"], "metric": ["ROUGE-1", "F1 score", "ROUGE-2", "BLEU score", "ROUGE-L"], "title": "Attention Is All You Need"} {"abstract": "Recently, Hyperbolic Spaces in the context of Non-Euclidean Deep Learning have gained popularity because of their ability to represent hierarchical data. We propose that it is possible to take advantage of the hierarchical characteristic present in the images by using hyperbolic neural networks in a GAN architecture. In this study, different configurations using fully connected hyperbolic layers in the GAN, CGAN, and WGAN are tested, in what we call the HGAN, HCGAN, and HWGAN, respectively. The results are measured using the Inception Score (IS) and the Fr\\'echet Inception Distance (FID) on the MNIST dataset. Depending on the configuration and space curvature, better results are achieved for each proposed hyperbolic versions than their euclidean counterpart.", "field": ["Convolutions", "Generative Adversarial Networks"], "task": ["Image Generation"], "method": ["WGAN", "Wasserstein GAN", "Convolution"], "dataset": ["MNIST"], "metric": ["FID"], "title": "Hyperbolic Generative Adversarial Network"} {"abstract": "We propose a convolutional neural network (CNN) architecture for facial\nexpression recognition. The proposed architecture is independent of any\nhand-crafted feature extraction and performs better than the earlier proposed\nconvolutional neural network based approaches. We visualize the automatically\nextracted features which have been learned by the network in order to provide a\nbetter understanding. The standard datasets, i.e. Extended Cohn-Kanade (CKP)\nand MMI Facial Expression Databse are used for the quantitative evaluation. On\nthe CKP set the current state of the art approach, using CNNs, achieves an\naccuracy of 99.2%. For the MMI dataset, currently the best accuracy for emotion\nrecognition is 93.33%. The proposed architecture achieves 99.6% for CKP and\n98.63% for MMI, therefore performing better than the state of the art using\nCNNs. 
Automatic facial expression recognition has a broad spectrum of\napplications such as human-computer interaction and safety systems. This is due\nto the fact that non-verbal cues are important forms of communication and play\na pivotal role in interpersonal communication. The performance of the proposed\narchitecture endorses the efficacy and reliable usage of the proposed work for\nreal world applications.", "field": ["Stochastic Optimization"], "task": ["Emotion Recognition", "Facial Expression Recognition"], "method": ["Adam"], "dataset": ["MMI"], "metric": ["Accuracy"], "title": "DeXpression: Deep Convolutional Neural Network for Expression Recognition"} {"abstract": "The field of object detection has made significant advances riding on the\nwave of region-based ConvNets, but their training procedure still includes many\nheuristics and hyperparameters that are costly to tune. We present a simple yet\nsurprisingly effective online hard example mining (OHEM) algorithm for training\nregion-based ConvNet detectors. Our motivation is the same as it has always\nbeen -- detection datasets contain an overwhelming number of easy examples and\na small number of hard examples. Automatic selection of these hard examples can\nmake training more effective and efficient. OHEM is a simple and intuitive\nalgorithm that eliminates several heuristics and hyperparameters in common use.\nBut more importantly, it yields consistent and significant boosts in detection\nperformance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness\nincreases as datasets become larger and more difficult, as demonstrated by the\nresults on the MS COCO dataset. Moreover, combined with complementary advances\nin the field, OHEM leads to state-of-the-art results of 78.9% and 76.3% mAP on\nPASCAL VOC 2007 and 2012 respectively.", "field": ["Prioritized Sampling"], "task": ["Object Detection"], "method": ["Online Hard Example Mining", "OHEM"], "dataset": ["Trillion Pairs Dataset", "PASCAL VOC 2007"], "metric": ["Accuracy", "MAP"], "title": "Training Region-based Object Detectors with Online Hard Example Mining"} {"abstract": "Keyword spotting (KWS) is a critical component for enabling speech based user\ninteractions on smart devices. It requires real-time response and high accuracy\nfor good user experience. Recently, neural networks have become an attractive\nchoice for KWS architecture because of their superior accuracy compared to\ntraditional speech processing algorithms. Due to its always-on nature, KWS\napplication has highly constrained power budget and typically runs on tiny\nmicrocontrollers with limited memory and compute capability. The design of\nneural network architecture for KWS must consider these constraints. In this\nwork, we perform neural network architecture evaluation and exploration for\nrunning KWS on resource-constrained microcontrollers. We train various neural\nnetwork architectures for keyword spotting published in literature to compare\ntheir accuracy and memory/compute requirements. We show that it is possible to\noptimize these neural network architectures to fit within the memory and\ncompute constraints of microcontrollers without sacrificing accuracy. We\nfurther explore the depthwise separable convolutional neural network (DS-CNN)\nand compare it against other neural network architectures. 
DS-CNN achieves an\naccuracy of 95.4%, which is ~10% higher than the DNN model with similar number\nof parameters.", "field": ["Convolutions", "Skip Connections", "Normalization"], "task": ["Keyword Spotting"], "method": ["Dilated Convolution", "Residual Connection", "Batch Normalization"], "dataset": ["Google Speech Commands"], "metric": ["Google Speech Commands V1 12"], "title": "Hello Edge: Keyword Spotting on Microcontrollers"} {"abstract": "Letting a deep network be aware of the quality of its own predictions is an\ninteresting yet important problem. In the task of instance segmentation, the\nconfidence of instance classification is used as mask quality score in most\ninstance segmentation frameworks. However, the mask quality, quantified as the\nIoU between the instance mask and its ground truth, is usually not well\ncorrelated with classification score. In this paper, we study this problem and\npropose Mask Scoring R-CNN which contains a network block to learn the quality\nof the predicted instance masks. The proposed network block takes the instance\nfeature and the corresponding predicted mask together to regress the mask IoU.\nThe mask scoring strategy calibrates the misalignment between mask quality and\nmask score, and improves instance segmentation performance by prioritizing more\naccurate mask predictions during COCO AP evaluation. By extensive evaluations\non the COCO dataset, Mask Scoring R-CNN brings consistent and noticeable gain\nwith different models, and outperforms the state-of-the-art Mask R-CNN. We hope\nour simple and effective approach will provide a new direction for improving\ninstance segmentation. The source code of our method is available at\n\\url{https://github.com/zjhuang22/maskscoring_rcnn}.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Feature Extractors", "RoI Feature Extractors", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Semantic Segmentation"], "method": ["Average Pooling", "1x1 Convolution", "RoIAlign", "Region Proposal Network", "ResNet", "Convolution", "ReLU", "Residual Connection", "FPN", "Deformable Convolution", "RPN", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Feature Pyramid Network", "Mask Scoring R-CNN", "Bottleneck Residual Block", "Mask R-CNN", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["mask AP"], "title": "Mask Scoring R-CNN"} {"abstract": "Convolutional neural networks (CNN) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the widths of ContextNet that achieves good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without external language model (LM), 1.9%/4.1% with LM and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. 
This compares to the previous best published system of 2.0%/4.6% with LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.", "field": ["Convolutions"], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": ["Convolution"], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context"} {"abstract": "Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. To facilitate this investigation, we compile a comprehensive biomedical NLP benchmark from publicly-available datasets. Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks, leading to new state-of-the-art results across the board. Further, in conducting a thorough evaluation of modeling choices, both for pretraining and task-specific fine-tuning, we discover that some common practices are unnecessary with BERT models, such as using complex tagging schemes in named entity recognition (NER). To help accelerate research in biomedical NLP, we have released our state-of-the-art pretrained and task-specific models for the community, and created a leaderboard featuring our BLURB benchmark (short for Biomedical Language Understanding & Reasoning Benchmark) at https://aka.ms/BLURB.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Document Classification", "Language Modelling", "Named Entity Recognition", "PICO", "Question Answering", "Relation Extraction", "Sentence Similarity"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["BC5CDR-disease", "HoC", "PubMedQA", "NCBI Disease", "ChemProt", "JNLPBA", "BC2GM", "GAD", "DDI", "BioASQ", "BC5CDR-chemical"], "metric": ["F1 entity level", "Accuracy", "Micro F1"], "title": "Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing"} {"abstract": "There has been a lot of recent interest in designing neural network models to\nestimate a distribution from a set of examples. We introduce a simple\nmodification for autoencoder neural networks that yields powerful generative\nmodels. Our method masks the autoencoder's parameters to respect autoregressive\nconstraints: each input is reconstructed only from previous inputs in a given\nordering. 
Constrained this way, the autoencoder outputs can be interpreted as a\nset of conditional probabilities, and their product, the full joint\nprobability. We can also train a single network that can decompose the joint\nprobability in multiple different orderings. Our simple framework can be\napplied to multiple architectures, including deep ones. Vectorized\nimplementations, such as on GPUs, are simple and fast. Experiments demonstrate\nthat this approach is competitive with state-of-the-art tractable distribution\nestimators. At test time, the method is significantly faster and scales better\nthan other autoregressive estimators.", "field": ["Generative Models"], "task": ["Density Estimation", "Image Generation"], "method": ["AutoEncoder"], "dataset": ["UCI GAS", "Binarized MNIST"], "metric": ["nats", "Log-likelihood"], "title": "MADE: Masked Autoencoder for Distribution Estimation"} {"abstract": "Semantic segmentation has made much progress with increasingly powerful\npixel-wise classifiers and incorporating structural priors via Conditional\nRandom Fields (CRF) or Generative Adversarial Networks (GAN). We propose a\nsimpler alternative that learns to verify the spatial structure of segmentation\nduring training only. Unlike existing approaches that enforce semantic labels\non individual pixels and match labels between neighbouring pixels, we propose\nthe concept of Adaptive Affinity Fields (AAF) to capture and match the semantic\nrelations between neighbouring pixels in the label space. We use adversarial\nlearning to select the optimal affinity field size for each semantic category.\nIt is formulated as a minimax problem, optimizing our segmentation neural\nnetwork in a best worst-case learning scenario. AAF is versatile for\nrepresenting structures as a collection of pixel-centric relations, easier to\ntrain than GAN and more efficient than CRF without run-time inference. Our\nextensive evaluations on PASCAL VOC 2012, Cityscapes, and GTA5 datasets\ndemonstrate its above-par segmentation performance and robust generalization\nacross domains.", "field": ["Structured Prediction", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Generative Adversarial Network", "Average Pooling", "Conditional Random Field", "GAN", "Batch Normalization", "CRF", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Convolution", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "Adaptive Affinity Fields for Semantic Segmentation"} {"abstract": "Recently, a semi-supervised learning method known as \"noisy student training\" has been shown to improve image classification performance of deep networks significantly. Noisy student training is an iterative self-training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance and augment the data generated in between self-training iterations. 
By doing so, we are able to obtain word error rates (WERs) 4.2%/8.6% on the clean/noisy LibriSpeech test sets by only using the clean 100h subset of LibriSpeech as the supervised set and the rest (860h) as the unlabeled set. Furthermore, we are able to achieve WERs 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab-60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We are thus able to improve upon the previous state-of-the-art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20%) and LibriSpeech (1.9%/4.1%).", "field": ["Image Data Augmentation", "Semi-Supervised Learning Methods", "Regularization"], "task": ["Image Classification", "Speech Recognition"], "method": ["Dropout", "Noisy Student", "Stochastic Depth", "RandAugment"], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Improved Noisy Student Training for Automatic Speech Recognition"} {"abstract": "Previous works have shown that convolutional neural networks can achieve good performance in image denoising tasks. However, limited by the local rigid convolutional operation, these methods lead to oversmoothing artifacts. A deeper network structure could alleviate these problems, but more computational overhead is needed. In this paper, we propose a novel spatial-adaptive denoising network (SADNet) for efficient single image blind noise removal. To adapt to changes in spatial textures and edges, we design a residual spatial-adaptive block. Deformable convolution is introduced to sample the spatially correlated features for weighting. An encoder-decoder structure with a context block is introduced to capture multiscale information. With noise removal from the coarse to fine, a high-quality noisefree image can be obtained. We apply our method to both synthetic and real noisy image datasets. The experimental results demonstrate that our method can surpass the state-of-the-art denoising methods both quantitatively and visually.", "field": ["Convolutions"], "task": ["Denoising", "Image Denoising"], "method": ["Convolution", "Deformable Convolution"], "dataset": ["SIDD", "DND"], "metric": ["SSIM (sRGB)", "PSNR (sRGB)"], "title": "Spatial-Adaptive Network for Single Image Denoising"} {"abstract": "In recent years, the performance of face verification systems has\nsignificantly improved using deep convolutional neural networks (DCNNs). A\ntypical pipeline for face verification includes training a deep network for\nsubject classification with softmax loss, using the penultimate layer output as\nthe feature descriptor, and generating a cosine similarity score given a pair\nof face images. The softmax loss function does not optimize the features to\nhave higher similarity score for positive pairs and lower similarity score for\nnegative pairs, which leads to a performance gap. In this paper, we add an\nL2-constraint to the feature descriptors which restricts them to lie on a\nhypersphere of a fixed radius. This module can be easily implemented using\nexisting deep learning frameworks. We show that integrating this simple step in\nthe training pipeline significantly boosts the performance of face\nverification. Specifically, we achieve state-of-the-art results on the\nchallenging IJB-A dataset, achieving True Accept Rate of 0.909 at False Accept\nRate 0.0001 on the face verification protocol. 
Additionally, we achieve\nstate-of-the-art performance on LFW dataset with an accuracy of 99.78%, and\ncompeting performance on YTF dataset with accuracy of 96.08%.", "field": ["Output Functions"], "task": ["Face Verification"], "method": ["Softmax"], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "L2-constrained Softmax Loss for Discriminative Face Verification"} {"abstract": "Atrial fibrillation is a cardiac arrhythmia that affects an estimated 33.5\nmillion people globally and is the potential cause of 1 in 3 strokes in people\nover the age of 60. Detection and diagnosis of atrial fibrillation (AFIB) is\ndone noninvasively in the clinical environment through the evaluation of\nelectrocardiograms (ECGs). Early research into automated methods for the\ndetection of AFIB in ECG signals focused on traditional bio-medical signal\nanalysis to extract important features for use in statistical classification\nmodels. Artificial intelligence models have more recently been used that employ\nconvolutional and/or recurrent network architectures. In this work, significant\ntime and frequency domain characteristics of the ECG signal are extracted by\napplying the short-time Fourier trans-form and then visually representing the\ninformation in a spectrogram. Two different classification approaches were\ninvestigated that utilized deep features in the spectrograms construct-ed from\nECG segments. The first approach used a pretrained DenseNet model to extract\nfeatures that were then classified using Support Vector Machines, and the\nsecond approach used the spectrograms as direct input into a convolutional\nnetwork. Both approaches were evaluated against the MIT-BIH AFIB dataset, where\nthe convolutional network approach achieved a classification accuracy of\n93.16%. While these results do not surpass established automated atrial\nfibrillation detection methods, they are promising and warrant further\ninvestigation given they did not require any noise prefiltering, hand-crafted\nfeatures, nor a reliance on beat detection.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Arrhythmia Detection", "Atrial Fibrillation Detection", "Electrocardiography (ECG)"], "method": ["Dense Block", "Average Pooling", "Softmax", "Concatenated Skip Connection", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Dropout", "DenseNet", "Kaiming Initialization", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["MIT-BIH AF"], "metric": ["Accuracy"], "title": "Atrial Fibrillation Detection Using Deep Features and Convolutional Networks"} {"abstract": "Recent advances in object detection are mainly driven by deep learning with\nlarge-scale detection benchmarks. However, the fully-annotated training set is\noften limited for a target detection task, which may deteriorate the\nperformance of deep detectors. To address this challenge, we propose a novel\nlow-shot transfer detector (LSTD) in this paper, where we leverage rich\nsource-domain knowledge to construct an effective target-domain detector with\nvery few training examples. The main contributions are described as follows.\nFirst, we design a flexible deep architecture of LSTD to alleviate transfer\ndifficulties in low-shot detection. 
This architecture can integrate the\nadvantages of both SSD and Faster RCNN in a unified deep framework. Second, we\nintroduce a novel regularized transfer learning framework for low-shot\ndetection, where the transfer knowledge (TK) and background depression (BD)\nregularizations are proposed to leverage object knowledge respectively from\nsource and target domains, in order to further enhance fine-tuning with a few\ntarget images. Finally, we examine our LSTD on a number of challenging low-shot\ndetection experiments, where LSTD outperforms other state-of-the-art\napproaches. The results demonstrate that LSTD is a preferable deep detector for\nlow-shot scenarios.", "field": ["Convolutions", "Object Detection Models", "Proposal Filtering"], "task": ["Object Detection", "Transfer Learning"], "method": ["1x1 Convolution", "Non Maximum Suppression", "SSD", "Convolution"], "dataset": ["MS-COCO (30-shot)", "MS-COCO (10-shot)"], "metric": ["AP"], "title": "LSTD: A Low-Shot Transfer Detector for Object Detection"} {"abstract": "In this paper we address the question of how to render sequence-level\nnetworks better at handling structured input. We propose a machine reading\nsimulator which processes text incrementally from left to right and performs\nshallow reasoning with memory and attention. The reader extends the Long\nShort-Term Memory architecture with a memory network in place of a single\nmemory cell. This enables adaptive memory usage during recurrence with neural\nattention, offering a way to weakly induce relations among tokens. The system\nis initially designed to process a single sequence but we also demonstrate how\nto integrate it with an encoder-decoder architecture. Experiments on language\nmodeling, sentiment analysis, and natural language inference show that our\nmodel matches or outperforms the state of the art.", "field": ["Working Memory Models"], "task": ["Language Modelling", "Natural Language Inference", "Reading Comprehension", "Sentiment Analysis"], "method": ["Memory Network"], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Long Short-Term Memory-Networks for Machine Reading"} {"abstract": "We present a transductive deep learning-based formulation for the sparse\nrepresentation-based classification (SRC) method. The proposed network consists\nof a convolutional autoencoder along with a fully-connected layer. The role of\nthe autoencoder network is to learn robust deep features for classification. On\nthe other hand, the fully-connected layer, which is placed in between the\nencoder and the decoder networks, is responsible for finding the sparse\nrepresentation. The estimated sparse codes are then used for classification.\nVarious experiments on three different datasets show that the proposed network\nleads to sparse representations that give better classification results than\nstate-of-the-art SRC methods. The source code is available at:\ngithub.com/mahdiabavisani/DSRC.", "field": ["Generative Models"], "task": ["Image Classification", "Semi-Supervised Image Classification", "Sparse Representation-based Classification"], "method": ["AutoEncoder"], "dataset": ["SVHN"], "metric": ["Accuracy"], "title": "Deep Sparse Representation-based Classification"} {"abstract": "In this paper, we propose a generative model, Temporal Generative Adversarial\nNets (TGAN), which can learn a semantic representation of unlabeled videos, and\nis capable of generating videos. 
Unlike existing Generative Adversarial Nets\n(GAN)-based methods that generate videos with a single generator consisting of\n3D deconvolutional layers, our model exploits two different types of\ngenerators: a temporal generator and an image generator. The temporal generator\ntakes a single latent variable as input and outputs a set of latent variables,\neach of which corresponds to an image frame in a video. The image generator\ntransforms a set of such latent variables into a video. To deal with\ninstability in training of GAN with such advanced networks, we adopt a recently\nproposed model, Wasserstein GAN, and propose a novel method to train it stably\nin an end-to-end manner. The experimental results demonstrate the effectiveness\nof our methods.", "field": ["Initialization", "Stochastic Optimization", "Adversarial Training", "Activation Functions", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Video Generation"], "method": ["Singular Value Clipping", "RMSProp", "Convolution", "Tanh Activation", "ReLU", "Linear Layer", "Leaky ReLU", "Kaiming Initialization", "Rectified Linear Units", "TGAN"], "dataset": ["UCF-101 16 frames, 64x64, Unconditional", "UCF-101 16 frames, Unconditional, Single GPU"], "metric": ["Inception Score"], "title": "Temporal Generative Adversarial Nets with Singular Value Clipping"} {"abstract": "Real-world growth processes, such as epidemic growth, are inherently noisy, uncertain and often involve multiple growth phases. The logistic-sigmoid function has been suggested and applied in the domain of modelling such growth processes. However, existing definitions are limiting, as they do not consider growth as restricted in two-dimension. Additionally, as the number of growth phases increase, the modelling and estimation of logistic parameters becomes more cumbersome, requiring more complex tools and analysis. To remedy this, we introduce the nlogistic-sigmoid function as a compact, unified modern definition of logistic growth for modelling such real-world growth phenomena. Also, we introduce two characteristic metrics of the logistic-sigmoid curve that can give more robust projections on the state of the growth process in each dimension. Specifically, we apply this function to modelling the daily World Health Organization published COVID-19 time-series data of infection and death cases of the world and countries of the world to date. Our results demonstrate statistically significant goodness of fit greater than or equal to 99% for affected countries of the world exhibiting patterns of either single or multiple stages of the ongoing COVID-19 outbreak, such as the USA. Consequently, this modern logistic definition and its metrics, as a machine learning tool, can help to provide clearer and more robust monitoring and quantification of the ongoing pandemic growth process.", "field": ["Activation Functions"], "task": ["COVID-19 Modelling", "Time Series"], "method": ["Sigmoid Activation", "nlsig", "nlogistic-sigmoid function"], "dataset": ["WHO-COVID19 Dataset"], "metric": ["KS-GoF"], "title": "From the logistic-sigmoid to nlogistic-sigmoid: modelling the COVID-19 pandemic growth"} {"abstract": "In this paper, we analyze the spatial information of deep features, and\npropose two complementary regressions for robust visual tracking. First, we\npropose a kernelized ridge regression model wherein the kernel value is defined\nas the weighted sum of similarity scores of all pairs of patches between two\nsamples. 
We show that this model can be formulated as a neural network and thus\ncan be efficiently solved. Second, we propose a fully convolutional neural\nnetwork with spatially regularized kernels, through which the filter kernel\ncorresponding to each output channel is forced to focus on a specific region of\nthe target. Distance transform pooling is further exploited to determine the\neffectiveness of each output channel of the convolution layer. The outputs from\nthe kernelized ridge regression model and the fully convolutional neural\nnetwork are combined to obtain the ultimate response. Experimental results on\ntwo benchmark datasets validate the effectiveness of the proposed method.", "field": ["Convolutions"], "task": ["Regression", "Visual Object Tracking", "Visual Tracking"], "method": ["Convolution"], "dataset": ["VOT2017/18"], "metric": ["Expected Average Overlap (EAO)"], "title": "Learning Spatial-Aware Regressions for Visual Tracking"} {"abstract": "Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. 
The source code and trained models are available on https://mmcheng.net/res2net/.", "field": ["Object Detection Models", "Image Data Augmentation", "Initialization", "Output Functions", "Convolutional Neural Networks", "Learning Rate Schedules", "Regularization", "Stochastic Optimization", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Region Proposal", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification", "Instance Segmentation", "Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": ["Weight Decay", "Average Pooling", "Faster R-CNN", "1x1 Convolution", "RoIAlign", "Region Proposal Network", "ResNet", "Random Horizontal Flip", "Convolution", "RoIPool", "ReLU", "Residual Connection", "RPN", "Res2Net Block", "Grouped Convolution", "Random Resized Crop", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "Res2Net", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Softmax", "Bottleneck Residual Block", "Mask R-CNN", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["DUT-OMRON", "ECSSD", "CIFAR-100", "PASCAL VOC 2012 val", "COCO minival", "PASCAL-S", "HKU-IS", "PASCAL VOC 2007", "ImageNet"], "metric": ["APM", "Top 1 Accuracy", "Percentage correct", "MAP", "mIoU", "box AP", "F-measure", "MAE", "AP75", "APS", "APL", "AP50", "Top 5 Accuracy", "mask AP"], "title": "Res2Net: A New Multi-scale Backbone Architecture"} {"abstract": "General-purpose pretrained sentence encoders such as BERT are not ideal for real-world conversational AI applications; they are computationally heavy, slow, and expensive to train. We propose ConveRT (Conversational Representations from Transformers), a pretraining framework for conversational tasks satisfying all the following requirements: it is effective, affordable, and quick to train. We pretrain using a retrieval-based response selection task, effectively leveraging quantization and subword-level parameterization in the dual encoder to build a lightweight memory- and energy-efficient model. We show that ConveRT achieves state-of-the-art performance across widely established response selection tasks. We also demonstrate that the use of extended dialog history as context yields further performance gains. Finally, we show that pretrained representations from the proposed encoder can be transferred to the intent classification task, yielding strong results across three diverse data sets. ConveRT trains substantially faster than standard sentence encoders or previous state-of-the-art dual encoders. 
With its reduced size and superior performance, we believe this model promises wider portability and scalability for Conversational AI applications.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Conversational Response Selection", "Intent Classification", "Quantization"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["DSTC7 Ubuntu", "PolyAI Reddit", "PolyAI AmazonQA"], "metric": ["1-of-100 Accuracy"], "title": "ConveRT: Efficient and Accurate Conversational Representations from Transformers"} {"abstract": "Pedestrian detection in a crowd is a very challenging issue. This paper\naddresses this problem by a novel Non-Maximum Suppression (NMS) algorithm to\nbetter refine the bounding boxes given by detectors. The contributions are\nthreefold: (1) we propose adaptive-NMS, which applies a dynamic suppression\nthreshold to an instance, according to the target density; (2) we design an\nefficient subnetwork to learn density scores, which can be conveniently\nembedded into both the single-stage and two-stage detectors; and (3) we achieve\nstate of the art results on the CityPersons and CrowdHuman benchmarks.", "field": ["Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Feature Extractors", "Learning Rate Schedules", "Activation Functions", "Output Functions", "RoI Feature Extractors", "Proposal Filtering", "Convolutions", "Feedforward Networks", "Pooling Operations", "Region Proposal", "Object Detection Models"], "task": ["Object Detection", "Pedestrian Detection"], "method": ["Weight Decay", "Faster R-CNN", "1x1 Convolution", "Region Proposal Network", "Adaptive NMS", "VGG", "RoIPool", "Convolution", "ReLU", "FPN", "Dense Connections", "RPN", "Step Decay", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Dropout", "Rectified Linear Units", "Max Pooling"], "dataset": ["CrowdHuman (full body)"], "metric": ["mMR", "AP"], "title": "Adaptive NMS: Refining Pedestrian Detection in a Crowd"} {"abstract": "Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images. In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection (GICD) method. We first abstract a consensus representation for the grouped images in the embedding space; then, by comparing the single image with consensus representation, we utilize the feedback gradient information to induce more attention to the discriminative co-salient features. In addition, due to the lack of Co-SOD training data, we design a jigsaw training strategy, with which Co-SOD networks can be trained on general saliency datasets without extra pixel-level annotations. To evaluate the performance of Co-SOD methods on discovering the co-salient object among multiple foregrounds, we construct a challenging CoCA dataset, where each image contains at least one extraneous foreground along with the co-salient object. Experiments demonstrate that our GICD achieves state-of-the-art performance. 
Our codes and dataset are available at https://mmcheng.net/gicd/.", "field": ["Self-Supervised Learning"], "task": ["Co-Salient Object Detection", "Saliency Detection"], "method": ["Jigsaw"], "dataset": ["CoSal2015", "CoCA"], "metric": ["max E-Measure", "S-Measure", "Average MAE", "Mean F-measure", "mean E-Measure", "max F-Measure"], "title": "Gradient-Induced Co-Saliency Detection"} {"abstract": "Person re-identification is a challenging task mainly due to factors such as\nbackground clutter, pose, illumination and camera point of view variations.\nThese elements hinder the process of extracting robust and discriminative\nrepresentations, hence preventing different identities from being successfully\ndistinguished. To improve the representation learning, usually, local features\nfrom human body parts are extracted. However, the common practice for such a\nprocess has been based on bounding box part detection. In this paper, we\npropose to adopt human semantic parsing which, due to its pixel-level accuracy\nand capability of modeling arbitrary contours, is naturally a better\nalternative. Our proposed SPReID integrates human semantic parsing in person\nre-identification and not only considerably outperforms its counter baseline,\nbut achieves state-of-the-art performance. We also show that by employing a\n\\textit{simple} yet effective training strategy, standard popular deep\nconvolutional architectures such as Inception-V3 and ResNet-152, with no\nmodification, while operating solely on full image, can dramatically outperform\ncurrent state-of-the-art. Our proposed methods improve state-of-the-art person\nre-identification on: Market-1501 by ~17% in mAP and ~6% in rank-1, CUHK03 by\n~4% in rank-1 and DukeMTMC-reID by ~24% in mAP and ~10% in rank-1.", "field": ["Output Functions", "Regularization", "Stochastic Optimization", "Convolutional Neural Networks", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Miscellaneous Components"], "task": ["Person Re-Identification", "Representation Learning", "Semantic Parsing"], "method": ["Inception-v3 Module", "Average Pooling", "RMSProp", "Softmax", "Auxiliary Classifier", "Convolution", "1x1 Convolution", "Inception-v3", "Label Smoothing", "Dropout", "Dense Connections", "Max Pooling"], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Human Semantic Parsing for Person Re-identification"} {"abstract": "The non-local module works as a particularly useful technique for semantic segmentation while criticized for its prohibitive computation and GPU memory occupation. In this paper, we present Asymmetric Non-local Neural Network to semantic segmentation, which has two prominent components: Asymmetric Pyramid Non-local Block (APNB) and Asymmetric Fusion Non-local Block (AFNB). APNB leverages a pyramid sampling module into the non-local block to largely reduce the computation and memory consumption without sacrificing the performance. AFNB is adapted from APNB to fuse the features of different levels under a sufficient consideration of long range dependencies and thus considerably improves the performance. Extensive experiments on semantic segmentation benchmarks demonstrate the effectiveness and efficiency of our work. In particular, we report the state-of-the-art performance of 81.3 mIoU on the Cityscapes test set. For a 256x128 input, APNB is around 6 times faster than a non-local block on GPU while 28 times smaller in GPU running memory occupation. 
Code is available at: https://github.com/MendelXu/ANN.git.", "field": ["Image Feature Extractors", "Skip Connections", "Image Model Blocks", "Convolutions"], "task": ["Semantic Segmentation"], "method": ["1x1 Convolution", "Non-Local Block", "Residual Connection", "Non-Local Operation"], "dataset": ["COCO-Stuff test", "PASCAL Context", "Cityscapes test", "ADE20K val"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Asymmetric Non-local Neural Networks for Semantic Segmentation"} {"abstract": "Keras-based implementation of WDSR, EDSR and SRGAN for single image super-resolution", "field": ["Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Miscellaneous Components", "Normalization", "Loss Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Generative Adversarial Networks", "Skip Connection Blocks"], "task": ["Image Super-Resolution", "Multi-Frame Super-Resolution", "Super-Resolution"], "method": ["PixelShuffle", "VGG", "Convolution", "PReLU", "ReLU", "Residual Connection", "Leaky ReLU", "Dense Connections", "Batch Normalization", "SRGAN Residual Block", "Parameterized ReLU", "SRGAN", "Sigmoid Activation", "Softmax", "VGG Loss", "Dropout", "Residual Block", "Rectified Linear Units", "Max Pooling"], "dataset": ["PROBA-V"], "metric": ["Normalized cPSNR"], "title": "Wide Activation for Efficient and Accurate Image Super-Resolution"} {"abstract": "Context is essential for semantic segmentation. Due to the diverse shapes of objects and their complex layout in various scene images, the spatial scales and shapes of contexts for different objects have very large variation. It is thus ineffective or inefficient to aggregate various context information from a predefined fixed region. In this work, we propose to generate a scale- and shape-variant semantic mask for each pixel to confine its contextual region. To this end, we first propose a novel paired convolution to infer the semantic correlation of the pair and based on that to generate a shape mask. Using the inferred spatial scope of the contextual region, we propose a shape-variant convolution, of which the receptive field is controlled by the shape mask that varies with the appearance of input. In this way, the proposed network aggregates the context information of a pixel from its semantic-correlated region instead of a predefined fixed region. Furthermore, this work also proposes a labeling denoising model to reduce wrong predictions caused by the noisy low-level features. 
Without bells and whistles, the proposed segmentation network achieves new state-of-the-arts consistently on the six public segmentation datasets.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Denoising", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO-Stuff test", "PASCAL Context", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Semantic Correlation Promoted Shape-Variant Context for Segmentation"} {"abstract": "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Common Sense Reasoning", "Language Modelling", "Lexical Simplification", "Linguistic Acceptability", "Natural Language Inference", "Question Answering", "Reading Comprehension", "Semantic Textual Similarity", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "RoBERTa", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "SQuAD2.0 dev", "ANLI test", "RACE", "SST-2 Binary classification", "RTE", "WNLI", "MRPC", "SQuAD2.0", "STS Benchmark", "QNLI", "CoLA", "SWAG", "Quora Question Pairs"], "metric": ["Pearson Correlation", "Accuracy (Middle)", "A1", "Test", "Matched", "A3", "F1", "A2", "Accuracy", "Mismatched", "EM", "Accuracy (High)"], "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach"} {"abstract": "Batch normalization (BN) has become a standard technique for training the modern deep networks. However, its effectiveness diminishes when the batch size becomes smaller, since the batch statistics estimation becomes inaccurate. That hinders batch normalization's usage for 1) training larger model which requires small batches constrained by memory consumption, 2) training on mobile or embedded devices of which the memory resource is limited. In this paper, we propose a simple but effective method, called extended batch normalization (EBN). 
For NCHW format feature maps, extended batch normalization computes the mean along the (N, H, W) dimensions, as the same as batch normalization, to maintain the advantage of batch normalization. To alleviate the problem caused by small batch size, extended batch normalization computes the standard deviation along the (N, C, H, W) dimensions, thus enlarges the number of samples from which the standard deviation is computed. We compare extended batch normalization with batch normalization and group normalization on the datasets of MNIST, CIFAR-10/100, STL-10, and ImageNet, respectively. The experiments show that extended batch normalization alleviates the problem of batch normalization with small batch size while achieving close performances to batch normalization with large batch size.", "field": ["Normalization"], "task": ["Image Classification"], "method": ["Group Normalization", "Batch Normalization"], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Extended Batch Normalization"} {"abstract": "Retinal vessel segmentation contributes significantly to the domain of retinal image analysis for the diagnosis of vision-threatening diseases. With existing techniques the generated segmentation result deteriorates when thresholded with higher confidence value. To alleviate from this, we propose RVGAN, a new multi-scale generative architecture for accurate retinal vessel segmentation. Our architecture uses two generators and two multi-scale autoencoder based discriminators, for better microvessel localization and segmentation. By combining reconstruction and weighted feature matching loss, our adversarial training scheme generates highly accurate pixel-wise segmentation of retinal vessels with threshold >= 0.5. The architecture achieves AUC of 0.9887, 0.9814, and 0.9887 on three publicly available datasets, namely DRIVE, CHASE-DB1, and STARE, respectively. Additionally, RV-GAN outperforms other architectures in two additional relevant metrics, Mean-IOU and SSIM.", "field": ["Generative Models"], "task": ["Retinal Vessel Segmentation", "SSIM"], "method": ["AutoEncoder"], "dataset": ["STARE", "CHASE_DB1", "DRIVE"], "metric": ["F1 score", "mIoU", "AUC", "Accuracy", "mIOU"], "title": "RV-GAN : Retinal Vessel Segmentation from Fundus Images using Multi-scale Generative Adversarial Networks"} {"abstract": "This paper describes our method for the task of Semantic Question Similarity in Arabic in the workshop on NLP Solutions for Under-Resourced Languages (NSURL). The aim is to build a model that is able to detect similar semantic questions in the Arabic language for the provided dataset. Different methods of determining questions similarity are explored in this work. The proposed models achieved high F1-scores, which range from (88% to 96%). 
Our official best result comes from an ensemble of pre-trained multilingual BERT models with different random seeds, achieving a 95.924% F1-score and ranking first among the nine participating teams.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Question Similarity"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Q2Q Arabic Benchmark"], "metric": ["F1 score"], "title": "The Inception Team at NSURL-2019 Task 8: Semantic Question Similarity in Arabic"} {"abstract": "Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and fast. Recently, two-stage NAS, e.g. BigNAS, decouples the model training and searching process and achieves good search efficiency. Two-stage NAS requires sampling from the search space during training, which directly impacts the accuracy of the final searched models. While uniform sampling has been widely used for simplicity, it is agnostic of the model performance Pareto front, which is the main focus of the search process, and thus misses opportunities to further improve the model accuracy. In this work, we propose AttentiveNAS that focuses on sampling the networks to improve the performance Pareto. We also propose algorithms to efficiently and effectively identify the networks on the Pareto during training. Without extra re-training or post-processing, we can simultaneously obtain a large number of networks across a wide range of FLOPs. Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS, Once-for-All networks and FBNetV3. We also achieve ImageNet accuracy of 80.1% with only 491 MFLOPs.", "field": ["Policy Gradient Methods", "Regularization", "Output Functions", "Recurrent Neural Networks", "Activation Functions", "Neural Architecture Search"], "task": ["Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Entropy Regularization", "Tanh Activation", "LSTM", "PPO", "Proximal Policy Optimization", "Neural Architecture Search", "Sigmoid Activation"], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "MACs", "Accuracy"], "title": "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling"} {"abstract": "We present a comprehensive study on a new task named camouflaged object detection (COD), which aims to identify objects that are \"seamlessly\" embedded in their surroundings. The high intrinsic similarities between the target object and the background make COD far more challenging than the traditional object detection task. To address this issue, we elaborately collect a novel dataset, called COD10K, which comprises 10,000 images covering camouflaged objects in various natural scenes, over 78 object categories. All the images are densely annotated with category, bounding-box, object-/instance-level, and matting-level labels.
This dataset could serve as a catalyst for progressing many vision tasks, e.g., localization, segmentation, and alpha-matting, etc. In addition, we develop a simple but effective framework for COD, termed Search Identification Network (SINet). Without any bells and whistles, SINet outperforms various state-of-the-art object detection baselines on all datasets tested, making it a robust, general framework that can help facilitate future research in COD. Finally, we conduct a large-scale COD study, evaluating 13 cutting-edge models, providing some interesting findings, and showing several potential applications. Our research offers the community an opportunity to explore more in this new field. The code will be available at https://github.com/DengPingFan/SINet/.\r", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Anomaly Detection", "Camouflaged Object Segmentation", "Object Detection"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CAMO", "COD"], "metric": ["MAE", "E-Measure", "S-Measure", "Weighted F-Measure"], "title": "Camouflaged Object Detection"} {"abstract": "Aspect-Based Sentiment Analysis (ABSA) deals with the extraction of sentiments and their targets. Collecting labeled data for this task in order to help neural networks generalize better can be laborious and time-consuming. As an alternative, similar data to the real-world examples can be produced artificially through an adversarial process which is carried out in the embedding space. Although these examples are not real sentences, they have been shown to act as a regularization method which can make neural networks more robust. In this work, we apply adversarial training, which was put forward by Goodfellow et al. (2014), to the post-trained BERT (BERT-PT) language model proposed by Xu et al. (2019) on the two major tasks of Aspect Extraction and Aspect Sentiment Classification in sentiment analysis. After improving the results of post-trained BERT by an ablation study, we propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in ABSA. The proposed model outperforms post-trained BERT in both tasks. 
To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect-Based Sentiment Analysis", "Aspect Extraction", "Language Modelling", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (F1)", "Laptop (Acc)", "Mean F1 (Laptop + Restaurant)", "Restaurant (Acc)", "Restaurant (F1)", "Mean Acc (Restaurant + Laptop)"], "title": "Adversarial Training for Aspect-Based Sentiment Analysis with BERT"} {"abstract": "Deep feedforward neural networks with piecewise linear activations are\ncurrently producing the state-of-the-art results in several public datasets.\nThe combination of deep learning models and piecewise linear activation\nfunctions allows for the estimation of exponentially complex functions with the\nuse of a large number of subnetworks specialized in the classification of\nsimilar input examples. During the training process, these subnetworks avoid\noverfitting with an implicit regularization scheme based on the fact that they\nmust share their parameters with other subnetworks. Using this framework, we\nhave made an empirical observation that can improve even more the performance\nof such models. We notice that these models assume a balanced initial\ndistribution of data points with respect to the domain of the piecewise linear\nactivation function. If that assumption is violated, then the piecewise linear\nactivation units can degenerate into purely linear activation units, which can\nresult in a significant reduction of their capacity to learn complex functions.\nFurthermore, as the number of model layers increases, this unbalanced initial\ndistribution makes the model ill-conditioned. Therefore, we propose the\nintroduction of batch normalisation units into deep feedforward neural networks\nwith piecewise linear activations, which drives a more balanced use of these\nactivation units, where each region of the activation function is trained with\na relatively large proportion of training samples. Also, this batch\nnormalisation promotes the pre-conditioning of very deep learning models. We\nshow that by introducing maxout and batch normalisation units to the network in\nnetwork model results in a model that produces classification results that are\nbetter than or comparable to the current state of the art in CIFAR-10,\nCIFAR-100, MNIST, and SVHN datasets.", "field": ["Activation Functions"], "task": ["Image Classification"], "method": ["Maxout"], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units"} {"abstract": "Neural networks have enabled state-of-the-art approaches to achieve incredible results on computer vision tasks such as object detection. 
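The BERT adversarial training abstract above generates adversarial examples in the embedding space, following the adversarial process of Goodfellow et al. (2014). The sketch below shows one generic way such embedding-space adversarial training is often implemented (a gradient-normalized perturbation plus a combined clean and adversarial loss); the model interface, the single epsilon, the global gradient norm, and the toy classifier are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

def adversarial_step(model, embeddings, labels, loss_fn, epsilon=1.0):
    """Hedged sketch of one embedding-space adversarial training step."""
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), labels)
    # Gradient of the loss w.r.t. the embeddings gives the attack direction.
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    # Normalised perturbation (per-example normalisation is also common).
    perturbation = epsilon * grad / (grad.norm() + 1e-12)
    adv_loss = loss_fn(model(embeddings + perturbation), labels)
    return clean_loss + adv_loss

# Toy usage: a linear classifier over flattened "embeddings" (batch, tokens, dim).
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16, 3))
embeddings = torch.randn(4, 8, 16)
labels = torch.randint(0, 3, (4,))
loss = adversarial_step(model, embeddings, labels, nn.CrossEntropyLoss())
loss.backward()
```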
However, such success greatly relies on costly computation resources, which hinders people with cheap devices from appreciating the advanced technology. In this paper, we propose Cross Stage Partial Network (CSPNet) to mitigate the problem that previous works require heavy inference computations from the network architecture perspective. We attribute the problem to the duplicate gradient information within network optimization. The proposed networks respect the variability of the gradients by integrating feature maps from the beginning and the end of a network stage, which, in our experiments, reduces computations by 20% with equivalent or even superior accuracy on the ImageNet dataset, and significantly outperforms state-of-the-art approaches in terms of AP50 on the MS COCO object detection dataset. The CSPNet is easy to implement and general enough to cope with architectures based on ResNet, ResNeXt, and DenseNet. Source code is at https://github.com/WongKinYiu/CrossStagePartialNetworks.", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Feature Pyramid Blocks", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Region Proposal", "Stochastic Optimization", "Feedforward Networks", "Skip Connection Blocks", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections", "Image Model Blocks"], "task": ["Image Classification", "Object Detection", "Real-Time Object Detection"], "method": ["Weight Decay", "Average Pooling", "Polynomial Rate Decay", "CSPPeleeNet", "Residual Block", "Tanh Activation", "Bottom-up Path Augmentation", "1x1 Convolution", "RoIAlign", "Softplus", "Exact Fusion Model", "PAFPN", "Region Proposal Network", "Two-Way Dense Layer", "ResNet", "PeleeNet", "Mish", "Convolution", "Adaptive Feature Pooling", "Maxout", "Residual Connection", "FPN", "ReLU", "DenseNet-Elastic", "Leaky ReLU", "Dense Connections", "Max Pooling", "RPN", "Dense Block", "CSPResNeXt", "Swish", "Grouped Convolution", "Spatial Attention Module", "Batch Normalization", "Elastic Dense Block", "CSPDenseNet", "CSPDenseNet-Elastic", "Residual Network", "Squeeze-and-Excitation Block", "Kaiming Initialization", "Sigmoid Activation", "Step Decay", "ResNeXt Block", "CSPDarknet53", "SGD with Momentum", "ResNeXt", "Softmax", "Feature Pyramid Network", "Concatenated Skip Connection", "Bottleneck Residual Block", "Dropout", "DenseNet", "Darknet-53", "Global Average Pooling", "Rectified Linear Units", "PANet", "CSPResNeXt Block"], "dataset": ["COCO", "ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "FPS", "MAP", "inference time (ms)", "Top 5 Accuracy"], "title": "CSPNet: A New Backbone that can Enhance Learning Capability of CNN"} {"abstract": "We present a novel single-shot text detector that directly outputs word-level\nbounding boxes in a natural image. We propose an attention mechanism which\nroughly identifies text regions via an automatically learned attentional map.\nThis substantially suppresses background interference in the convolutional\nfeatures, which is the key to producing accurate inference of words,\nparticularly at extremely small sizes. This results in a single model that\nessentially works in a coarse-to-fine manner. It departs from recent FCN- based\ntext detectors which cascade multiple FCN models to achieve an accurate\nprediction. 
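The CSPNet abstract above attributes its savings to integrating feature maps from the beginning and the end of a network stage. One common reading of that idea is sketched below: the input channels are split, only one part passes through the stage's blocks, and the two parts are concatenated and fused at the end. The block choice, the even split, and the 1x1 fusion layer are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CSPStage(nn.Module):
    """Hedged sketch of a cross-stage-partial style stage."""

    def __init__(self, channels, num_blocks=2):
        super().__init__()
        half = channels // 2  # assumes an even channel count
        self.blocks = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(half, half, 3, padding=1),
                          nn.BatchNorm2d(half), nn.ReLU())
            for _ in range(num_blocks)
        ])
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        part1, part2 = torch.chunk(x, 2, dim=1)   # split at the start of the stage
        part2 = self.blocks(part2)                # only one part goes through the blocks
        # Merge the untouched features with the processed ones at the end of the stage.
        return self.fuse(torch.cat([part1, part2], dim=1))

stage = CSPStage(64)
print(stage(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```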
Furthermore, we develop a hierarchical inception module which\nefficiently aggregates multi-scale inception features. This enhances local\ndetails, and also encodes strong context information, allowing the detector\nto work reliably on multi-scale and multi-orientation text with single-scale\nimages. Our text detector achieves an F-measure of 77% on the ICDAR 2015 benchmark,\nadvancing the state-of-the-art results in [18, 28]. Demo is available at:\nhttp://sstd.whuang.org/.", "field": ["Image Model Blocks", "Convolutions", "Pooling Operations", "Semantic Segmentation Models"], "task": ["Scene Text Detection"], "method": ["Inception Module", "Convolution", "1x1 Convolution", "Fully Convolutional Network", "Max Pooling", "FCN"], "dataset": ["ICDAR 2013", "ICDAR 2015", "COCO-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Single Shot Text Detector with Regional Attention"} {"abstract": "Siamese trackers turn tracking into similarity estimation between a template and the candidate regions in the frame. Mathematically, one of the key ingredients of success of the similarity function is translation equivariance. Non-translation-equivariant architectures induce a positional bias during training, so the location of the target will be hard to recover from the feature space. In real life scenarios, objects undergo various transformations other than translation, such as rotation or scaling. Unless the model has an internal mechanism to handle them, the similarity may degrade. In this paper, we focus on scaling and we aim to equip the Siamese network with additional built-in scale equivariance to capture the natural variations of the target a priori. We develop the theory for scale-equivariant Siamese trackers, and provide a simple recipe for how to make a wide range of existing trackers scale-equivariant. We present SE-SiamFC, a scale-equivariant variant of SiamFC built according to the recipe. We conduct experiments on OTB and VOT benchmarks and on the synthetically generated T-MNIST and S-MNIST datasets. We demonstrate that a built-in additional scale equivariance is useful for visual object tracking.", "field": ["Twin Networks"], "task": ["Object Tracking", "Visual Object Tracking", "Visual Tracking"], "method": ["Siamese Network"], "dataset": ["VOT2016", "OTB-2013", "OTB-2015", "VOT2017"], "metric": ["AUC", "Expected Average Overlap (EAO)"], "title": "Scale Equivariance Improves Siamese Tracking"} {"abstract": "3D skeleton-based action recognition and motion prediction are two essential problems of human activity understanding. In many previous works: 1) they studied two tasks separately, neglecting internal correlations; 2) they did not capture sufficient relations inside the body. To address these issues, we propose a symbiotic model to handle two tasks jointly; and we propose two scales of graphs to explicitly capture relations among body-joints and body-parts. Together, we propose symbiotic graph neural networks, which contain a backbone, an action-recognition head, and a motion-prediction head. Two heads are trained jointly and enhance each other. For the backbone, we propose multi-branch multi-scale graph convolution networks to extract spatial and temporal features. The multi-scale graph convolution networks are based on joint-scale and part-scale graphs. The joint-scale graphs contain actional graphs, capturing action-based relations, and structural graphs, capturing physical constraints.
The part-scale graphs integrate body-joints to form specific parts, representing high-level relations. Moreover, dual bone-based graphs and networks are proposed to learn complementary features. We conduct extensive experiments for skeleton-based action recognition and motion prediction with four datasets, NTU-RGB+D, Kinetics, Human3.6M, and CMU Mocap. Experiments show that our symbiotic graph neural networks achieve better performances on both tasks compared to the state-of-the-art methods.", "field": ["Convolutions"], "task": ["Action Recognition", "motion prediction", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Convolution"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Symbiotic Graph Neural Networks for 3D Skeleton-based Human Action Recognition and Motion Prediction"} {"abstract": "How do we determine whether two or more clothing items are compatible or\nvisually appealing? Part of the answer lies in understanding of visual\naesthetics, and is biased by personal preferences shaped by social attitudes,\ntime, and place. In this work we propose a method that predicts compatibility\nbetween two items based on their visual features, as well as their context. We\ndefine context as the products that are known to be compatible with each of\nthese item. Our model is in contrast to other metric learning approaches that\nrely on pairwise comparisons between item features alone. We address the\ncompatibility prediction problem using a graph neural network that learns to\ngenerate product embeddings conditioned on their context. We present results\nfor two prediction tasks (fill in the blank and outfit compatibility) tested on\ntwo fashion datasets Polyvore and Fashion-Gen, and on a subset of the Amazon\ndataset; we achieve state of the art results when using context information and\nshow how test performance improves as more context is used.", "field": ["Regularization", "Graph Models", "Stochastic Optimization"], "task": ["Metric Learning", "Recommendation Systems", "Slot Filling"], "method": ["Graph Convolutional Network", "Adam", "Dropout", "GCN"], "dataset": ["Polyvore"], "metric": ["FITB", "AUC"], "title": "Context-Aware Visual Compatibility Prediction"} {"abstract": "Grapheme-to-phoneme (G2P) conversion is an important task in automatic speech recognition and text-to-speech systems. Recently, G2P conversion is viewed as a sequence to sequence task and modeled by RNN or CNN based encoder-decoder framework. However, previous works do not consider the practical issues when deploying G2P model in the production system, such as how to leverage additional unlabeled data to boost the accuracy, as well as reduce model size for online deployment. In this work, we propose token-level ensemble distillation for G2P conversion, which can (1) boost the accuracy by distilling the knowledge from additional unlabeled data, and (2) reduce the model size but maintain the high accuracy, both of which are very practical and helpful in the online production system. We use token-level knowledge distillation, which results in better accuracy than the sequence-level counterpart. What is more, we adopt the Transformer instead of RNN or CNN based models to further boost the accuracy of G2P conversion. Experiments on the publicly available CMUDict dataset and an internal English dataset demonstrate the effectiveness of our proposed method. 
Particularly, our method achieves 19.88% WER on CMUDict dataset, outperforming the previous works by more than 4.22% WER, and setting the new state-of-the-art results.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Knowledge Distillation", "Speech Recognition", "Text-To-Speech Synthesis"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["CMUDict 0.7b"], "metric": ["Phoneme Error Rate", "Word Error Rate (WER)"], "title": "Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion"} {"abstract": "Recently, dense connections have attracted substantial attention in computer\nvision because they facilitate gradient flow and implicit deep supervision\nduring training. Particularly, DenseNet, which connects each layer to every\nother layer in a feed-forward fashion, has shown impressive performances in\nnatural image classification tasks. We propose HyperDenseNet, a 3D fully\nconvolutional neural network that extends the definition of dense connectivity\nto multi-modal segmentation problems. Each imaging modality has a path, and\ndense connections occur not only between the pairs of layers within the same\npath, but also between those across different paths. This contrasts with the\nexisting multi-modal CNN approaches, in which modeling several modalities\nrelies entirely on a single joint layer (or level of abstraction) for fusion,\ntypically either at the input or at the output of the network. Therefore, the\nproposed network has total freedom to learn more complex combinations between\nthe modalities, within and in-between all the levels of abstraction, which\nincreases significantly the learning representation. We report extensive\nevaluations over two different and highly competitive multi-modal brain tissue\nsegmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing\non 6-month infant data and the latter on adult images. HyperDenseNet yielded\nsignificant improvements over many state-of-the-art segmentation networks,\nranking at the top on both benchmarks. We further provide a comprehensive\nexperimental analysis of features re-use, which confirms the importance of\nhyper-dense connections in multi-modal representation learning. 
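The grapheme-to-phoneme abstract above relies on token-level knowledge distillation. Below is a minimal sketch of a token-level distillation loss, assuming per-token logits from a teacher and a student; the temperature, the reduction, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def token_level_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Hedged sketch: KL divergence between teacher and student per-token
    output distributions, for logits of shape (batch, sequence, vocabulary)."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a 2-sentence batch of 5 tokens over a 40-symbol phoneme vocabulary.
student = torch.randn(2, 5, 40, requires_grad=True)
teacher = torch.randn(2, 5, 40)
loss = token_level_distillation_loss(student, teacher)
loss.backward()
```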
Our code is\npublicly available at https://www.github.com/josedolz/HyperDenseNet.", "field": ["Semantic Segmentation Models", "Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Brain Segmentation", "Image Classification", "Medical Image Segmentation", "Multi-modal image segmentation", "Representation Learning", "Semantic Segmentation"], "method": ["Dense Block", "Average Pooling", "Softmax", "Concatenated Skip Connection", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "HyperDenseNet", "Dropout", "DenseNet", "Kaiming Initialization", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["iSEG 2017 Challenge"], "metric": ["Dice Score"], "title": "HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation"} {"abstract": "Despite the breakthroughs in accuracy and speed of single image\nsuper-resolution using faster and deeper convolutional neural networks, one\ncentral problem remains largely unsolved: how do we recover the finer texture\ndetails when we super-resolve at large upscaling factors? The behavior of\noptimization-based super-resolution methods is principally driven by the choice\nof the objective function. Recent work has largely focused on minimizing the\nmean squared reconstruction error. The resulting estimates have high peak\nsignal-to-noise ratios, but they are often lacking high-frequency details and\nare perceptually unsatisfying in the sense that they fail to match the fidelity\nexpected at the higher resolution. In this paper, we present SRGAN, a\ngenerative adversarial network (GAN) for image super-resolution (SR). To our\nknowledge, it is the first framework capable of inferring photo-realistic\nnatural images for 4x upscaling factors. To achieve this, we propose a\nperceptual loss function which consists of an adversarial loss and a content\nloss. The adversarial loss pushes our solution to the natural image manifold\nusing a discriminator network that is trained to differentiate between the\nsuper-resolved images and original photo-realistic images. In addition, we use\na content loss motivated by perceptual similarity instead of similarity in\npixel space. Our deep residual network is able to recover photo-realistic\ntextures from heavily downsampled images on public benchmarks. An extensive\nmean-opinion-score (MOS) test shows hugely significant gains in perceptual\nquality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of\nthe original high-resolution images than to those obtained with any\nstate-of-the-art method.", "field": ["Regularization", "Convolutional Neural Networks", "Output Functions", "Stochastic Optimization", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Skip Connections", "Generative Adversarial Networks", "Skip Connection Blocks"], "task": ["Image Super-Resolution", "Super-Resolution"], "method": ["VGG", "Softmax", "Adam", "SRGAN", "VGG Loss", "Batch Normalization", "Convolution", "PReLU", "SRGAN Residual Block", "Residual Connection", "Parameterized ReLU", "Dropout", "Leaky ReLU", "Dense Connections", "Sigmoid Activation"], "dataset": ["VggFace2 - 8x upscaling", "FFHQ 256 x 256 - 4x upscaling", "Set14 - 4x upscaling", "BSD100 - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling", "Set5 - 4x upscaling", "FFHQ 512 x 512 - 4x upscaling", "PIRM-test", "WebFace - 8x upscaling"], "metric": ["LLE", "PSNR", "FID", "FED", "MS-SSIM", "MOS", "LPIPS", "NIQE", "SSIM"], "title": "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"} {"abstract": "This paper addresses the problem of supervised video summarization by\nformulating it as a sequence-to-sequence learning problem, where the input is a\nsequence of original video frames, the output is a keyshot sequence. Our key\nidea is to learn a deep summarization network with attention mechanism to mimic\nthe way of selecting the keyshots of human. To this end, we propose a novel\nvideo summarization framework named Attentive encoder-decoder networks for\nVideo Summarization (AVS), in which the encoder uses a Bidirectional Long\nShort-Term Memory (BiLSTM) to encode the contextual information among the input\nvideo frames. As for the decoder, two attention-based LSTM networks are\nexplored by using additive and multiplicative objective functions,\nrespectively. Extensive experiments are conducted on three video summarization\nbenchmark datasets, i.e., SumMe, and TVSum. The results demonstrate the\nsuperiority of the proposed AVS-based approaches against the state-of-the-art\napproaches,with remarkable improvements from 0.8% to 3% on two\ndatasets,respectively..", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Supervised Video Summarization", "Video Summarization"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["TvSum", "SumMe"], "metric": ["F1-score (Canonical)", "F1-score (Augmented)"], "title": "Video Summarization with Attention-Based Encoder-Decoder Networks"} {"abstract": "State-of-the-art sequence labeling systems traditionally require large\namounts of task-specific knowledge in the form of hand-crafted features and\ndata pre-processing. In this paper, we introduce a novel neutral network\narchitecture that benefits from both word- and character-level representations\nautomatically, by using combination of bidirectional LSTM, CNN and CRF. Our\nsystem is truly end-to-end, requiring no feature engineering or data\npre-processing, thus making it applicable to a wide range of sequence labeling\ntasks. We evaluate our system on two data sets for two sequence labeling tasks\n--- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003\ncorpus for named entity recognition (NER). 
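The SRGAN abstract above defines a perceptual loss as the sum of a content loss measured in a feature space and an adversarial loss. Below is a hedged sketch of such a loss; the feature extractor, the MSE content term, the non-saturating adversarial term, and the weighting are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def perceptual_loss(sr, hr, fake_logits, feature_extractor, adv_weight=1e-3):
    """Hedged sketch: content loss in feature space plus an adversarial term."""
    # Content loss: distance between feature maps of the super-resolved and
    # high-resolution images under a fixed feature extractor.
    content = F.mse_loss(feature_extractor(sr), feature_extractor(hr))
    # Adversarial term: push discriminator outputs on SR images towards "real".
    adversarial = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    return content + adv_weight * adversarial

# Toy usage with a random convolutional stand-in for a pretrained feature network.
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
sr = torch.randn(2, 3, 32, 32, requires_grad=True)
hr = torch.randn(2, 3, 32, 32)
loss = perceptual_loss(sr, hr, fake_logits=torch.randn(2, 1), feature_extractor=features)
loss.backward()
```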
We obtain state-of-the-art\nperformance on both datasets --- 97.55\\% accuracy for POS tagging and\n91.21\\% F1 for NER.", "field": ["Recurrent Neural Networks", "Activation Functions", "Structured Prediction"], "task": ["Feature Engineering", "Named Entity Recognition", "Part-Of-Speech Tagging"], "method": ["Conditional Random Field", "Long Short-Term Memory", "CRF", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["CoNLL++", "CoNLL 2003 (English)", "Penn Treebank"], "metric": ["F1", "Accuracy"], "title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF"} {"abstract": "Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE's latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256$\\times$256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.", "field": ["Generative Models"], "task": ["Image Generation", "Out-of-Distribution Detection"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["CelebA-HQ 256x256", "Stacked MNIST", "CelebA-HQ 64x64", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models"} {"abstract": "Biometric systems based on Machine learning and Deep learning are being extensively used as authentication mechanisms in resource-constrained environments like smartphones and other small computing devices. These AI-powered facial recognition mechanisms have gained enormous popularity in recent years due to their transparent, contact-less and non-invasive nature. While they are effective to a large extent, there are ways to gain unauthorized access using photographs, masks, glasses, etc. In this paper, we propose an alternative authentication mechanism that uses both facial recognition and the unique movements of that particular face while uttering a password, that is, the temporal facial feature movements. The proposed model is not inhibited by language barriers because a user can set a password in any language. When evaluated on the standard MIRACL-VC1 dataset, the proposed model achieved an accuracy of 98.1%, underscoring its effectiveness as a robust system. The proposed method is also data-efficient since the model gave good results even when trained with only 10 positive video samples.
The competence of the training of the network is also demonstrated by benchmarking the proposed system against various compounded Facial recognition and Lip reading models.", "field": ["Recurrent Neural Networks", "Activation Functions", "Convolutions"], "task": ["Lip password classification", "Lip Reading"], "method": ["Long Short-Term Memory", "3D Convolution", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["MIRACL-VC1"], "metric": ["2-Class Accuracy"], "title": "AuthNet: A Deep Learning based Authentication Mechanism using Temporal Facial Feature Movements"} {"abstract": "Neural machine translation (NMT) models typically operate with a fixed\nvocabulary, but translation is an open-vocabulary problem. Previous work\naddresses the translation of out-of-vocabulary words by backing off to a\ndictionary. In this paper, we introduce a simpler and more effective approach,\nmaking the NMT model capable of open-vocabulary translation by encoding rare\nand unknown words as sequences of subword units. This is based on the intuition\nthat various word classes are translatable via smaller units than words, for\ninstance names (via character copying or transliteration), compounds (via\ncompositional translation), and cognates and loanwords (via phonological and\nmorphological transformations). We discuss the suitability of different word\nsegmentation techniques, including simple character n-gram models and a\nsegmentation based on the byte pair encoding compression algorithm, and\nempirically show that subword models improve over a back-off dictionary\nbaseline for the WMT 15 translation tasks English-German and English-Russian by\n1.1 and 1.3 BLEU, respectively.", "field": ["Subword Segmentation"], "task": ["Machine Translation", "Transliteration"], "method": ["BPE", "Byte Pair Encoding"], "dataset": ["WMT2015 English-Russian", "WMT2015 English-German"], "metric": ["BLEU score"], "title": "Neural Machine Translation of Rare Words with Subword Units"} {"abstract": "Transition-based parsers implemented with Pointer Networks have become the new state of the art in dependency parsing, excelling in producing labelled syntactic trees and outperforming graph-based models in this task. In order to further test the capabilities of these powerful neural networks on a harder NLP problem, we propose a transition system that, thanks to Pointer Networks, can straightforwardly produce labelled directed acyclic graphs and perform semantic dependency parsing. In addition, we enhance our approach with deep contextualized word embeddings extracted from BERT. 
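The subword-units abstract above segments rare words with the byte pair encoding compression algorithm. The sketch below shows the standard merge-learning loop on a toy word-frequency vocabulary; the end-of-word marker and the data format are assumptions for illustration.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Hedged sketch of BPE subword learning: repeatedly merge the most
    frequent adjacent symbol pair in a word-frequency vocabulary."""
    # Represent each word as a tuple of symbols, starting from characters.
    vocab = Counter(tuple(word) + ("</w>",) for word in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge to every word in the vocabulary.
        merged_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged_vocab[tuple(out)] += freq
        vocab = merged_vocab
    return merges

print(learn_bpe(["low", "lower", "lowest", "low"], num_merges=3))
```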
The resulting system not only outperforms all existing transition-based models, but also matches the best fully-supervised accuracy to date on the SemEval 2015 Task 18 datasets among previous state-of-the-art graph-based parsers.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Dependency Parsing", "Semantic Dependency Parsing", "Word Embeddings"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["PAS", "DM", "PSD"], "metric": ["Out-of-domain", "In-domain"], "title": "Transition-based Semantic Dependency Parsing with Pointer Networks"} {"abstract": "Traditional machine learning applications, such as optical character recognition, arose from the inability to explicitly program a computer to perform a routine task. In this context, learning algorithms usually derive a model exclusively from the evidence present in a massive dataset. Yet in some scientific disciplines, obtaining an abundance of data is an impractical luxury, however; there is an explicit model of the domain based upon previous scientific discoveries. Here we introduce a new approach to machine learning that is able to leverage prior scientific discoveries in order to improve generalizability over a scientific model. We show its efficacy in predicting the entire energy spectrum of a Hamiltonian on a superconducting quantum device, a key task in present quantum computer calibration. Our accuracy surpasses the current state-of-the-art by over $20\\%.$ Our approach thus demonstrates how artificial intelligence can be further enhanced by \"standing on the shoulders of giants.\"", "field": ["Generalized Additive Models"], "task": ["Few-Shot Learning", "Few-shot Regression", "Multi-target regression"], "method": ["Base Boosting"], "dataset": ["Google 5 qubit random Hamiltonian"], "metric": ["Average mean absolute error"], "title": "Boosting on the shoulders of giants in quantum device calibration"} {"abstract": "We present an exhaustive investigation of recent Deep Learning architectures,\nalgorithms, and strategies for the task of document image classification to\nfinally reduce the error by more than half. Existing approaches, such as the\nDeepDocClassifier, apply standard Convolutional Network architectures with\ntransfer learning from the object recognition domain. The contribution of the\npaper is threefold: First, it investigates recently introduced very deep neural\nnetwork architectures (GoogLeNet, VGG, ResNet) using transfer learning (from\nreal images). Second, it proposes transfer learning from a huge set of document\nimages, i.e. 400,000 documents. Third, it analyzes the impact of the amount of\ntraining data (document images) and other parameters to the classification\nabilities. We use two datasets, the Tobacco-3482 and the large-scale RVL-CDIP\ndataset. We achieve an accuracy of 91.13% for the Tobacco-3482 dataset while\nearlier approaches reach only 77.6%. Thus, a relative error reduction of more\nthan 60% is achieved. 
For the large dataset RVL-CDIP, an accuracy of 90.97% is\nachieved, corresponding to a relative error reduction of 11.5%.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Document Image Classification", "Image Classification", "Object Recognition", "Transfer Learning"], "method": ["ResNet", "Average Pooling", "VGG", "Softmax", "Batch Normalization", "Convolution", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Residual Connection", "Bottleneck Residual Block", "Dropout", "Residual Network", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Dense Connections", "Max Pooling"], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "Cutting the Error by Half: Investigation of Very Deep CNN and Advanced Training Strategies for Document Image Classification"} {"abstract": "Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies. At the heart of this is non-commutativity, in the sense that reordering the elements of a sequence can completely change its meaning. We use a classical mathematical object -- the tensor algebra -- to capture such dependencies. To address the innate computational complexity of high degree tensors, we use compositions of low-rank tensor projections. This yields modular and scalable building blocks for neural networks that give state-of-the-art performance on standard benchmarks such as multivariate time series classification and generative models for video.", "field": ["Semantic Segmentation Models", "Normalization", "Convolutions", "Pooling Operations", "Generative Models", "Non-Parametric Classification"], "task": ["Imputation", "Time Series", "Time Series Classification"], "method": ["Convolution", "Batch Normalization", "VAE", "Gaussian Process", "Fully Convolutional Network", "Variational Autoencoder", "Max Pooling", "FCN"], "dataset": ["ECG", "DigitShapes", "CharacterTrajectories", "Shapes", "UWave", "KickvsPunch", "AUSLAN", "PenDigits", "PhysioNet Challenge 2012", "JapaneseVowels", "Wafer", "NetFlow", "Sprites", "PEMS", "ArabicDigits", "HMNIST", "Libras", "CMUsubject16", "WalkvsRun"], "metric": ["AUROC", "NLL", "MSE", "Accuracy"], "title": "Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections"} {"abstract": "GANs have been shown to perform exceedingly well on tasks pertaining to image generation and style transfer. In the field of language modelling, word embeddings such as GLoVe and word2vec are state-of-the-art methods for applying neural network models on textual data. Attempts have been made to utilize GANs with word embeddings for text generation. This study presents an approach to text generation using Skip-Thought sentence embeddings with GANs based on gradient penalty functions and f-measures. The proposed architecture aims to reproduce writing style in the generated text by modelling the way of expression at a sentence level across all the works of an author. Extensive experiments were run in different embedding settings on a variety of tasks including conditional text generation and language generation. The model outperforms baseline text generation networks across several automated evaluation metrics like BLEU-n, METEOR and ROUGE. 
Further, wide applicability and effectiveness in real life tasks are demonstrated through human judgement scores.", "field": ["Word Embeddings"], "task": ["Conditional Text Generation", "Language Modelling", "Sentence Embeddings", "Style Transfer", "Text Generation", "Word Embeddings"], "method": ["GloVe Embeddings", "GloVe"], "dataset": ["CMU-SE"], "metric": ["BLEU-3"], "title": "Generating Text through Adversarial Training using Skip-Thought Vectors"} {"abstract": "In recent years, several deep learning models have been proposed for cover song identification and they have been designed to learn fixed-length feature vectors for music tracks. However, the aspect of temporal progression of music, which is important for measuring the melody similarity between two tracks, is not well represented by fixed-length vectors. In this paper, we propose a new Siamese network architecture for music melody similarity metric learning. The architecture consists of two parts. One part is a network for learn- ing the deep sequence representation of music tracks, and the other is a similarity estimation network which takes as input the cross- similarity matrices calculated from the deep sequences of a pair of tracks. The two networks are jointly trained and optimized to achieve high melody similarity prediction accuracy. Experiments conducted on several public datasets demonstrate the superiority of the proposed architecture.", "field": ["Twin Networks"], "task": ["Cover song identification", "Metric Learning"], "method": ["Siamese Network"], "dataset": ["Covers80", "YouTube350", "SHS100K-TEST"], "metric": ["mAP", "MAP"], "title": "SIMILARITY LEARNING FOR COVER SONG IDENTIFICATION USING CROSS-SIMILARITY MATRICES OF MULTI-LEVEL DEEP SEQUENCES"} {"abstract": "State-of-the-art visual perception models for a wide range of tasks rely on\nsupervised pretraining. ImageNet classification is the de facto pretraining\ntask for these models. Yet, ImageNet is now nearly ten years old and is by\nmodern standards \"small\". Even so, relatively little is known about the\nbehavior of pretraining with datasets that are multiple orders of magnitude\nlarger. The reasons are obvious: such datasets are difficult to collect and\nannotate. In this paper, we present a unique study of transfer learning with\nlarge convolutional networks trained to predict hashtags on billions of social\nmedia images. Our experiments demonstrate that training for large-scale hashtag\nprediction leads to excellent results. We show improvements on several image\nclassification and object detection tasks, and report the highest ImageNet-1k\nsingle-crop, top-1 accuracy to date: 85.4% (97.6% top-5). 
We also perform\nextensive experiments that provide novel empirical data on the relationship\nbetween large-scale pretraining and transfer learning performance.", "field": ["Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Object Detection", "Transfer Learning"], "method": ["ResNeXt Block", "SGD with Momentum", "Average Pooling", "Grouped Convolution", "Random Horizontal Flip", "ResNeXt", "Random Resized Crop", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Exploring the Limits of Weakly Supervised Pretraining"} {"abstract": "We propose a novel algorithm for visual question answering based on a\nrecurrent deep neural network, where every module in the network corresponds to\na complete answering unit with attention mechanism by itself. The network is\noptimized by minimizing loss aggregated from all the units, which share model\nparameters while receiving different information to compute attention\nprobability. For training, our model attends to a region within image feature\nmap, updates its memory based on the question and attended image feature, and\nanswers the question based on its memory state. This procedure is performed to\ncompute loss in each step. The motivation of this approach is our observation\nthat multi-step inferences are often required to answer questions while each\nproblem may have a unique desirable number of steps, which is difficult to\nidentify in practice. Hence, we always make the first unit in the network solve\nproblems, but allow it to learn the knowledge from the rest of units by\nbackpropagation unless it degrades the model. To implement this idea, we\nearly-stop training each unit as soon as it starts to overfit. Note that, since\nmore complex models tend to overfit on easier questions quickly, the last\nanswering unit in the unfolded recurrent neural network is typically killed\nfirst while the first one remains last. We make a single-step prediction for a\nnew question using the shared model. This strategy works better than the other\noptions within our framework since the selected model is trained effectively\nfrom all units without overfitting. 
The proposed algorithm outperforms other\nmulti-step attention based approaches using a single step prediction in VQA\ndataset.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Question Answering", "Visual Question Answering"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO Visual Question Answering (VQA) real images 1.0 multiple choice", "VQA v1 test-std", "VQA v1 test-dev"], "metric": ["Percentage correct", "Accuracy"], "title": "Training Recurrent Answering Units with Joint Loss Minimization for VQA"} {"abstract": "3D face shape is more expressive and viewpoint-consistent than its 2D\ncounterpart. However, 3D facial landmark localization in a single image is\nchallenging due to the ambiguous nature of landmarks under 3D perspective.\nExisting approaches typically adopt a suboptimal two-step strategy, performing\n2D landmark localization followed by depth estimation. In this paper, we\npropose the Joint Voxel and Coordinate Regression (JVCR) method for 3D facial\nlandmark localization, addressing it more effectively in an end-to-end fashion.\nFirst, a compact volumetric representation is proposed to encode the per-voxel\nlikelihood of positions being the 3D landmarks. The dimensionality of such a\nrepresentation is fixed regardless of the number of target landmarks, so that\nthe curse of dimensionality could be avoided. Then, a stacked hourglass network\nis adopted to estimate the volumetric representation from coarse to fine,\nfollowed by a 3D convolution network that takes the estimated volume as input\nand regresses 3D coordinates of the face shape. In this way, the 3D structural\nconstraints between landmarks could be learned by the neural network in a more\nefficient manner. Moreover, the proposed pipeline enables end-to-end training\nand improves the robustness and accuracy of 3D facial landmark localization.\nThe effectiveness of our approach is validated on the 3DFAW and AFLW2000-3D\ndatasets. Experimental results show that the proposed method achieves\nstate-of-the-art performance in comparison with existing methods.", "field": ["Convolutions"], "task": ["3D Facial Landmark Localization", "Depth Estimation", "Face Alignment", "Facial Landmark Detection", "Regression"], "method": ["3D Convolution", "Convolution"], "dataset": ["3DFAW", "AFLW2000-3D"], "metric": ["CVGTCE", "GTE"], "title": "Joint Voxel and Coordinate Regression for Accurate 3D Facial Landmark Localization"} {"abstract": "Summarization of speech is a difficult problem due to the spontaneity of the flow, disfluencies, and other issues that are not usually encountered in written texts. Our work presents the first application of the BERTSum model to conversational language. We generate abstractive summaries of narrated instructional videos across a wide variety of topics, from gardening and cooking to software configuration and sports. In order to enrich the vocabulary, we use transfer learning and pretrain the model on a few large cross-domain datasets in both written and spoken English. 
We also do preprocessing of transcripts to restore sentence segmentation and punctuation in the output of an ASR system. The results are evaluated with ROUGE and Content-F1 scoring for the How2 and WikiHow datasets. We engage human judges to score a set of summaries randomly selected from a dataset curated from HowTo100M and YouTube. Based on blind evaluation, we achieve a level of textual fluency and utility close to that of summaries written by human content creators. The model beats current SOTA when applied to WikiHow articles that vary widely in style and topic, while showing no performance regression on the canonical CNN/DailyMail dataset. Due to the high generalizability of the model across different styles and domains, it has great potential to improve accessibility and discoverability of internet content. We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Abstractive Text Summarization", "Regression", "Sentence segmentation", "Text Summarization", "Transfer Learning"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["WikiHow", "How2"], "metric": ["ROUGE-L", "ROUGE-1", "Content F1", "ROUGE-2"], "title": "Abstractive Summarization of Spoken andWritten Instructions with BERT"} {"abstract": "Virtual adversarial training (VAT) is a powerful technique to improve model robustness in both supervised and semi-supervised settings. It is effective and can be easily adopted on lots of image classification and text classification tasks. However, its benefits to sequence labeling tasks such as named entity recognition (NER) have not been shown as significant, mostly, because the previous approach can not combine VAT with the conditional random field (CRF). CRF can significantly boost accuracy for sequence models by putting constraints on label transitions, which makes it an essential component in most state-of-the-art sequence labeling model architectures. In this paper, we propose SeqVAT, a method which naturally applies VAT to sequence labeling models with CRF. Empirical studies show that SeqVAT not only significantly improves the sequence labeling performance over baselines under supervised settings, but also outperforms state-of-the-art approaches under semi-supervised settings.", "field": ["Structured Prediction"], "task": ["Chunking", "Image Classification", "Named Entity Recognition", "Text Classification"], "method": ["Conditional Random Field", "CRF"], "dataset": ["CoNLL 2000"], "metric": ["Exact Span F1"], "title": "SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling"} {"abstract": "In this paper, we propose a very deep fully convolutional encoding-decoding\nframework for image restoration such as denoising and super-resolution. The\nnetwork is composed of multiple layers of convolution and de-convolution\noperators, learning end-to-end mappings from corrupted images to the original\nones. 
The convolutional layers act as the feature extractor, which capture the\nabstraction of image contents while eliminating noises/corruptions.\nDe-convolutional layers are then used to recover the image details. We propose\nto symmetrically link convolutional and de-convolutional layers with skip-layer\nconnections, with which the training converges much faster and attains a\nhigher-quality local optimum. First, The skip connections allow the signal to\nbe back-propagated to bottom layers directly, and thus tackles the problem of\ngradient vanishing, making training deep networks easier and achieving\nrestoration performance gains consequently. Second, these skip connections pass\nimage details from convolutional layers to de-convolutional layers, which is\nbeneficial in recovering the original image. Significantly, with the large\ncapacity, we can handle different levels of noises using a single model.\nExperimental results show that our network achieves better performance than all\npreviously reported state-of-the-art methods.", "field": ["Convolutions"], "task": ["Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": ["Convolution"], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections"} {"abstract": "In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.", "field": ["Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Learning Rate Schedules", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks"], "task": ["Image Classification"], "method": ["Weight Decay", "Squeeze-and-Excitation Block", "SGD with Momentum", "Cosine Annealing", "Swish", "Grouped Convolution", "RegNetX", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "RegNetY", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy"], "title": "Designing Network Design Spaces"} {"abstract": "Neural machine translation (NMT) aims at solving machine translation (MT)\nproblems using neural networks and has exhibited promising results in recent\nyears. 
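The encoder-decoder restoration abstract above links each convolutional layer to its mirrored de-convolutional layer with a skip connection. Below is a hedged PyTorch sketch of that wiring; the depth, channel widths, and the use of element-wise addition for the skips are illustrative assumptions rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class SymmetricSkipEncoderDecoder(nn.Module):
    """Hedged sketch of a conv/deconv network with symmetric skip connections."""

    def __init__(self, channels=3, width=32, depth=3):
        super().__init__()
        self.encoder = nn.ModuleList()
        self.decoder = nn.ModuleList()
        in_ch = channels
        for _ in range(depth):
            self.encoder.append(nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()))
            in_ch = width
        for i in range(depth):
            out_ch = channels if i == depth - 1 else width
            layers = [nn.ConvTranspose2d(width, out_ch, 3, padding=1)]
            if i < depth - 1:
                layers.append(nn.ReLU())
            self.decoder.append(nn.Sequential(*layers))

    def forward(self, x):
        skips = []
        for layer in self.encoder:
            x = layer(x)
            skips.append(x)
        for i, layer in enumerate(self.decoder):
            x = layer(x)
            # Symmetric link: each deconvolutional output (except the final image)
            # is added to the output of its mirrored convolutional layer.
            mirror = len(self.encoder) - 2 - i
            if mirror >= 0:
                x = x + skips[mirror]
        return x

model = SymmetricSkipEncoderDecoder()
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```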
However, most of the existing NMT models are shallow and there is still\na performance gap between a single NMT model and the best conventional MT\nsystem. In this work, we introduce a new type of linear connections, named\nfast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,\nand an interleaved bi-directional architecture for stacking the LSTM layers.\nFast-forward connections play an essential role in propagating the gradients\nand building a deep topology of depth 16. On the WMT'14 English-to-French task,\nwe achieve BLEU=37.7 with a single attention model, which outperforms the\ncorresponding single shallow model by 6.2 BLEU points. This is the first time\nthat a single NMT model achieves state-of-the-art performance and outperforms\nthe best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3\neven without using an attention mechanism. After special handling of unknown\nwords and model ensembling, we obtain the best score reported to date on this\ntask with BLEU=40.4. Our models are also validated on the more difficult WMT'14\nEnglish-to-German task.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Machine Translation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score"], "title": "Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation"} {"abstract": "This paper studies the problem of embedding very large information networks\ninto low-dimensional vector spaces, which is useful in many tasks such as\nvisualization, node classification, and link prediction. Most existing graph\nembedding methods do not scale for real world information networks which\nusually contain millions of nodes. In this paper, we propose a novel network\nembedding method called the \"LINE,\" which is suitable for arbitrary types of\ninformation networks: undirected, directed, and/or weighted. The method\noptimizes a carefully designed objective function that preserves both the local\nand global network structures. An edge-sampling algorithm is proposed that\naddresses the limitation of the classical stochastic gradient descent and\nimproves both the effectiveness and the efficiency of the inference. Empirical\nexperiments prove the effectiveness of the LINE on a variety of real-world\ninformation networks, including language networks, social networks, and\ncitation networks. The algorithm is very efficient, which is able to learn the\nembedding of a network with millions of vertices and billions of edges in a few\nhours on a typical single machine. The source code of the LINE is available\nonline.", "field": ["Graph Embeddings"], "task": ["Graph Embedding", "Link Prediction", "Network Embedding", "Node Classification"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["BlogCatalog", "Wikipedia"], "metric": ["Macro-F1", "Accuracy"], "title": "LINE: Large-scale Information Network Embedding"} {"abstract": "Video classification researches that have recently attracted attention are the fields of temporal modeling and 3D efficient architecture. However, the temporal modeling methods are not efficient or the 3D efficient architecture is less interested in temporal modeling. 
To bridge the gap between them, we propose an efficient temporal modeling 3D architecture, called VoV3D, that consists of a temporal one-shot aggregation (T-OSA) module and a depthwise factorized component, D(2+1)D. The T-OSA is devised to build a feature hierarchy by aggregating temporal features with different temporal receptive fields. Stacking this T-OSA enables the network itself to model short-range as well as long-range temporal relationships across frames without any external modules. Inspired by kernel factorization and channel factorization, we also design a depthwise spatiotemporal factorization module, named D(2+1)D, that decomposes a 3D depthwise convolution into two depthwise convolutions, one spatial and one temporal, to make our network more lightweight and efficient. By using the proposed temporal modeling method (T-OSA) and the efficient factorized component (D(2+1)D), we construct two types of VoV3D networks, VoV3D-M and VoV3D-L. Thanks to its efficient and effective temporal modeling, VoV3D-L has 6x fewer model parameters and requires 16x less computation while surpassing a state-of-the-art temporal modeling method on both Something-Something and Kinetics-400. Furthermore, VoV3D shows better temporal modeling ability than a state-of-the-art efficient 3D architecture, X3D, while having comparable model capacity. We hope that VoV3D can serve as a baseline for efficient video classification.", "field": ["Convolutions", "Skip Connections", "Normalization", "Skip Connection Blocks"], "task": ["Action Recognition", "Video Classification"], "method": ["Depthwise Convolution", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "1x1 Convolution", "One-Shot Aggregation"], "dataset": ["Something-Something V2", "Something-Something V1"], "metric": ["Top 1 Accuracy", "Top-5 Accuracy", "GFLOPs", "Top-1 Accuracy", "Parameters", "Param.", "Top 5 Accuracy"], "title": "Diverse Temporal Aggregation and Depthwise Spatiotemporal Factorization for Efficient Video Classification"} {"abstract": "Learning long term dependencies in recurrent networks is difficult due to\nvanishing and exploding gradients. To overcome this difficulty, researchers\nhave developed sophisticated optimization techniques and network architectures.\nIn this paper, we propose a simpler solution that uses recurrent neural networks\ncomposed of rectified linear units. Key to our solution is the use of the\nidentity matrix or its scaled version to initialize the recurrent weight\nmatrix. We find that our solution is comparable to LSTM on our four benchmarks:\ntwo toy problems involving long-range temporal structures, a large language\nmodeling problem and a benchmark speech recognition problem.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Language Modelling", "Sequential Image Classification", "Speech Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy"], "title": "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units"} {"abstract": "We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant\nof deep neural networks for irregular structured and geometric input, e.g.,\ngraphs or meshes. Our main contribution is a novel convolution operator based\non B-splines that makes the computation time independent of the kernel size\ndue to the local support property of the B-spline basis functions.
As a result,\nwe obtain a generalization of the traditional CNN convolution operator by using\ncontinuous kernel functions parametrized by a fixed number of trainable\nweights. In contrast to related approaches that filter in the spectral domain,\nthe proposed method aggregates features purely in the spatial domain. In\naddition, SplineCNN allows entire end-to-end training of deep architectures,\nusing only the geometric structure as input, instead of handcrafted feature\ndescriptors. For validation, we apply our method on tasks from the fields of\nimage graph classification, shape correspondence and graph node classification,\nand show that it matches or outperforms state-of-the-art approaches while being\nsignificantly faster and having favorable properties like domain independence.", "field": ["Convolutions"], "task": ["Graph Classification", "Node Classification", "Superpixel Image Classification"], "method": ["Convolution"], "dataset": ["Cora", "Pubmed", "Citeseer", "75 Superpixel MNIST"], "metric": ["Classification Error", "Accuracy"], "title": "SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels"} {"abstract": "Over the past few years, Convolutional Neural Networks (CNNs) have shown\npromise on facial expression recognition. However, the performance degrades\ndramatically under real-world settings due to variations introduced by subtle\nfacial appearance changes, head pose variations, illumination changes, and\nocclusions.\n In this paper, a novel island loss is proposed to enhance the discriminative\npower of the deeply learned features. Specifically, the island loss (IL) is designed to\nreduce the intra-class variations while enlarging the inter-class differences\nsimultaneously. Experimental results on four benchmark expression databases\nhave demonstrated that the CNN with the proposed island loss (IL-CNN)\noutperforms the baseline CNN models with either the traditional softmax loss or the\ncenter loss, and achieves performance comparable to or better than\nstate-of-the-art methods for facial expression recognition.", "field": ["Output Functions"], "task": ["Facial Expression Recognition"], "method": ["Softmax"], "dataset": ["SFEW"], "metric": ["Accuracy"], "title": "Island Loss for Learning Discriminative Features in Facial Expression Recognition"} {"abstract": "We propose a novel end-to-end semi-supervised adversarial framework to\ngenerate photorealistic face images of new identities with wide ranges of\nexpressions, poses, and illuminations conditioned by a 3D morphable model.\nPrevious adversarial style-transfer methods either supervise their networks\nwith a large volume of paired data or use unpaired data with a highly\nunder-constrained two-way generative framework in an unsupervised fashion. We\nintroduce pairwise adversarial supervision to constrain two-way domain\nadaptation by a small number of paired real and synthetic images for training\nalong with the large volume of unpaired data. Extensive qualitative and\nquantitative experiments are performed to validate our idea. Generated face\nimages of new identities contain pose, lighting and expression diversity, and\nqualitative results show that they are highly constrained by the synthetic input\nimage while adding photorealism and retaining identity information. We combine\nface images generated by the proposed method with the real data set to train\nface recognition algorithms. We evaluate the model on two challenging data\nsets: LFW and IJB-A.
We observe that the generated images from our framework\nconsistently improve the performance of a deep face recognition network\ntrained on the Oxford VGG Face dataset and achieve results comparable to the\nstate-of-the-art.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Domain Adaptation", "Face Generation", "Face Recognition", "Face Verification", "Style Transfer"], "method": ["VGG", "Softmax", "Convolution", "ReLU", "Dropout", "Dense Connections", "Rectified Linear Units", "Max Pooling"], "dataset": ["IJB-A", "Labeled Faces in the Wild"], "metric": ["TAR @ FAR=0.01", "Accuracy", "TAR @ FAR=0.001"], "title": "Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model"} {"abstract": "In this work we aim to solve a large collection of tasks using a single\nreinforcement learning agent with a single set of parameters. A key challenge\nis to handle the increased amount of data and extended training time. We have\ndeveloped a new distributed agent IMPALA (Importance Weighted Actor-Learner\nArchitecture) that not only uses resources more efficiently in single-machine\ntraining but also scales to thousands of machines without sacrificing data\nefficiency or resource utilisation. We achieve stable learning at high\nthroughput by combining decoupled acting and learning with a novel off-policy\ncorrection method called V-trace. We demonstrate the effectiveness of IMPALA\nfor multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the\nDeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available\nAtari games in the Arcade Learning Environment (Bellemare et al., 2013a)). Our\nresults show that IMPALA is able to achieve better performance than previous\nagents with less data, and crucially exhibits positive transfer between tasks\nas a result of its multi-task approach.", "field": ["Policy Gradient Methods", "Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Optimization", "Convolutions", "Pooling Operations", "Replay Memory", "Skip Connections", "Value Function Estimation"], "task": ["Atari Games"], "method": ["V-trace", "RMSProp", "Long Short-Term Memory", "Entropy Regularization", "Max Pooling", "Convolution", "Tanh Activation", "ReLU", "Residual Connection", "Experience Replay", "LSTM", "Gradient Clipping", "IMPALA", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms.
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures"} {"abstract": "The success of a text simplification system heavily depends on the quality and quantity of complex-simple sentence pairs in the training corpus, which are extracted by aligning sentences between parallel articles. To evaluate and improve sentence alignment quality, we create two manually annotated sentence-aligned datasets from two commonly used text simplification corpora, Newsela and Wikipedia. We propose a novel neural CRF alignment model which not only leverages the sequential nature of sentences in parallel documents but also utilizes a neural sentence pair model to capture semantic similarity. Experiments demonstrate that our proposed approach outperforms all the previous work on monolingual sentence alignment task by more than 5 points in F1. We apply our CRF aligner to construct two new text simplification datasets, Newsela-Auto and Wiki-Auto, which are much larger and of better quality compared to the existing datasets. A Transformer-based seq2seq model trained on our datasets establishes a new state-of-the-art for text simplification in both automatic and human evaluation.", "field": ["Recurrent Neural Networks", "Activation Functions", "Structured Prediction", "Sequence To Sequence Models"], "task": ["Semantic Similarity", "Semantic Textual Similarity", "Text Simplification"], "method": ["Conditional Random Field", "Long Short-Term Memory", "CRF", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["Newsela"], "metric": ["SARI"], "title": "Neural CRF Model for Sentence Alignment in Text Simplification"} {"abstract": "Learning to dehaze single hazy images, especially using a small training dataset is quite challenging. We propose a novel generative adversarial network architecture for this problem, namely back projected pyramid network (BPPNet), that gives good performance for a variety of challenging haze conditions, including dense haze and inhomogeneous haze. Our architecture incorporates learning of multiple levels of complexities while retaining spatial context through iterative blocks of UNets and structural information of multiple scales through a novel pyramidal convolution block. These blocks together for the generator and are amenable to learning through back projection. We have shown that our network can be trained without over-fitting using as few as 20 image pairs of hazy and non-hazy images. 
We report state-of-the-art performance on the NTIRE 2018 homogeneous haze datasets for indoor and outdoor images, the NTIRE 2019 Dense-Haze dataset, and the NTIRE 2020 non-homogeneous haze dataset.", "field": ["Convolutions"], "task": ["Image Dehazing", "Nonhomogeneous Image Dehazing", "Single Image Dehazing"], "method": ["Convolution"], "dataset": ["Dense-Haze", "O-Haze", "I-Haze"], "metric": ["SSIM", "PSNR"], "title": "Single image dehazing for a variety of haze scenarios using back projected pyramid network"} {"abstract": "We describe an end-to-end method for recovering 3D human body mesh from single images and monocular videos. Unlike existing methods, which try to obtain all the complex 3D pose, shape, and camera parameters from one coupling feature, we propose a skeleton-disentangling based framework that divides this task into multiple levels of spatial and temporal granularity in a decoupling manner. In the spatial domain, we propose an effective and pluggable "disentangling the skeleton from the details" (DSD) module. It reduces the complexity and decouples the skeleton, which lays a good foundation for temporal modeling. In the temporal domain, a self-attention based temporal convolution network is proposed to efficiently exploit short- and long-term temporal cues. Furthermore, an unsupervised adversarial training strategy, temporal shuffles and order recovery, is designed to promote the learning of motion dynamics. The proposed method outperforms the state-of-the-art 3D human mesh recovery methods by 15.4% MPJPE and 23.8% PA-MPJPE on Human3.6M. State-of-the-art results are also achieved on the 3D pose in the wild (3DPW) dataset without any fine-tuning. In particular, ablation studies demonstrate that the skeleton-disentangled representation is crucial for better temporal modeling and generalization.", "field": ["Convolutions"], "task": ["3D Human Pose Estimation", "3D Pose Estimation"], "method": ["Convolution"], "dataset": ["3DPW"], "metric": ["PA-MPJPE"], "title": "Human Mesh Recovery from Monocular Images via a Skeleton-disentangled Representation"} {"abstract": "In this paper, we propose Emo2Vec which encodes emotional semantics into\nvectors. We train Emo2Vec by multi-task learning on six different emotion-related\ntasks, including emotion/sentiment analysis, sarcasm classification, stress\ndetection, abusive language classification, insult detection, and personality\nrecognition. Our evaluation of Emo2Vec shows that it outperforms existing\naffect-related representations, such as Sentiment-Specific Word Embedding and\nDeepMoji embeddings, with much smaller training corpora. When concatenated with\nGloVe, Emo2Vec achieves performance competitive with state-of-the-art results on\nseveral tasks using a simple logistic regression classifier.", "field": ["Generalized Linear Models"], "task": ["Abusive Language", "Multi-Task Learning", "Regression", "Sentiment Analysis"], "method": ["Logistic Regression"], "dataset": ["SST-2 Binary classification", "SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "Emo2Vec: Learning Generalized Emotion Representation by Multi-task Training"} {"abstract": "Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system. The two tasks are closely tied and the slots often highly depend on the intent. In this paper, we propose a novel framework for SLU to better incorporate the intent information, which further guides the slot filling.
In our framework, we adopt a joint model with Stack-Propagation, which can directly use the intent information as input for slot filling and thus capture intent semantic knowledge. In addition, to further alleviate error propagation, we perform token-level intent detection in the Stack-Propagation framework. Experiments on two public datasets show that our model achieves state-of-the-art performance and outperforms previous methods by a large margin. Finally, we use the Bidirectional Encoder Representations from Transformers (BERT) model in our framework, which further boosts our performance on the SLU task.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Intent Detection", "Slot Filling", "Spoken Language Understanding"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["ATIS", "SNIPS"], "metric": ["Slot F1 Score", "Intent Accuracy", "F1", "Accuracy"], "title": "A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding"} {"abstract": "Semantic concept hierarchy is still under-explored for semantic segmentation\ndue to the inefficiency and complicated optimization of incorporating\nstructural inference into dense prediction. This lack of modeling semantic\ncorrelations also forces prior works to tune highly-specified models for each\ntask due to the label discrepancy across datasets. It severely limits the\ngeneralization capability of segmentation models for open set concept\nvocabulary and annotation utilization. In this paper, we propose a\nDynamic-Structured Semantic Propagation Network (DSSPN) that builds a semantic\nneuron graph by explicitly incorporating the semantic concept hierarchy into\nnetwork construction. Each neuron represents the instantiated module for\nrecognizing a specific type of entity such as a super-class (e.g. food) or a\nspecific concept (e.g. pizza). During training, DSSPN performs the\ndynamic-structured neuron computation graph by only activating a sub-graph of\nneurons for each image in a principled way. A dense semantic-enhanced neural\nblock is proposed to propagate the learned knowledge of all ancestor neurons\ninto each fine-grained child neuron for feature evolving. Another merit of such a\nsemantically explainable structure is the ability to learn a unified model\nconcurrently on diverse datasets by selectively activating different neuron\nsub-graphs for each annotation at each step. Extensive experiments on four\npublic semantic segmentation datasets (i.e. ADE20K, COCO-Stuff, Cityscapes and\nMapillary) demonstrate the superiority of our DSSPN over state-of-the-art\nsegmentation models.
Moreover, we demonstrate that a universal segmentation model\njointly trained on diverse datasets can surpass the performance of the\ncommon fine-tuning scheme for exploiting knowledge from multiple domains.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes test", "ADE20K val"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Dynamic-structured Semantic Propagation Network"} {"abstract": "Although Generative Adversarial Networks (GANs) have shown remarkable success\nin various tasks, they still face challenges in generating high-quality images.\nIn this paper, we propose Stacked Generative Adversarial Networks (StackGAN)\naiming at generating high-resolution photo-realistic images. First, we propose\na two-stage generative adversarial network architecture, StackGAN-v1, for\ntext-to-image synthesis. The Stage-I GAN sketches the primitive shape and\ncolors of the object based on the given text description, yielding low-resolution\nimages. The Stage-II GAN takes Stage-I results and text descriptions as inputs,\nand generates high-resolution images with photo-realistic details. Second, an\nadvanced multi-stage generative adversarial network architecture, StackGAN-v2,\nis proposed for both conditional and unconditional generative tasks. Our\nStackGAN-v2 consists of multiple generators and discriminators in a tree-like\nstructure; images at multiple scales corresponding to the same scene are\ngenerated from different branches of the tree. StackGAN-v2 shows more stable\ntraining behavior than StackGAN-v1 by jointly approximating multiple\ndistributions. Extensive experiments demonstrate that the proposed stacked\ngenerative adversarial networks significantly outperform other state-of-the-art\nmethods in generating photo-realistic images.", "field": ["Generative Models", "Convolutions"], "task": ["Image Generation", "Text-to-Image Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["COCO", "Oxford 102 Flowers", "LSUN Bedroom 256 x 256", "CUB"], "metric": ["Inception score", "FID"], "title": "StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks"} {"abstract": "Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
The code and the pretrained models are available at https://github.com/google-research/ALBERT.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Large Batch Optimization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Linguistic Acceptability", "Natural Language Inference", "Question Answering", "Self-Supervised Learning", "Semantic Textual Similarity"], "method": ["Weight Decay", "ALBERT", "WordPiece", "GELU", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "LAMB", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "SQuAD2.0 dev", "SST-2 Binary classification", "RTE", "WNLI", "MRPC", "SQuAD2.0", "STS Benchmark", "QNLI", "CoLA", "Quora Question Pairs"], "metric": ["Pearson Correlation", "Matched", "F1", "Accuracy", "EM"], "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations"} {"abstract": "We present and release a new tool for music source separation with pre-trained models called Spleeter.Spleeter was designed with ease of use, separation performance and speed in mind. Spleeter is based onTensorflow [1] and makes it possible to:\u2022separate audio files into2,4or5stems with a single command line using pre-trained models.\u2022train source separation models or fine-tune pre-trained ones with Tensorflow (provided you have a dataset of isolated sources).The performance of the pre-trained models are very close to the published state of the art and is, to the authors knowledge, the best performing4stems separation model on the common musdb18 benchmark [6]to be publicly released. Spleeter is also very fast as it can separate a mix audio file into4stems100timesfaster than real-time1on a single Graphics Processing Unit (GPU) using the pre-trained4-stems model. Spleeter is packaged within Docker which makes it usable as is on various platforms.", "field": ["Semantic Segmentation Models", "Activation Functions", "Normalization", "Convolutions", "Skip Connections"], "task": ["Music Source Separation", "Speech Enhancement"], "method": ["U-Net", "Exponential Linear Unit", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "ELU", "ReLU", "Leaky ReLU", "Rectified Linear Units"], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "Spleeter: A Fast And State-of-the Art Music Source Separation Tool With Pre-trained Models"} {"abstract": "Word sense induction (WSI) is the task of unsupervised clustering of word usages within a sentence to distinguish senses. Recent work obtain strong results by clustering lexical substitutes derived from pre-trained RNN language models (ELMo). Adapting the method to BERT improves the scores even further. We extend the previous method to support a dynamic rather than a fixed number of clusters as supported by other prominent methods, and propose a method for interpreting the resulting clusters by associating them with their most informative substitutes. We then perform extensive error analysis revealing the remaining sources of errors in the WSI task. 
Our code is available at https://github.com/asafamr/bertwsi.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Word Sense Induction"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2013", "SemEval 2010 WSI"], "metric": ["F_NMI", "F-BC", "V-Measure", "AVG", "F-Score"], "title": "Towards better substitution-based word sense induction"} {"abstract": "Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. 
By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing existing state-of-the-art detectors such as EfficientDet and YOLOv4. Source code is available at https://github.com/PaddlePaddle/PaddleDetection.", "field": ["Object Detection Models", "Image Data Augmentation", "Output Functions", "Convolutional Neural Networks", "Generalized Linear Models", "Feature Extractors", "Regularization", "Activation Functions", "Learning Rate Schedules", "Normalization", "Convolutions", "Clustering", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Object Detection"], "method": ["Depthwise Convolution", "Cosine Annealing", "Average Pooling", "Tanh Activation", "Bottom-up Path Augmentation", "1x1 Convolution", "EfficientDet", "Softplus", "PAFPN", "BiFPN", "Mish", "Convolution", "CutMix", "ReLU", "Residual Connection", "FPN", "YOLOv3", "Spatial Attention Module", "Batch Normalization", "Label Smoothing", "Pointwise Convolution", "Sigmoid Activation", "k-Means Clustering", "Logistic Regression", "DropBlock", "CSPDarknet53", "Softmax", "Feature Pyramid Network", "YOLOv4", "Depthwise Separable Convolution", "Darknet-53", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "PP-YOLO: An Effective and Efficient Implementation of Object Detector"} {"abstract": "Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (and yet executed as an RNN during inference), thus overcoming a well-known limitation of training RNNs on GPUs. We show that this parallelization-friendly reformulation, which can be applied generally to any deep network whose recurrent components are linear, makes training up to 200 times faster. Second, to validate its utility, we compare its performance against the original LMU and a variety of published LSTM and transformer networks on seven benchmarks, ranging from psMNIST to sentiment analysis to machine translation. We demonstrate that our models exhibit superior performance on all datasets, often using fewer parameters.
For instance, our LMU sets a new state-of-the-art result on psMNIST, and uses half the parameters while outperforming DistilBERT and LSTM models on IMDB sentiment analysis.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Learning Rate Schedules", "Regularization", "Activation Functions", "Recurrent Neural Networks", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Sentiment Analysis", "Sequential Image Classification"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "DistilBERT", "Legendre Memory Unit", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "Sigmoid Activation", "LMU", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["IMDb", "Sequential MNIST"], "metric": ["Permuted Accuracy", "Accuracy"], "title": "Parallelizing Legendre Memory Unit Training"} {"abstract": "This paper proposes a computationally efficient approach to detecting objects\nnatively in 3D point clouds using convolutional neural networks (CNNs). In\nparticular, this is achieved by leveraging a feature-centric voting scheme to\nimplement novel convolutional layers which explicitly exploit the sparsity\nencountered in the input. To this end, we examine the trade-off between\naccuracy and speed for different architectures and additionally propose to use\nan L1 penalty on the filter activations to further encourage sparsity in the\nintermediate representations. To the best of our knowledge, this is the first\nwork to propose sparse convolutional layers and L1 regularisation for efficient\nlarge-scale processing of 3D data. We demonstrate the efficacy of our approach\non the KITTI object detection benchmark and show that Vote3Deep models with as\nfew as three layers outperform the previous state of the art in both laser and\nlaser-vision based approaches by margins of up to 40% while remaining highly\ncompetitive in terms of processing time.", "field": ["Convolutions", "Activation Functions", "Regularization", "Proposal Filtering"], "task": ["3D Object Detection", "Object Detection", "Real-Time Object Detection"], "method": ["Feature-Centric Voting", "Non Maximum Suppression", "Sparse Convolutions", "ReLU", "L1 Regularization", "Rectified Linear Units"], "dataset": ["KITTI Cyclists Hard", "KITTI Pedestrians Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Pedestrians Moderate", "KITTI Cars Hard", "KITTI Pedestrians Easy", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient Convolutional Neural Networks"} {"abstract": "Scene text detection, an essential step of scene text recognition system, is\nto locate text instances in natural scene images automatically. Some recent\nattempts benefiting from Mask R-CNN formulate scene text detection task as an\ninstance segmentation problem and achieve remarkable performance. In this\npaper, we present a new Mask R-CNN based framework named Pyramid Mask Text\nDetector (PMTD) to handle the scene text detection. 
Instead of binary text mask\ngenerated by the existing Mask R-CNN based methods, our PMTD performs\npixel-level regression under the guidance of location-aware supervision,\nyielding a more informative soft text mask for each text instance. As for the\ngeneration of text boxes, PMTD reinterprets the obtained 2D soft mask into 3D\nspace and introduces a novel plane clustering algorithm to derive the optimal\ntext box on the basis of 3D shape. Experiments on standard datasets demonstrate\nthat the proposed PMTD brings consistent and noticeable gain and clearly\noutperforms state-of-the-art methods. Specifically, it achieves an F-measure of\n80.13% on ICDAR 2017 MLT dataset.", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions", "Instance Segmentation Models"], "task": ["Instance Segmentation", "Regression", "Scene Text", "Scene Text Detection", "Scene Text Recognition", "Semantic Segmentation"], "method": ["Mask R-CNN", "Softmax", "RoIAlign", "Convolution"], "dataset": ["ICDAR 2017 MLT", "ICDAR 2015"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Pyramid Mask Text Detector"} {"abstract": "We present a simple, yet effective and flexible method for action recognition supporting multiple sensor modalities. Multivariate signal sequences are encoded in an image and are then classified using a recently proposed EfficientNet CNN architecture. Our focus was to find an approach that generalizes well across different sensor modalities without specific adaptions while still achieving good results. We apply our method to 4 action recognition datasets containing skeleton sequences, inertial and motion capturing measurements as well as \\wifi fingerprints that range up to 120 action classes. Our method defines the current best CNN-based approach on the NTU RGB+D 120 dataset, lifts the state of the art on the ARIL Wi-Fi dataset by +6.78%, improves the UTD-MHAD inertial baseline by +14.4%, the UTD-MHAD skeleton baseline by 1.13% and achieves 96.11% on the Simitate motion capturing data (80/20 split). We further demonstrate experiments on both, modality fusion on a signal level and signal reduction to prevent the representation from overloading.", "field": ["Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Action Recognition", "Activity Recognition", "Multimodal Activity Recognition", "Skeleton Based Action Recognition"], "method": ["Depthwise Convolution", "Squeeze-and-Excitation Block", "Average Pooling", "Inverted Residual Block", "RMSProp", "Swish", "EfficientNet", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Dropout", "Depthwise Separable Convolution", "Pointwise Convolution", "Dense Connections", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["UTD-MHAD", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CS)"], "title": "Gimme Signals: Discriminative signal encoding for multimodal activity recognition"} {"abstract": "In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. 
Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.", "field": ["Activation Functions"], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": ["ReLU", "Rectified Linear Units"], "dataset": ["LibriSpeech test-other", "WSJ eval92", "LibriSpeech test-clean", "Hub5'00 SwitchBoard"], "metric": ["CallHome", "SwitchBoard", "Word Error Rate (WER)"], "title": "Jasper: An End-to-End Convolutional Neural Acoustic Model"} {"abstract": "Few-shot learning is challenging for learning algorithms that learn each task\nin isolation and from scratch. In contrast, meta-learning learns from many\nrelated tasks a meta-learner that can learn a new task more accurately and\nfaster with fewer examples, where the choice of meta-learners is crucial. In\nthis paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner\nthat can initialize and adapt any differentiable learner in just one step, on\nboth supervised learning and reinforcement learning. Compared to the popular\nmeta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and\ncan be learned more efficiently. Compared to the latest meta-learner MAML,\nMeta-SGD has a much higher capacity by learning to learn not just the learner\ninitialization, but also the learner update direction and learning rate, all in\na single meta-learning process. Meta-SGD shows highly competitive performance\nfor few-shot learning on regression, classification, and reinforcement\nlearning.", "field": ["Recurrent Neural Networks", "Activation Functions", "Meta-Learning Algorithms"], "task": ["Few-Shot Learning", "Meta-Learning", "Regression"], "method": ["MAML", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Model-Agnostic Meta-Learning", "Sigmoid Activation"], "dataset": ["Mini-Imagenet 20-way (5-shot)", "Mini-Imagenet 20-way (1-shot)"], "metric": ["Accuracy"], "title": "Meta-SGD: Learning to Learn Quickly for Few-Shot Learning"} {"abstract": "Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes. Recent progress on graph convolutional networks has proved that graph convolution is effective in combining structural and content information, and several recent methods based on it have achieved promising clustering performance on some real attributed networks. However, there is limited understanding of how graph convolution affects clustering performance and how to properly use it to optimize performance for different graphs. Existing methods essentially use graph convolution of a fixed and low order that only takes into account neighbours within a few hops of each node, which underutilizes node relations and ignores the diversity of graphs. In this paper, we propose an adaptive graph convolution method for attributed graph clustering that exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. 
Empirical results show that our method compares favourably with state-of-the-art methods.", "field": ["Convolutions"], "task": ["Graph Clustering"], "method": ["Convolution"], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Attributed Graph Clustering via Adaptive Graph Convolution"} {"abstract": "Episodic control provides a highly sample-efficient method for reinforcement learning while enforcing high memory and computational requirements. This work proposes a simple heuristic for reducing these requirements, and an application to Model-Free Episodic Control (MFEC) is presented. Experiments on Atari games show that this heuristic successfully reduces MFEC computational demands while producing no significant loss of performance when conservative choices of hyperparameters are used. Consequently, episodic control becomes a more feasible option when dealing with reinforcement learning tasks.", "field": ["Non-Parametric Regression"], "task": ["Atari Games"], "method": ["Model-Free Episodic Control", "MFEC"], "dataset": ["Atari 2600 River Raid", "Atari 2600 Ms. Pacman", "Atari 2600 Frostbite", "Atari 2600 Space Invaders", "Atari 2600 Q*Bert", "Atari 2600 HERO"], "metric": ["Score", "Best Score"], "title": "Model-Free Episodic Control with State Aggregation"} {"abstract": "We improve the recently-proposed \"MixMatch\" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between $5\\times$ and $16\\times$ less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach $93.73\\%$ accuracy (compared to MixMatch's accuracy of $93.58\\%$ with $4{,}000$ examples) and a median accuracy of $84.92\\%$ with just four labels per class. We make our code and data open-source at https://github.com/google-research/remixmatch.", "field": ["Recurrent Neural Networks", "Activation Functions", "Image Data Augmentation"], "task": ["Image Classification", "Semi-Supervised Image Classification"], "method": ["Long Short-Term Memory", "AutoAugment", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["STL-10", "CIFAR-10, 40 Labels", "cifar10, 250 Labels", "CIFAR-10, 250 Labels", "SVHN, 1000 labels", "STL-10, 1000 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Percentage error", "Percentage correct", "Accuracy"], "title": "ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring"} {"abstract": "We develop a technique for generating smooth and accurate 3D human pose and motion estimates from RGB video sequences. Our method, which we call Motion Estimation via Variational Autoencoder (MEVA), decomposes a temporal sequence of human motion into a smooth motion representation using auto-encoder-based motion compression and a residual representation learned through motion refinement. 
This two-step encoding of human motion captures human motion in two stages: a general human motion estimation step that captures the coarse overall motion, and a residual estimation that adds back person-specific motion details. Experiments show that our method produces both smooth and accurate 3D human pose and motion estimates.", "field": ["Generative Models"], "task": ["3D Human Pose Estimation"], "method": ["AutoEncoder"], "dataset": ["MPI-INF-3DHP", "3DPW"], "metric": ["MJPE", "PA-MPJPE", "MPJPE"], "title": "3D Human Motion Estimation via Motion Compression and Refinement"} {"abstract": "Point cloud is an important type of 3D representation. However, directly applying convolutions on point clouds is challenging due to the sparse, irregular and unordered data structure. In this paper, we propose a novel Interpolated Convolution operation, InterpConv, to tackle the point cloud feature learning and understanding problem. The key idea is to utilize a set of discrete kernel weights and interpolate point features to neighboring kernel-weight coordinates by an interpolation function for convolution. A normalization term is introduced to handle neighborhoods of different sparsity levels. Our InterpConv is shown to be permutation and sparsity invariant, and can directly handle irregular inputs. We further design Interpolated Convolutional Neural Networks (InterpCNNs) based on InterpConv layers to handle point cloud recognition tasks including shape classification, object part segmentation and indoor scene semantic parsing. Experiments show that the networks can capture both fine-grained local structures and global shape context information effectively. The proposed approach achieves state-of-the-art performance on public benchmarks including ModelNet40, ShapeNet Parts and S3DIS.", "field": ["Convolutions"], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Semantic Parsing"], "method": ["Convolution"], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Class Average IoU", "Instance Average IoU"], "title": "Interpolated Convolutional Networks for 3D Point Cloud Understanding"} {"abstract": "We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentation for video self-supervised learning and find both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on the clips that are distant in a video. On the Kinetics-600 dataset, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. 
The performance of CVRL can be further improved to 72.6% with a larger R3D-50 (4$\\times$ filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning.", "field": ["Self-Supervised Learning", "Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition", "Data Augmentation", "Representation Learning", "Self-Supervised Action Recognition", "Self-Supervised Learning", "Unsupervised Pre-training"], "method": ["Average Pooling", "1x1 Convolution", "Normalized Temperature-scaled Cross Entropy Loss", "ResNet", "Convolution", "ReLU", "SimCLR", "Residual Connection", "Dense Connections", "Feedforward Network", "Random Resized Crop", "Batch Normalization", "Residual Network", "ColorJitter", "Kaiming Initialization", "Color Jitter", "NT-Xent", "Random Gaussian Blur", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Spatiotemporal Contrastive Video Representation Learning"} {"abstract": "A coreset is a subset of the training set, using which a machine learning algorithm obtains performances similar to what it would deliver if trained over the whole original data. Coreset discovery is an active and open line of research as it allows improving training speed for the algorithms and may help human understanding the results. Building on previous works, a novel approach is presented: candidate corsets are iteratively optimized, adding and removing samples. As there is an obvious trade-off between limiting training size and quality of the results, a multi-objective evolutionary algorithm is used to minimize simultaneously the number of points in the set and the classification error. Experimental results on non-trivial benchmarks show that the proposed approach is able to deliver results that allow a classifier to obtain lower error and better ability of generalizing on unseen data than state-of-the-art coreset discovery techniques.", "field": ["Graph Embeddings"], "task": ["Core set discovery", "Feature Selection"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["Letter", "Kr-vs-kp", "JM1", "Soybean", "Abalone", "ISOLET", "Mozilla4", "Credit-g", "Glass identification", "Amazon-employee-access", "Electricity", "UCI GAS", "MNIST", "micro-mass"], "metric": ["F1(10-fold)"], "title": "Uncovering Coresets for Classification With Multi-Objective Evolutionary Algorithms"} {"abstract": "Self-attention is a useful mechanism to build generative models for language\nand images. It determines the importance of context elements by comparing each\nelement to the current time step. In this paper, we show that a very\nlightweight convolution can perform competitively to the best reported\nself-attention results. Next, we introduce dynamic convolutions which are\nsimpler and more efficient than self-attention. We predict separate convolution\nkernels based solely on the current time-step in order to determine the\nimportance of context elements. The number of operations required by this\napproach scales linearly in the input length, whereas self-attention is\nquadratic. 
Experiments on large-scale machine translation, language modeling\nand abstractive summarization show that dynamic convolutions improve over\nstrong self-attention models. On the WMT'14 English-German test set dynamic\nconvolutions achieve a new state of the art of 29.7 BLEU.", "field": ["Regularization", "Output Functions", "Activation Functions", "Convolutions", "Feedforward Networks"], "task": ["Abstractive Text Summarization", "Language Modelling", "Machine Translation"], "method": ["Depthwise Convolution", "LightConv", "GLU", "Gated Linear Unit", "Softmax", "Convolution", "DropConnect", "DynamicConv", "Linear Layer", "Dynamic Convolution", "Lightweight Convolution"], "dataset": ["CNN / Daily Mail", "WMT 2017 English-Chinese", "WMT2014 English-German", "IWSLT2014 German-English", "WMT2014 English-French", "One Billion Word"], "metric": ["Number of params", "ROUGE-1", "ROUGE-2", "PPL", "BLEU score", "ROUGE-L"], "title": "Pay Less Attention with Lightweight and Dynamic Convolutions"} {"abstract": "The goal of this paper is to serve as a guide for selecting a detection\narchitecture that achieves the right speed/memory/accuracy balance for a given\napplication and platform. To this end, we investigate various ways to trade\naccuracy for speed and memory usage in modern convolutional object detection\nsystems. A number of successful systems have been proposed in recent years, but\napples-to-apples comparisons are difficult due to different base feature\nextractors (e.g., VGG, Residual Networks), different default image resolutions,\nas well as different hardware and software platforms. We present a unified\nimplementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]\nand SSD [Liu et al., 2015] systems, which we view as \"meta-architectures\" and\ntrace out the speed/accuracy trade-off curve created by using alternative\nfeature extractors and varying other critical parameters such as image size\nwithin each of these meta-architectures. On one extreme end of this spectrum\nwhere speed and memory are critical, we present a detector that achieves real\ntime speeds and can be deployed on a mobile device. On the opposite end in\nwhich accuracy is critical, we present a detector that achieves\nstate-of-the-art performance measured on the COCO detection task.", "field": ["Regularization", "Convolutional Neural Networks", "Proposal Filtering", "Output Functions", "Activation Functions", "RoI Feature Extractors", "Convolutions", "Feedforward Networks", "Pooling Operations", "Region Proposal", "Object Detection Models"], "task": ["Object Detection"], "method": ["RPN", "VGG", "Faster R-CNN", "Softmax", "Non Maximum Suppression", "SSD", "Convolution", "RoIPool", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Position-Sensitive RoI Pooling", "R-FCN", "Dropout", "Region Proposal Network", "Dense Connections", "Max Pooling", "Region-based Fully Convolutional Network"], "dataset": ["COCO test-dev"], "metric": ["box AP"], "title": "Speed/accuracy trade-offs for modern convolutional object detectors"} {"abstract": "The linear-chain Conditional Random Field (CRF) model is one of the most widely-used neural sequence labeling approaches. Exact probabilistic inference algorithms such as the forward-backward and Viterbi algorithms are typically applied in training and prediction stages of the CRF model. However, these algorithms require sequential computation that makes parallelization impossible. 
In this paper, we propose to employ a parallelizable approximate variational inference algorithm for the CRF model. Based on this algorithm, we design an approximate inference network that can be connected with the encoder of the neural CRF model to form an end-to-end network, which is amenable to parallelization for faster training and prediction. The empirical results show that our proposed approaches achieve a 12.7-fold improvement in decoding speed with long sentences and a competitive accuracy compared with the traditional CRF approach.", "field": ["Structured Prediction"], "task": ["Chunking", "Variational Inference"], "method": ["Conditional Random Field", "CRF"], "dataset": ["CoNLL 2003 (English)", "CoNLL 2003 (German)"], "metric": ["F1"], "title": "AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network"} {"abstract": "Region proposal algorithms play an important role in most state-of-the-art two-stage object detection networks by hypothesizing object locations in the image. Nonetheless, region proposal algorithms are known to be the bottleneck in most two-stage object detection networks, increasing the processing time for each image and resulting in slow networks not suitable for real-time applications such as autonomous driving vehicles. In this paper we introduce RRPN, a Radar-based real-time region proposal algorithm for object detection in autonomous driving vehicles. RRPN generates object proposals by mapping Radar detections to the image coordinate system and generating pre-defined anchor boxes for each mapped Radar detection point. These anchor boxes are then transformed and scaled based on the object's distance from the vehicle, to provide more accurate proposals for the detected objects. We evaluate our method on the newly released NuScenes dataset [1] using the Fast R-CNN object detection network [2]. Compared to the Selective Search object proposal algorithm [3], our model operates more than 100x faster while at the same time achieves higher detection precision and recall. Code has been made publicly available at https://github.com/mrnabati/RRPN .", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Autonomous Driving", "Autonomous Vehicles", "Object Detection", "Region Proposal", "Sensor Fusion"], "method": ["Fast R-CNN", "Softmax", "RoIPool", "Convolution", "Selective Search"], "dataset": ["nuScenes-F", "nuScenes-FB"], "metric": ["ARs", "ARI", "ARm", "AP75", "AP", "AP50", "AR"], "title": "RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles"} {"abstract": "Computer-aided assessment of physical rehabilitation entails evaluation of patient performance in completing prescribed rehabilitation exercises, based on processing movement data captured with a sensory system. Despite the essential role of rehabilitation assessment toward improved patient outcomes and reduced healthcare costs, existing approaches lack versatility, robustness, and practical relevance. In this paper, we propose a deep learning-based framework for automated assessment of the quality of physical rehabilitation exercises. The main components of the framework are metrics for quantifying movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for generating quality scores of input movements via supervised learning. 
The proposed performance metric is defined based on the log-likelihood of a Gaussian mixture model, and encodes low-dimensional data representation obtained with a deep autoencoder network. The proposed deep spatio-temporal neural network arranges data into temporal pyramids, and exploits the spatial characteristics of human movements by using sub-networks to process joint displacements of individual body parts. The presented framework is validated using a dataset of ten rehabilitation exercises. The significance of this work is that it is the first that implements deep neural networks for assessment of rehabilitation performance.", "field": ["Generative Models", "Degridding", "Pooling Operations"], "task": ["Action Quality Assessment", "Dimensionality Reduction", "Motion Estimation"], "method": ["Hierarchical Feature Fusion", "AutoEncoder", "Spatial Pyramid Pooling"], "dataset": ["KIMORE", "UI-PRMD"], "metric": ["Average mean absolute error"], "title": "A Deep Learning Framework for Assessing Physical Rehabilitation Exercises"} {"abstract": "We propose a simple architecture to address unpaired image-to-image translation tasks: style or class transfer, denoising, deblurring, deblocking, etc. We start from an image autoencoder architecture with fixed weights. For each task we learn a residual block operating in the latent space, which is iteratively called until the target domain is reached. A specific training schedule is required to alleviate the exponentiation effect of the iterations. At test time, it offers several advantages: the number of weight parameters is limited and the compositional design allows one to modulate the strength of the transformation with the number of iterations. This is useful, for instance, when the type or amount of noise to suppress is not known in advance. Experimentally, we provide proofs of concepts showing the interest of our method for many transformations. The performance of our model is comparable or better than CycleGAN with significantly fewer parameters.", "field": ["Discriminators", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Deblurring", "Denoising", "Image-to-Image Translation"], "method": ["Cycle Consistency Loss", "Instance Normalization", "PatchGAN", "GAN Least Squares Loss", "Convolution", "Tanh Activation", "Batch Normalization", "ReLU", "CycleGAN", "Residual Connection", "AutoEncoder", "Leaky ReLU", "Residual Block", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["vangogh2photo", "zebra2horse", "horse2zebra", "photo2vangogh"], "metric": ["Number of params", "Number of Params", "Frechet Inception Distance"], "title": "Powers of layers for image-to-image translation"} {"abstract": "Sleep stage classification constitutes an important element of sleep disorder diagnosis. It relies on the visual inspection of polysomnography records by trained sleep technologists. Automated approaches have been designed to alleviate this resource-intensive task. However, such approaches are usually compared to a single human scorer annotation despite an inter-rater agreement of about 85 % only. The present study introduces two publicly-available datasets, DOD-H including 25 healthy volunteers and DOD-O including 55 patients suffering from obstructive sleep apnea (OSA). Both datasets have been scored by 5 sleep technologists from different sleep centers. 
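The rehabilitation-assessment record above defines its performance metric as the log-likelihood of a Gaussian mixture model fitted to encoded movements. The snippet below is a rough sketch of that scoring idea using scikit-learn, with random features standing in for the autoencoder outputs; all names and values are assumptions for illustration.

```python
# Minimal sketch (assumption: illustrative only) of scoring movements by the
# log-likelihood of a GMM fitted to low-dimensional encodings of "correct"
# exercise executions; random vectors stand in for autoencoder features here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
correct = rng.normal(0.0, 1.0, size=(200, 4))        # stand-in for encoded correct movements
gmm = GaussianMixture(n_components=3, random_state=0).fit(correct)

test = rng.normal(0.5, 1.0, size=(5, 4))              # stand-in for new movements
print(gmm.score_samples(test))                        # per-sample log-likelihood as quality metric
```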
We developed a framework to compare automated approaches to a consensus of multiple human scorers. Using this framework, we benchmarked and compared the main literature approaches. We also developed and benchmarked a new deep learning method, SimpleSleepNet, inspired by current state-of-the-art. We demonstrated that many methods can reach human-level performance on both datasets. SimpleSleepNet achieved an F1 of 89.9 % vs 86.8 % on average for human scorers on DOD-H, and an F1 of 88.3 % vs 84.8 % on DOD-O. Our study highlights that using state-of-the-art automated sleep staging outperforms human scorers performance for healthy volunteers and patients suffering from OSA. Consideration could be made to use automated approaches in the clinical setting.", "field": ["Recurrent Neural Networks"], "task": ["Automatic Sleep Stage Classification", "Multimodal Sleep Stage Detection", "Sleep Stage Detection"], "method": ["Gated Recurrent Unit", "GRU"], "dataset": ["DODH", "MASS SS3", "DODO"], "metric": ["Kappa", "Accuracy"], "title": "Dreem Open Datasets: Multi-Scored Sleep Datasets to compare Human and Automated sleep staging"} {"abstract": "Sign language recognition is a challenging problem where signs are identified by simultaneous local and global articulations of multiple sources, i.e. hand shape and orientation, hand movements, body posture, and facial expressions. Solving this problem computationally for a large vocabulary of signs in real life settings is still a challenge, even with the state-of-the-art models. In this study, we present a new largescale multi-modal Turkish Sign Language dataset (AUTSL) with a benchmark and provide baseline models for performance evaluations. Our dataset consists of 226 signs performed by 43 different signers and 38,336 isolated sign video samples in total. Samples contain a wide variety of backgrounds recorded in indoor and outdoor environments. Moreover, spatial positions and the postures of signers also vary in the recordings. Each sample is recorded with Microsoft Kinect v2 and contains RGB, depth, and skeleton modalities. We prepared benchmark training and test sets for user independent assessments of the models. We trained several deep learning based models and provide empirical evaluations using the benchmark; we used CNNs to extract features, unidirectional and bidirectional LSTM models to characterize temporal information. We also incorporated feature pooling modules and temporal attention to our models to improve the performances. We evaluated our baseline models on AUTSL and Montalbano datasets. Our models achieved competitive results with the state-of-the-art methods on Montalbano dataset, i.e. 96.11% accuracy. In AUTSL random train-test splits, our models performed up to 95.95% accuracy. In the proposed user-independent benchmark dataset our best baseline model achieved 62.02% accuracy. The gaps in the performances of the same baseline models show the challenges inherent in our benchmark dataset. AUTSL benchmark dataset is publicly available at https://cvml.ankara.edu.tr.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Sign Language Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["AUTSL"], "metric": ["Rank-1 Recognition Rate"], "title": "AUTSL: A Large Scale Multi-modal Turkish Sign Language Dataset and Baseline Methods"} {"abstract": "We propose a reparameterization of LSTM that brings the benefits of batch\nnormalization to recurrent neural networks. 
Whereas previous works only apply\nbatch normalization to the input-to-hidden transformation of RNNs, we\ndemonstrate that it is both possible and beneficial to batch-normalize the\nhidden-to-hidden transition, thereby reducing internal covariate shift between\ntime steps. We evaluate our proposal on various sequential problems such as\nsequence classification, language modeling and question answering. Our\nempirical results show that our batch-normalized LSTM consistently leads to\nfaster convergence and improved generalization.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Language Modelling", "Question Answering", "Reading Comprehension", "Sequential Image Classification"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Text8", "Sequential MNIST"], "metric": ["Number of params", "Unpermuted Accuracy", "Bit per Character (BPC)", "Permuted Accuracy"], "title": "Recurrent Batch Normalization"} {"abstract": "We present a novel 3D pose estimation method based on joint interdependency (JI) for acquiring 3D joints from the human pose of an RGB image. The JI incorporates the body part based structural connectivity of joints to learn the high spatial correlation of human posture on our method. Towards this goal, we propose a new long short-term memory (LSTM)-based deep learning architecture named propagating LSTM networks (p-LSTMs), where each LSTM is connected sequentially to reconstruct 3D depth from the centroid to edge joints through learning the intrinsic JI. In the first LSTM, the seed joints of 3D pose are created and reconstructed into the whole-body joints through the connected LSTMs. Utilizing the p-LSTMs, we achieve the higher accuracy of about 11.2% than state-of-the-art methods on the largest publicly available database. Importantly, we demonstrate that the JI drastically reduces the structural errors at body edges, thereby leads to a significant improvement.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Propagating LSTM: 3D Pose Estimation based on Joint Interdependency"} {"abstract": "Accurate detection of objects in 3D point clouds is a central problem in many\napplications, such as autonomous navigation, housekeeping robots, and\naugmented/virtual reality. To interface a highly sparse LiDAR point cloud with\na region proposal network (RPN), most existing efforts have focused on\nhand-crafted feature representations, for example, a bird's eye view\nprojection. In this work, we remove the need of manual feature engineering for\n3D point clouds and propose VoxelNet, a generic 3D detection network that\nunifies feature extraction and bounding box prediction into a single stage,\nend-to-end trainable deep network. Specifically, VoxelNet divides a point cloud\ninto equally spaced 3D voxels and transforms a group of points within each\nvoxel into a unified feature representation through the newly introduced voxel\nfeature encoding (VFE) layer. In this way, the point cloud is encoded as a\ndescriptive volumetric representation, which is then connected to a RPN to\ngenerate detections. Experiments on the KITTI car detection benchmark show that\nVoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a\nlarge margin. 
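The VoxelNet record above partitions a point cloud into equally spaced voxels before the voxel feature encoding (VFE) layer. The following is a minimal, assumption-laden sketch of that grouping step (hypothetical voxel sizes, no VFE layer), not the authors' implementation.

```python
# Minimal sketch (assumption: not the paper's code) of grouping a point cloud
# into equally spaced voxels, the first step before a VFE layer is applied.
import numpy as np
from collections import defaultdict


def voxelize(points, voxel_size=(0.2, 0.2, 0.4)):
    """points: (N, 3) xyz coordinates; returns {voxel_index: array of points}."""
    indices = np.floor(points / np.asarray(voxel_size)).astype(int)
    voxels = defaultdict(list)
    for idx, p in zip(map(tuple, indices), points):
        voxels[idx].append(p)
    return {k: np.stack(v) for k, v in voxels.items()}


if __name__ == "__main__":
    pts = np.random.rand(1000, 3) * np.array([70.0, 80.0, 4.0])
    grid = voxelize(pts)
    print(len(grid), "non-empty voxels")
```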
Furthermore, our network learns an effective discriminative\nrepresentation of objects with various geometries, leading to encouraging\nresults in 3D detection of pedestrians and cyclists, based on only LiDAR.", "field": ["Region Proposal"], "task": ["3D Object Detection", "Autonomous Navigation", "Feature Engineering", "Object Detection", "Object Localization", "Region Proposal"], "method": ["Region Proposal Network", "RPN"], "dataset": ["KITTI Cars Hard", "KITTI Pedestrians Hard", "KITTI Cars Moderate", "KITTI Cyclists Hard", "KITTI Pedestrians Moderate", "KITTI Cyclists Moderate", "KITTI Cars Moderate val", "KITTI Pedestrian Moderate val", "KITTI Cars Hard val", "KITTI Pedestrian Easy val", "KITTI Pedestrians Easy", "KITTI Cyclist Moderate val", "KITTI Cyclist Easy val", "KITTI Cyclists Easy", "KITTI Cyclist Hard val", "KITTI Pedestrian Hard val", "KITTI Cars Easy val", "KITTI Cars Easy"], "metric": ["AP"], "title": "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection"} {"abstract": "Experience replay lets online reinforcement learning agents remember and\nreuse experiences from the past. In prior work, experience transitions were\nuniformly sampled from a replay memory. However, this approach simply replays\ntransitions at the same frequency that they were originally experienced,\nregardless of their significance. In this paper we develop a framework for\nprioritizing experience, so as to replay important transitions more frequently,\nand therefore learn more efficiently. We use prioritized experience replay in\nDeep Q-Networks (DQN), a reinforcement learning algorithm that achieved\nhuman-level performance across many Atari games. DQN with prioritized\nexperience replay achieves a new state-of-the-art, outperforming DQN with\nuniform replay on 41 out of 49 games.", "field": ["Q-Learning Networks", "Off-Policy TD Control", "Convolutions", "Feedforward Networks", "Replay Memory"], "task": ["Atari Games"], "method": ["Q-Learning", "Prioritized Experience Replay", "Convolution", "Experience Replay", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "Prioritized Experience Replay"} {"abstract": "Deep residual networks were shown to be able to scale up to thousands of\nlayers and still have improving performance. However, each fraction of a\npercent of improved accuracy costs nearly doubling the number of layers, and so\ntraining very deep residual networks has a problem of diminishing feature\nreuse, which makes these networks very slow to train. To tackle these problems,\nin this paper we conduct a detailed experimental study on the architecture of\nResNet blocks, based on which we propose a novel architecture where we decrease\ndepth and increase width of residual networks. We call the resulting network\nstructures wide residual networks (WRNs) and show that these are far superior\nover their commonly used thin and very deep counterparts. For example, we\ndemonstrate that even a simple 16-layer-deep wide residual network outperforms\nin accuracy and efficiency all previous deep residual networks, including\nthousand-layer-deep networks, achieving new state-of-the-art results on CIFAR,\nSVHN, COCO, and significant improvements on ImageNet. Our code and models are\navailable at https://github.com/szagoruyko/wide-residual-networks", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["Weight Decay", "Average Pooling", "Random Horizontal Flip", "Random Resized Crop", "Convolution", "Batch Normalization", "ReLU", "Residual Connection", "Dropout", "Wide Residual Block", "Nesterov Accelerated Gradient", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "WideResNet"], "dataset": ["CIFAR-100", "SVHN", "ImageNet", "CIFAR-10"], "metric": ["Number of params", "Top 1 Accuracy", "Percentage error", "Percentage correct", "Top 5 Accuracy"], "title": "Wide Residual Networks"} {"abstract": "Action recognition has already been a heated research topic recently, which attempts to classify different human actions in videos. The current main-stream methods generally utilize ImageNet-pretrained model as features extractor, however it's not the optimal choice to pretrain a model for classifying videos on a huge still image dataset. What's more, very few works notice that 3D convolution neural network(3D CNN) is better for low-level spatial-temporal features extraction while recurrent neural network(RNN) is better for modelling high-level temporal feature sequences. Consequently, a novel model is proposed in our work to address the two problems mentioned above. First, we pretrain 3D CNN model on huge video action recognition dataset Kinetics to improve generality of the model. And then long short term memory(LSTM) is introduced to model the high-level temporal features produced by the Kinetics-pretrained 3D CNN model. 
Our experimental results show that the Kinetics-pretrained model generally outperforms the ImageNet-pretrained model, and our proposed network achieves leading performance on the UCF-101 dataset.", "field": ["Convolutions"], "task": ["Action Recognition", "Temporal Action Localization"], "method": ["3D Convolution", "Convolution"], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "I3D-LSTM: A New Model for Human Action Recognition"} {"abstract": "Current state-of-the-art convolutional architectures for object detection are\nmanually designed. Here we aim to learn a better architecture of feature\npyramid network for object detection. We adopt Neural Architecture Search and\ndiscover a new feature pyramid architecture in a novel scalable search space\ncovering all cross-scale connections. The discovered architecture, named\nNAS-FPN, consists of a combination of top-down and bottom-up connections to\nfuse features across scales. NAS-FPN, combined with various backbone models in\nthe RetinaNet framework, achieves better accuracy and latency tradeoff compared\nto state-of-the-art object detection models. NAS-FPN improves mobile detection\naccuracy by 2 AP compared to state-of-the-art SSDLite with MobileNetV2 model in\n[32] and achieves 48.3 AP which surpasses Mask R-CNN [10] detection accuracy\nwith less computation time.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Feature Extractors", "Recurrent Neural Networks", "Activation Functions", "Output Functions", "RoI Feature Extractors", "Normalization", "Loss Functions", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Object Detection Models", "Image Models", "Skip Connection Blocks"], "task": ["Neural Architecture Search", "Object Detection", "Real-Time Object Detection"], "method": ["Depthwise Convolution", "Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "RoIAlign", "Spatially Separable Convolution", "ResNet", "MobileNetV2", "Convolution", "NAS-FPN", "ReLU", "Residual Connection", "AmoebaNet", "Focal Loss", "Batch Normalization", "Residual Network", "Pointwise Convolution", "Kaiming Initialization", "Step Decay", "Sigmoid Activation", "DropBlock", "Inverted Residual Block", "Softmax", "LSTM", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Mask R-CNN", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO"], "metric": ["inference time (ms)", "FPS", "MAP"], "title": "NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection"} {"abstract": "This paper presents MAST, a new model for Multimodal Abstractive Text Summarization that utilizes information from all three modalities -- text, audio and video -- in a multimodal video. Prior work on multimodal abstractive text summarization only utilized information from the text and video modalities. We examine the usefulness and challenges of deriving information from the audio modality and present a sequence-to-sequence trimodal hierarchical attention-based model that overcomes these challenges by letting the model pay more attention to the text modality. 
MAST outperforms the current state of the art model (video-text) by 2.51 points in terms of Content F1 score and 1.00 points in terms of Rouge-L score on the How2 dataset for multimodal language understanding.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Abstractive Text Summarization", "Multimodal Abstractive Text Summarization", "Text Summarization"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["How2 300h"], "metric": ["ROUGE-L"], "title": "MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical Attention"} {"abstract": "We present a novel hierarchical triplet loss (HTL) capable of automatically\ncollecting informative training samples (triplets) via a defined hierarchical\ntree that encodes global context information. This allows us to cope with the\nmain limitation of random sampling in training a conventional triplet loss,\nwhich is a central issue for deep metric learning. Our main contributions are\ntwo-fold. (i) we construct a hierarchical class-level tree where neighboring\nclasses are merged recursively. The hierarchical structure naturally captures\nthe intrinsic data distribution over the whole database. (ii) we formulate the\nproblem of triplet collection by introducing a new violate margin, which is\ncomputed dynamically based on the designed hierarchical tree. This allows it to\nautomatically select meaningful hard samples with the guide of global context.\nIt encourages the model to learn more discriminative features from visual\nsimilar classes, leading to faster convergence and better performance. Our\nmethod is evaluated on the tasks of image retrieval and face recognition, where\nit outperforms the standard triplet loss substantially by 1%-18%. It achieves\nnew state-of-the-art performance on a number of benchmarks, with much fewer\nlearning iterations.", "field": ["Loss Functions"], "task": ["Face Recognition", "Hierarchical structure", "Image Retrieval", "Metric Learning"], "method": ["Triplet Loss"], "dataset": ["CARS196"], "metric": ["R@1"], "title": "Deep Metric Learning with Hierarchical Triplet Loss"} {"abstract": "Mutual information is widely applied to learn latent representations of observations, whilst its implication in classification neural networks remain to be better explained. We show that optimising the parameters of classification neural networks with softmax cross-entropy is equivalent to maximising the mutual information between inputs and labels under the balanced data assumption. Through experiments on synthetic and real datasets, we show that softmax cross-entropy can estimate mutual information approximately. When applied to image classification, this relation helps approximate the point-wise mutual information between an input image and a label without modifying the network structure. To this end, we propose infoCAM, informative class activation map, which highlights regions of the input image that are the most relevant to a given label based on differences in information. The activation map helps localise the target object in an input image. 
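The hierarchical triplet loss record above builds on the standard margin-based triplet loss, replacing the fixed margin with a "violate margin" computed from a class-level tree. For orientation, here is a plain triplet loss sketch in PyTorch; the dynamic margin and hierarchical sampling are deliberately not modeled.

```python
# Minimal sketch (assumption: illustrative) of a margin-based triplet loss; the
# hierarchical variant described above additionally adapts the margin per
# triplet using a class-level tree, which this sketch does not implement.
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()


if __name__ == "__main__":
    a, p, n = (torch.randn(16, 128) for _ in range(3))
    print(float(triplet_loss(a, p, n)))
```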
Through experiments on the semi-supervised object localisation task with two real-world datasets, we evaluate the effectiveness of our information-theoretic approach.", "field": ["Output Functions"], "task": ["Fine-Grained Image Classification", "Image Classification", "Weakly-Supervised Object Localization"], "method": ["Softmax"], "dataset": [" CUB-200-2011", "Tiny ImageNet", "Imbalanced CUB-200-2011"], "metric": ["Top-1 Error Rate", "Top-1 Localization Accuracy", "Average Per-Class Accuracy", "Accuracy"], "title": "Rethinking Softmax with Cross-Entropy: Neural Network Classifier as Mutual Information Estimator"} {"abstract": "Learning compressed representations of multivariate time series (MTS) facilitates data analysis in the presence of noise and redundant information, and for a large number of variates and time steps. However, classical dimensionality reduction approaches are designed for vectorial data and cannot deal explicitly with missing values. In this work, we propose a novel autoencoder architecture based on recurrent neural networks to generate compressed representations of MTS. The proposed model can process inputs characterized by variable lengths and it is specifically designed to handle missing data. Our autoencoder learns fixed-length vectorial representations, whose pairwise similarities are aligned to a kernel function that operates in input space and that handles missing values. This allows learning good representations, even in the presence of a significant amount of missing data. To show the effectiveness of the proposed approach, we evaluate the quality of the learned representations in several classification tasks, including those involving medical data, and we compare to other methods for dimensionality reduction. Successively, we design two frameworks based on the proposed architecture: one for imputing missing data and another for one-class classification. Finally, we analyze under what circumstances an autoencoder with recurrent layers can learn better compressed representations of MTS than feed-forward architectures.", "field": ["Generative Models"], "task": ["Dimensionality Reduction", "Imputation", "Time Series", "Time Series Classification"], "method": ["AutoEncoder"], "dataset": ["Physionet 2017 Atrial Fibrillation"], "metric": ["AUC"], "title": "Learning representations for multivariate time series with missing data using Temporal Kernelized Autoencoders"} {"abstract": "Face recognition in the unconstrained environment is an ongoing research challenge. Although several covariates of face recognition such as pose and low resolution have received significant attention, \u201cdisguise\u201d is considered an onerous covariate of face recognition. One of the primary reasons for this is the scarcity of large and representative labeled databases, along with the lack of algorithms that work well for multiple covariates in such environments. In order to address the problem of face recognition in the presence of disguise, the paper proposes an active learning framework termed A2-LINK. Starting with a face recognition machine-learning model, A2-LINK intelligently selects training samples from the target domain to be labeled and, using hybrid noises such as adversarial noise, fine-tunes a model that works well both in the presence and absence of disguise. 
Experimental results demonstrate the effectiveness and generalization of the proposed framework on the DFW and DFW2019 datasets with state-of-the-art deep learning featurization models such as LCSSE, ArcFace, and DenseNet.", "field": ["Initialization", "Output Functions", "Regularization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Active Learning", "Domain Adaptation", "Face Recognition", "Heterogeneous Face Recognition"], "method": ["Dense Block", "Average Pooling", "Dense Connections", "Global Average Pooling", "Softmax", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Dropout", "DenseNet", "Kaiming Initialization", "ArcFace", "Additive Angular Margin Loss", "Rectified Linear Units", "Max Pooling"], "dataset": ["Disguised Faces in the Wild", "Disguised Faces in the Wild 2019"], "metric": ["GAR @0.01% FAR Plastic Surgery", "GAR @1% FAR Impersonation", "GAR @0.01% FAR Overall", "GAR @0.01% FAR Impersonation", "GAR @0.1% FAR Obfuscation", "GAR @0.1% FAR Plastic Surgery", "GAR @1% FAR Obfuscation", "GAR @0.01% FAR Obfuscation", "GAR @0.1% FAR Overall", "GAR @0.1% FAR Impersonation", "GAR @1% FAR Overall"], "title": "A2-LINK: Recognizing Disguised Faces via Active Learning and Adversarial Noise based Inter-Domain Knowledge"} {"abstract": "Presently the most successful approaches to semi-supervised learning are\nbased on consistency regularization, whereby a model is trained to be robust to\nsmall perturbations of its inputs and parameters. To understand consistency\nregularization, we conceptually explore how loss geometry interacts with\ntraining procedures. The consistency loss dramatically improves generalization\nperformance over supervised-only training; however, we show that SGD struggles\nto converge on the consistency loss and continues to make large steps that lead\nto changes in predictions on the test data. Motivated by these observations, we\npropose to train consistency-based methods with Stochastic Weight Averaging\n(SWA), a recent approach which averages weights along the trajectory of SGD\nwith a modified learning rate schedule. We also propose fast-SWA, which further\naccelerates convergence by averaging multiple points within each cycle of a\ncyclical learning rate schedule. With weight averaging, we achieve the best\nknown semi-supervised results on CIFAR-10 and CIFAR-100, over many different\nquantities of labeled training data. For example, we achieve 5.0% error on\nCIFAR-10 with only 4000 labels, compared to the previous best result in the\nliterature of 6.3%.", "field": ["Stochastic Optimization"], "task": ["Domain Adaptation", "Semi-Supervised Image Classification"], "method": ["Stochastic Gradient Descent", "SGD"], "dataset": ["CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average"} {"abstract": "Consecutive frames in a video are highly redundant. Therefore, to perform the task of video object detection, executing single frame detectors on every frame without reusing any information is quite wasteful. It is with this idea in mind that we propose RN-VID (standing for RetinaNet-VIDeo), a novel approach to video object detection. Our contributions are twofold. First, we propose a new architecture that allows the usage of information from nearby frames to enhance feature maps. 
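The consistency-regularization record above relies on Stochastic Weight Averaging, i.e., keeping a running average of model weights along the SGD trajectory and using the averaged weights at evaluation time. Below is a small sketch of that averaging step under our own simplifications (the cyclical learning-rate schedule of fast-SWA is omitted); it is not the authors' code.

```python
# Minimal sketch (assumption: illustrative) of stochastic weight averaging: a
# running average of model parameters is maintained across SGD checkpoints.
import copy
import torch


def update_swa(swa_model, model, n_averaged):
    """In-place running average of `model` parameters into `swa_model`."""
    with torch.no_grad():
        for p_swa, p in zip(swa_model.parameters(), model.parameters()):
            p_swa.mul_(n_averaged / (n_averaged + 1)).add_(p / (n_averaged + 1))
    return n_averaged + 1


if __name__ == "__main__":
    model = torch.nn.Linear(4, 2)
    swa_model = copy.deepcopy(model)
    n = 1
    for _ in range(5):                   # pretend these are successive SGD checkpoints
        for p in model.parameters():
            p.data.add_(0.01 * torch.randn_like(p))
        n = update_swa(swa_model, model, n)
    print(n)
```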
Second, we propose a novel module to merge feature maps of same dimensions using re-ordering of channels and 1 x 1 convolutions. We then demonstrate that RN-VID achieves better mean average precision (mAP) than corresponding single frame detectors with little additional cost during inference.", "field": ["Convolutions"], "task": ["Object Detection", "Video Object Detection"], "method": ["1x1 Convolution"], "dataset": ["UAVDT", "UA-DETRAC"], "metric": ["mAP"], "title": "RN-VID: A Feature Fusion Architecture for Video Object Detection"} {"abstract": "Classical work on line segment detection is knowledge-based; it uses carefully designed geometric priors using either image gradients, pixel groupings, or Hough transform variants. Instead, current deep learning methods do away with all prior knowledge and replace priors by training deep networks on large manually annotated datasets. Here, we reduce the dependency on labeled data by building on the classic knowledge-based priors while using deep networks to learn features. We add line priors through a trainable Hough transform block into a deep network. Hough transform provides the prior knowledge about global line parameterizations, while the convolutional layers can learn the local gradient-like line features. On the Wireframe (ShanghaiTech) and York Urban datasets we show that adding prior knowledge improves data efficiency as line priors no longer need to be learned from data. Keywords: Hough transform; global line prior, line segment detection.", "field": ["Graph Embeddings"], "task": ["Line Segment Detection"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["York Urban Dataset", "wireframe dataset"], "metric": ["sAP10", "sAP5"], "title": "Deep Hough-Transform Line Priors"} {"abstract": "In this work, we build on recent advances in distributional reinforcement\nlearning to give a generally applicable, flexible, and state-of-the-art\ndistributional variant of DQN. We achieve this by using quantile regression to\napproximate the full quantile function for the state-action return\ndistribution. By reparameterizing a distribution over the sample space, this\nyields an implicitly defined return distribution and gives rise to a large\nclass of risk-sensitive policies. 
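The implicit quantile networks record above fits return quantiles with quantile regression. A commonly used form of that objective is the quantile Huber loss; the sketch below is an illustrative PyTorch version under our own assumptions about tensor shapes, not the paper's implementation.

```python
# Minimal sketch (assumption: illustrative, not the authors' code) of the
# quantile (Huber) regression loss used to fit return quantiles in
# distributional RL; tau holds sampled quantile fractions in [0, 1].
import torch
import torch.nn.functional as F


def quantile_huber_loss(pred, target, tau, kappa=1.0):
    """pred, tau: (batch, n_quantiles); target: (batch, n_targets)."""
    # Pairwise TD errors between target samples and predicted quantiles.
    diff = target.unsqueeze(1) - pred.unsqueeze(2)                    # (b, nq, nt)
    huber = F.huber_loss(pred.unsqueeze(2).expand_as(diff),
                         target.unsqueeze(1).expand_as(diff),
                         delta=kappa, reduction='none')
    # Asymmetric weighting by |tau - 1{diff < 0}| gives quantile regression.
    weight = torch.abs(tau.unsqueeze(2) - (diff.detach() < 0).float())
    return (weight * huber).mean()


if __name__ == "__main__":
    b, nq, nt = 4, 8, 8
    loss = quantile_huber_loss(torch.randn(b, nq), torch.randn(b, nt),
                               torch.rand(b, nq))
    print(float(loss))
```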
We demonstrate improved performance on the 57\nAtari 2600 games in the ALE, and use our algorithm's implicitly defined\ndistributions to study the effects of risk-sensitive policies in Atari games.", "field": ["Q-Learning Networks", "Convolutions", "Feedforward Networks", "Off-Policy TD Control"], "task": ["Atari Games", "Distributional Reinforcement Learning", "Regression"], "method": ["Q-Learning", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Implicit Quantile Networks for Distributional Reinforcement Learning"} {"abstract": "The success of deep neural networks generally requires a vast amount of\ntraining data to be labeled, which is expensive and unfeasible in scale,\nespecially for video collections. To alleviate this problem, in this paper, we\npropose 3DRotNet: a fully self-supervised approach to learn spatiotemporal\nfeatures from unlabeled videos. A set of rotations are applied to all videos,\nand a pretext task is defined as prediction of these rotations. When\naccomplishing this task, 3DRotNet is actually trained to understand the\nsemantic concepts and motions in videos. In other words, it learns a\nspatiotemporal video representation, which can be transferred to improve video\nunderstanding tasks in small datasets. Our extensive experiments successfully\ndemonstrate the effectiveness of the proposed framework on action recognition,\nleading to significant improvements over the state-of-the-art self-supervised\nmethods. 
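The 3DRotNet record above defines its pretext task as predicting which of four rotations was applied to a clip. A minimal sketch of constructing such rotation labels is shown below; the clip shapes and helper names are hypothetical and not from the paper.

```python
# Minimal sketch (assumption: illustrative) of a rotation-prediction pretext
# task: each clip is rotated by 0/90/180/270 degrees and the rotation index
# becomes the self-supervised label.
import torch


def make_rotation_batch(clips):
    """clips: (batch, channels, time, height, width) video tensors with H == W."""
    rotated, labels = [], []
    for k in range(4):                               # four rotation classes
        # Rotate every frame in the spatial plane by k * 90 degrees.
        rotated.append(torch.rot90(clips, k, dims=(3, 4)))
        labels.append(torch.full((clips.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


if __name__ == "__main__":
    x, y = make_rotation_batch(torch.randn(2, 3, 8, 32, 32))
    print(x.shape, y.tolist())    # torch.Size([8, 3, 8, 32, 32]) [0, 0, 1, 1, 2, 2, 3, 3]
```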
With the self-supervised pre-trained 3DRotNet from large datasets, the\nrecognition accuracy is boosted up by 20.4% on UCF101 and 16.7% on HMDB51\nrespectively, compared to the models trained from scratch.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition", "Self-Supervised Action Recognition", "Temporal Action Localization", "Video Understanding"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-Supervised Spatiotemporal Feature Learning via Video Rotation Prediction"} {"abstract": "In this work, we present a simple, highly efficient and modularized Dual Path\nNetwork (DPN) for image classification which presents a new topology of\nconnection paths internally. By revealing the equivalence of the\nstate-of-the-art Residual Network (ResNet) and Densely Convolutional Network\n(DenseNet) within the HORNN framework, we find that ResNet enables feature\nre-usage while DenseNet enables new features exploration which are both\nimportant for learning good representations. To enjoy the benefits from both\npath topologies, our proposed Dual Path Network shares common features while\nmaintaining the flexibility to explore new features through dual path\narchitectures. Extensive experiments on three benchmark datasets, ImagNet-1k,\nPlaces365 and PASCAL VOC, clearly demonstrate superior performance of the\nproposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset,\na shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model\nsize, 25% less computational cost and 8% lower memory consumption, and a deeper\nDPN (DPN-131) further pushes the state-of-the-art single model performance with\nabout 2 times faster training speed. Experiments on the Places365 large-scale\nscene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation\ndataset also demonstrate its consistently better performance than DenseNet,\nResNet and the latest ResNeXt model over various applications.", "field": ["Image Data Augmentation", "Regularization", "Convolutional Neural Networks", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["Stochastic Gradient Descent", "DPN", "Dense Block", "Average Pooling", "Softmax", "Random Horizontal Flip", "Random Resized Crop", "DPN Block", "Concatenated Skip Connection", "Convolution", "SGD", "1x1 Convolution", "Dual Path Network", "Dropout", "DenseNet", "Dense Connections", "Step Decay"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Dual Path Networks"} {"abstract": "We consider the problem of distance metric learning (DML), where the task is to learn an effective similarity measure between images. We revisit ProxyNCA and incorporate several enhancements. We find that low temperature scaling is a performance-critical component and explain why it works. 
Besides, we also discover that Global Max Pooling works better in general when compared to Global Average Pooling. Additionally, our proposed fast moving proxies also addresses small gradient issue of proxies, and this component synergizes well with low temperature scaling and Global Max Pooling. Our enhanced model, called ProxyNCA++, achieves a 22.9 percentage point average improvement of Recall@1 across four different zero-shot retrieval datasets compared to the original ProxyNCA algorithm. Furthermore, we achieve state-of-the-art results on the CUB200, Cars196, Sop, and InShop datasets, achieving Recall@1 scores of 72.2, 90.1, 81.4, and 90.9, respectively.", "field": ["Pooling Operations"], "task": ["Image Retrieval", "Metric Learning"], "method": ["Global Average Pooling", "Max Pooling", "Average Pooling"], "dataset": ["SOP", " CUB-200-2011", "In-Shop", "CARS196", "CUB-200-2011", "Stanford Online Products"], "metric": ["R@1"], "title": "ProxyNCA++: Revisiting and Revitalizing Proxy Neighborhood Component Analysis"} {"abstract": "In this paper, we address the problem of joint detection of objects like dog and its semantic parts like face, leg, etc. Our model is created on top of two Faster-RCNN models that share their features to perform a novel Attention-based feature fusion of related Object and Part features to get enhanced representations of both. These representations are used for final classification and bounding box regression separately for both models. Our experiments on the PASCAL-Part 2010 dataset show that joint detection can simultaneously improve both object detection and part detection in terms of mean Average Precision (mAP) at IoU=0.5.", "field": ["Object Detection Models", "Output Functions", "Attention Modules", "RoI Feature Extractors", "Convolutions", "Attention Mechanisms", "Region Proposal"], "task": ["Object Detection", "Regression", "Semantic Part Detection"], "method": ["RPN", "Single-Headed Attention", "Faster R-CNN", "Softmax", "Additive Attention", "Convolution", "RoIPool", "Region Proposal Network"], "dataset": ["PASCAL Part 2010 - Animals"], "metric": ["mAP@0.5"], "title": "Attention-based Joint Detection of Object and Semantic Part"} {"abstract": "Although using convolutional neural networks (CNNs) as backbones achieves great successes in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image classification, we propose Pyramid Vision Transformer~(PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to prior arts. (1) Different from ViT that typically has low-resolution outputs and high computational and memory cost, PVT can be not only trained on dense partitions of the image to achieve high output resolution, which is important for dense predictions but also using a progressive shrinking pyramid to reduce computations of large feature maps. (2) PVT inherits the advantages from both CNN and Transformer, making it a unified backbone in various vision tasks without convolutions by simply replacing CNN backbones. (3) We validate PVT by conducting extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection, semantic, and instance segmentation. 
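The ProxyNCA++ record above highlights low temperature scaling as a performance-critical component. The sketch below shows one plausible way a temperature enters a proxy-based softmax loss (our own simplification using squared Euclidean distances); the temperature value and function names are assumptions, not the authors' settings.

```python
# Minimal sketch (assumption: not the authors' code) of a proxy-based loss with
# low temperature scaling: distances to class proxies are divided by a small
# temperature before the softmax cross-entropy, sharpening the distribution.
import torch
import torch.nn.functional as F


def proxy_loss(embeddings, proxies, labels, temperature=0.11):
    """embeddings: (B, D); proxies: (C, D); labels: (B,) class ids."""
    emb = F.normalize(embeddings, dim=1)
    prox = F.normalize(proxies, dim=1)
    # Negative squared Euclidean distance as the logit for each class proxy.
    logits = -(torch.cdist(emb, prox) ** 2) / temperature     # (B, C)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    e = torch.randn(8, 64, requires_grad=True)
    p = torch.randn(10, 64, requires_grad=True)
    y = torch.randint(0, 10, (8,))
    print(float(proxy_loss(e, p, y)))
```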
For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinaNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research. Code is available at https://github.com/whai362/PVT.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Normalization", "Subword Segmentation", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Image Classification", "Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["COCO minival"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions"} {"abstract": "We present Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point $(x, y)$, it generates a mask for the object located at $(x, y)$. The network adapts to the input point with the help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks, therefore it accurately segments objects of complex shape or severely occluded ones. AdaptIS can be easily combined with a standard semantic segmentation pipeline to perform panoptic segmentation. To illustrate the idea, we perform experiments on a challenging toy problem with difficult occlusions. Then we extensively evaluate the method on panoptic segmentation benchmarks. We obtain state-of-the-art results on Cityscapes and Mapillary even without pretraining on COCO, and show competitive results on a challenging COCO dataset. The source code of the method and the trained models are available at https://github.com/saic-vul/adaptis.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": ["ResNet", "ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Mapillary val", "Cityscapes val", "COCO test-dev"], "metric": ["PQst", "mIoU", "PQth", "PQ", "AP"], "title": "AdaptIS: Adaptive Instance Selection Network"} {"abstract": "Face detection has witnessed significant progress due to the advances of deep convolutional neural networks (CNNs). Its central issue in recent years is how to improve the detection performance of tiny faces. To this end, many recent works propose some specific strategies, redesign the architecture and introduce new loss functions for tiny object detection. In this report, we start from the popular one-stage RetinaNet approach and apply some recent tricks to obtain a high performance face detector. 
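The AdaptIS record above adapts the network to a clicked point through AdaIN layers. As a reference for what adaptive instance normalization does, here is a minimal sketch in which the point-conditioned scale and shift are simply passed in; it is illustrative only and not the authors' implementation.

```python
# Minimal sketch (assumption: illustrative) of adaptive instance normalization
# (AdaIN): feature maps are normalized per instance and then re-scaled with a
# scale/shift predicted elsewhere from an external input (here, passed in).
import torch


def adain(features, gamma, beta, eps=1e-5):
    """features: (B, C, H, W); gamma, beta: (B, C) conditioning parameters."""
    mean = features.mean(dim=(2, 3), keepdim=True)
    std = features.std(dim=(2, 3), keepdim=True) + eps
    normalized = (features - mean) / std
    return gamma[..., None, None] * normalized + beta[..., None, None]


if __name__ == "__main__":
    f = torch.randn(2, 8, 16, 16)
    g, b = torch.rand(2, 8), torch.zeros(2, 8)
    print(adain(f, g, b).shape)    # torch.Size([2, 8, 16, 16])
```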
Specifically, we apply the Intersection over Union (IoU) loss function for regression, employ the two-step classification and regression for detection, revisit the data augmentation based on data-anchor-sampling for training, utilize the max-out operation for classification and use the multi-scale testing strategy for inference. As a consequence, the proposed face detection method achieves state-of-the-art performance on the most popular and challenging face detection benchmark WIDER FACE dataset.", "field": ["Feature Extractors", "Convolutions", "Object Detection Models", "Loss Functions"], "task": ["Data Augmentation", "Face Detection", "Object Detection", "Regression"], "method": ["Focal Loss", "Feature Pyramid Network", "Convolution", "1x1 Convolution", "FPN", "RetinaNet"], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "Accurate Face Detection for High Performance"} {"abstract": "Recent advances in deep learning, especially deep convolutional neural\nnetworks (CNNs), have led to significant improvement over previous semantic\nsegmentation systems. Here we show how to improve pixel-wise semantic\nsegmentation by manipulating convolution-related operations that are of both\ntheoretical and practical value. First, we design dense upsampling convolution\n(DUC) to generate pixel-level prediction, which is able to capture and decode\nmore detailed information that is generally missing in bilinear upsampling.\nSecond, we propose a hybrid dilated convolution (HDC) framework in the encoding\nphase. This framework 1) effectively enlarges the receptive fields (RF) of the\nnetwork to aggregate global information; 2) alleviates what we call the\n\"gridding issue\" caused by the standard dilated convolution operation. We\nevaluate our approaches thoroughly on the Cityscapes dataset, and achieve a\nstate-of-art result of 80.1% mIOU in the test set at the time of submission. We\nalso have achieved state-of-the-art overall on the KITTI road estimation\nbenchmark and the PASCAL VOC2012 segmentation task. Our source code can be\nfound at https://github.com/TuSimple/TuSimple-DUC .", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Dilated Convolution", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)"], "title": "Understanding Convolution for Semantic Segmentation"} {"abstract": "Language representation models such as BERT could effectively capture contextual semantic information from plain text, and have been proved to achieve promising results in lots of downstream NLP tasks with appropriate fine-tuning. However, most existing language representation models cannot explicitly handle coreference, which is essential to the coherent understanding of the whole discourse. To address this issue, we present CorefBERT, a novel language representation model that can capture the coreferential relations in context. 
The experimental results show that, compared with existing baseline models, CorefBERT can achieve significant improvements consistently on various downstream NLP tasks that require coreferential reasoning, while maintaining comparable performance to previous models on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/CorefBERT.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Relation Extraction"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Coreferential Reasoning Learning for Language Representation"} {"abstract": "This paper describes a simple UCCA semantic graph parsing approach. The key idea is to convert a UCCA semantic graph into a constituent tree, in which extra labels are deliberately designed to mark remote edges and discontinuous nodes for future recovery. In this way, we can make use of existing syntactic parsing techniques. Based on the data statistics, we recover discontinuous nodes directly according to the output labels of the constituent parser and use a biaffine classification model to recover the more complex remote edges. The classification model and the constituent parser are simultaneously trained under the multi-task learning framework. We use the multilingual BERT as extra features in the open tracks. Our system ranks the first place in the six English/German closed/open tracks among seven participating systems. For the seventh cross-lingual track, where there is little training data for French, we propose a language embedding approach to utilize English and German training data, and our result ranks the second place.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Multi-Task Learning", "UCCA Parsing"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2019 Task 1"], "metric": ["English-20K (open) F1", "English-Wiki (open) F1"], "title": "HLT@SUDA at SemEval-2019 Task 1: UCCA Graph Parsing as Constituent Tree Parsing"} {"abstract": "Despite recent progress in generative image modeling, successfully generating\nhigh-resolution, diverse samples from complex datasets such as ImageNet remains\nan elusive goal. To this end, we train Generative Adversarial Networks at the\nlargest scale yet attempted, and study the instabilities specific to such\nscale. 
We find that applying orthogonal regularization to the generator renders\nit amenable to a simple \"truncation trick,\" allowing fine control over the\ntrade-off between sample fidelity and variety by reducing the variance of the\nGenerator's input. Our modifications lead to models which set the new state of\nthe art in class-conditional image synthesis. When trained on ImageNet at\n128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of\n166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous\nbest IS of 52.52 and FID of 18.6.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Regularization", "Attention Modules", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation", "Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Orthogonal Regularization", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "Convolution", "SAGAN Self-Attention Module", "ReLU", "Residual Connection", "Linear Layer", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Non-Local Block", "BigGAN-deep", "Softmax", "BigGAN", "Bottleneck Residual Block", "Residual Block", "Rectified Linear Units"], "dataset": ["ImageNet 128x128", "CIFAR-10"], "metric": ["Inception score", "FID", "IS"], "title": "Large Scale GAN Training for High Fidelity Natural Image Synthesis"} {"abstract": "Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. 
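The BigGAN record above controls the fidelity/variety trade-off with the "truncation trick", i.e., sampling generator inputs from a truncated normal distribution. Below is a small illustrative resampling sketch (the threshold value is an arbitrary assumption), not the authors' implementation.

```python
# Minimal sketch (assumption: illustrative) of the "truncation trick": latent
# entries are re-sampled until they fall inside a threshold, shrinking the
# effective variance of z and trading sample variety for fidelity.
import torch


def truncated_normal(shape, threshold=0.5):
    z = torch.randn(shape)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        # Re-sample only the out-of-range entries.
        z[mask] = torch.randn(int(mask.sum()))


if __name__ == "__main__":
    z = truncated_normal((4, 128), threshold=0.5)
    print(z.abs().max())    # no entry exceeds the threshold
```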
In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10% of the labels and outperform it using 20% of the labels.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Regularization", "Attention Modules", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation", "Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "Convolution", "SAGAN Self-Attention Module", "ReLU", "Residual Connection", "Linear Layer", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Non-Local Block", "Softmax", "BigGAN", "Residual Block", "Rectified Linear Units"], "dataset": ["ImageNet 128x128"], "metric": ["Inception score", "FID"], "title": "High-Fidelity Image Generation With Fewer Labels"} {"abstract": "This work presents a solution to the densely packed scenes dataset SKU-110k. Our method is modified from Cascade R-CNN. To solve the problem, we propose a random crop strategy that ensures both the sampling rate and the input scale are sufficient, in contrast to the regular random crop. We also adopt several tricks and optimize the hyper-parameters. To grasp the essential features of densely packed scenes, we analyze the stages of a detector and investigate the bottleneck that limits performance. As a result, our method obtains 58.7 mAP on the test set of SKU-110k.", "field": ["Object Detection Models"], "task": ["Dense Object Detection"], "method": ["Cascade R-CNN"], "dataset": ["SKU-110K"], "metric": ["AP75", "AP"], "title": "A Solution to Product detection in Densely Packed Scenes"} {"abstract": "Unsupervised image-to-image translation has gained considerable attention due\nto the recent impressive progress based on generative adversarial networks\n(GANs). However, previous methods often fail in challenging cases, in\nparticular, when an image has multiple target instances and a translation task\ninvolves significant changes in shape, e.g., translating pants to skirts in\nfashion images. To tackle the issues, we propose a novel method, coined\ninstance-aware GAN (InstaGAN), that incorporates the instance information\n(e.g., object segmentation masks) and improves multi-instance transfiguration.\nThe proposed method translates both an image and the corresponding set of\ninstance attributes while maintaining the permutation invariance property of\nthe instances. To this end, we introduce a context preserving loss that\nencourages the network to learn the identity function outside of target\ninstances. We also propose a sequential mini-batch inference/training technique\nthat handles multiple instances with a limited GPU memory and enhances the\nnetwork to generalize better for multiple instances.
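The context preserving loss just described lends itself to a compact illustration. Below is a hedged, hypothetical sketch (my simplification, not the authors' implementation): an L1 penalty between the input and the translated output, restricted to pixels outside every instance mask, which pushes the generator toward the identity mapping on background regions.

```python
import numpy as np

def context_preserving_loss(x, y, masks):
    """Sketch of a context preserving loss.
    x, y  : (H, W, C) float arrays, input image and translated output.
    masks : (N, H, W) binary instance masks of the target instances.
    Only background pixels (covered by no mask) contribute to the penalty."""
    background = 1.0 - np.clip(masks.sum(axis=0), 0.0, 1.0)          # (H, W)
    return float(np.mean(background[..., None] * np.abs(x - y)))
```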
Our comparative evaluation\ndemonstrates the effectiveness of the proposed method on different image\ndatasets, in particular, in the aforementioned challenging cases. Code and\nresults are available at https://github.com/sangwoomo/instagan", "field": ["Generative Models", "Convolutions"], "task": ["Image-to-Image Translation", "Semantic Segmentation", "Unsupervised Image-To-Image Translation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Object Transfiguration (sheep-to-giraffe)"], "metric": ["classification score"], "title": "InstaGAN: Instance-aware Image-to-Image Translation"} {"abstract": "In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theoretical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach which involves the use of the SVD for finding the low-dimensional projections from the PMI matrix, however, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other state-of-the-art models in such tasks.", "field": ["Generative Models"], "task": ["Denoising", "Graph Clustering"], "method": ["AutoEncoder", "Denoising Autoencoder"], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Deep neural networks for learning graph representations"} {"abstract": "Recent work has made significant progress in improving spatial resolution for\npixelwise labeling with the Fully Convolutional Network (FCN) framework by\nemploying Dilated/Atrous convolution, utilizing multi-scale features and\nrefining boundaries. In this paper, we explore the impact of global contextual\ninformation in semantic segmentation by introducing the Context Encoding\nModule, which captures the semantic context of scenes and selectively\nhighlights class-dependent featuremaps. The proposed Context Encoding Module\nsignificantly improves semantic segmentation results with only marginal extra\ncomputation cost over FCN. Our approach has achieved new state-of-the-art\nresults: 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single\nmodel achieves a final score of 0.5567 on the ADE20K test set, which surpasses the\nwinning entry of the COCO-Place Challenge in 2017. In addition, we also explore how\nthe Context Encoding Module can improve the feature representation of\nrelatively shallow networks for image classification on the CIFAR-10 dataset.\nOur 14 layer network has achieved an error rate of 3.45%, which is comparable\nwith state-of-the-art approaches with over 10 times more layers.
The source\ncode for the complete system is publicly available.", "field": ["Semantic Segmentation Models", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Max Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "SyncBN", "Synchronized Batch Normalization", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Fully Convolutional Network", "FCN"], "dataset": ["ADE20K", "PASCAL Context", "PASCAL VOC 2012 test", "ADE20K val"], "metric": ["Mean IoU", "Validation mIoU", "Test Score", "mIoU"], "title": "Context Encoding for Semantic Segmentation"} {"abstract": "We introduce effective training algorithms for Generative Adversarial\nNetworks (GAN) to alleviate mode collapse and gradient vanishing. In our\nsystem, we constrain the generator by an Autoencoder (AE). We propose a\nformulation to consider the reconstructed samples from AE as \"real\" samples for\nthe discriminator. This couples the convergence of the AE with that of the\ndiscriminator, effectively slowing down the convergence of the discriminator and\nreducing gradient vanishing. Importantly, we propose two novel distance\nconstraints to improve the generator. First, we propose a latent-data distance\nconstraint to enforce compatibility between the latent sample distances and the\ncorresponding data sample distances. We use this constraint to explicitly\nprevent the generator from mode collapse. Second, we propose a\ndiscriminator-score distance constraint to align the distribution of the\ngenerated samples with that of the real samples through the discriminator\nscore. We use this constraint to guide the generator to synthesize samples that\nresemble the real ones. Our proposed GAN using these distance constraints,\nnamely Dist-GAN, can achieve better results than state-of-the-art methods\nacross benchmark datasets: synthetic, MNIST, MNIST-1K, CelebA, CIFAR-10 and\nSTL-10 datasets. Our code is published here (https://github.com/tntrung/gan)\nfor research.", "field": ["Generative Models", "Dimensionality Reduction", "Convolutions"], "task": ["Image Generation"], "method": ["Generative Adversarial Network", "AE", "Autoencoders", "GAN", "Convolution", "AutoEncoder"], "dataset": ["STL-10", "CIFAR-10"], "metric": ["FID"], "title": "Dist-GAN: An Improved GAN using Distance Constraints"} {"abstract": "In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a clustering based regularization that encourages parsimonious representations. Our k-means style objective is easy to optimize and flexible, supporting various forms of clustering, including sample and spatial clustering as well as co-clustering.
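As a concrete picture of a k-means style regularizer of this kind, the sketch below (my own simplification, not the paper's exact objective) penalizes the squared distance of each feature vector to its nearest cluster center, paired with a plain center update for alternating optimization.

```python
import numpy as np

def clustering_regularizer(features, centers):
    """Sample-clustering variant: mean squared distance of each feature
    vector (N, D) to its nearest center (K, D). Returns (loss, assignments)."""
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K)
    assign = d2.argmin(axis=1)
    return d2[np.arange(len(features)), assign].mean(), assign

def update_centers(features, assign, k):
    """k-means style center update; assumes every cluster receives at least one sample."""
    return np.stack([features[assign == c].mean(axis=0) for c in range(k)])
```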
We demonstrate the effectiveness of our approach on the tasks of unsupervised learning, classification, fine grained categorization and zero-shot learning.", "field": ["Image Models"], "task": ["Few-Shot Image Classification", "Zero-Shot Learning"], "method": ["Interpretability"], "dataset": ["CUB-200 - 0-Shot Learning"], "metric": ["Accuracy"], "title": "Learning Deep Parsimonious Representations"} {"abstract": "Graph convolutional network (GCN) provides a powerful means for graph-based\nsemi-supervised tasks. However, as a localized first-order approximation of\nspectral graph convolution, the classic GCN can not take full advantage of\nunlabeled data, especially when the unlabeled node is far from labeled ones. To\ncapitalize on the information from unlabeled nodes to boost the training for\nGCN, we propose a novel framework named Self-Ensembling GCN (SEGCN), which\nmarries GCN with Mean Teacher - another powerful model in semi-supervised\nlearning. SEGCN contains a student model and a teacher model. As a student, it\nnot only learns to correctly classify the labeled nodes, but also tries to be\nconsistent with the teacher on unlabeled nodes in more challenging situations,\nsuch as a high dropout rate and graph collapse. As a teacher, it averages the\nstudent model weights and generates more accurate predictions to lead the\nstudent. In such a mutual-promoting process, both labeled and unlabeled samples\ncan be fully utilized for backpropagating effective gradients to train GCN. In\nthree article classification tasks, i.e. Citeseer, Cora and Pubmed, we validate\nthat the proposed method matches the state of the arts in the classification\naccuracy.", "field": ["Regularization", "Graph Models"], "task": ["Node Classification"], "method": ["Graph Convolutional Network", "Dropout", "GCN"], "dataset": ["Cora: fixed 20 node per class", "Cora", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Every Node Counts: Self-Ensembling Graph Convolutional Networks for Semi-Supervised Learning"} {"abstract": "Dialog is an effective way to exchange information, but subtle details and nuances are extremely important. While significant progress has paved a path to address visual dialog with algorithms, details and nuances remain a challenge. Attention mechanisms have demonstrated compelling results to extract details in visual question answering and also provide a convincing framework for visual dialog due to their interpretability and effectiveness. However, the many data utilities that accompany visual dialog challenge existing attention techniques. We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities. To this end, we design a factor graph based attention mechanism which combines any number of utility representations. We illustrate the applicability of the proposed approach on the challenging and recently introduced VisDial datasets, outperforming recent state-of-the-art methods by 1.1% for VisDial0.9 and by 2% for VisDial1.0 on MRR. 
Our ensemble model improved the MRR score on VisDial1.0 by more than 6%.", "field": ["Image Models"], "task": ["Question Answering", "Visual Dialog", "Visual Question Answering"], "method": ["Interpretability"], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Factor Graph Attention"} {"abstract": "Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to $O\\left(n^{1.5}d\\right)$ from $O\\left(n^2d\\right)$ for sequence length $n$ and hidden dimension $d$. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Attention Patterns"], "task": ["Image Generation", "Language Modelling"], "method": ["Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Routing Transformer", "ReLU", "Residual Connection", "Scaled Dot-Product Attention", "Routing Attention", "Dropout", "Dense Connections", "Rectified Linear Units"], "dataset": ["ImageNet 64x64", "enwik8", "WikiText-103"], "metric": ["Bit per Character (BPC)", "Test perplexity", "Bits per dim"], "title": "Efficient Content-Based Sparse Attention with Routing Transformers"} {"abstract": "Training with more data has always been the most stable and effective way of improving performance in the deep learning era. As the largest object detection dataset so far, Open Images brings great opportunities and challenges for object detection in general and sophisticated scenarios. However, owing to its semi-automatic collecting and labeling pipeline to deal with the huge data scale, the Open Images dataset suffers from label-related problems: objects may explicitly or implicitly have multiple labels, and the label distribution is extremely imbalanced. In this work, we quantitatively analyze these label problems and provide a simple but effective solution. We design a concurrent softmax to handle the multi-label problems in object detection and propose a soft-sampling method with a hybrid training scheduler to deal with the label imbalance.
Overall, our method yields a dramatic improvement of 3.34 points, leading to the best single model with 60.90 mAP on the public object detection test set of Open Images. And our ensembling result achieves 67.17 mAP, which is 4.29 points higher than the best result of Open Images public test 2018.", "field": ["Output Functions"], "task": ["Long-tail Learning", "Object Detection"], "method": ["Softmax"], "dataset": ["ImageNet-LT"], "metric": ["Per-Class Accuracy"], "title": "Large-Scale Object Detection in the Wild from Imbalanced Multi-Labels"} {"abstract": "Synthesizing realistic profile faces is promising for more efficiently training deep pose-invariant models for large-scale unconstrained face recognition, by populating samples with extreme poses and avoiding tedious annotations. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between distributions of the synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces, while preserving the identity information during the realism refinement. The dual agents are specifically designed for distinguishing real v.s. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose and texture, preserve identity and stabilize training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term. Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms state-of-the-arts on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark. In addition, the proposed DA-GAN is also promising as a new approach for solving generic transfer learning problems more effectively.", "field": ["Generative Models", "Convolutions"], "task": ["Face Generation", "Face Model", "Face Recognition", "Face Verification", "Transfer Learning"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis"} {"abstract": "Current methods for skeleton-based human action recognition usually work with complete skeletons. However, in real scenarios, it is inevitable to capture incomplete or noisy skeletons, which could significantly deteriorate the performance of current methods when some informative joints are occluded or disturbed. To improve the robustness of action recognition models, a multi-stream graph convolutional network (GCN) is proposed to explore sufficient discriminative features spreading over all skeleton joints, so that the distributed redundant representation reduces the sensitivity of the action models to non-standard skeletons. Concretely, the backbone GCN is extended by a series of ordered streams which is responsible for learning discriminative features from the joints less activated by preceding streams. 
Here, the activation degrees of skeleton joints of each GCN stream are measured by the class activation maps (CAM), and only the information from the unactivated joints will be passed to the next stream, by which rich features over all active joints are obtained. Thus, the proposed method is termed richly activated GCN (RA-GCN). Compared to the state-of-the-art (SOTA) methods, the RA-GCN achieves comparable performance on the standard NTU RGB+D 60 and 120 datasets. More crucially, on the synthetic occlusion and jittering datasets, the performance deterioration due to the occluded and disturbed joints can be significantly alleviated by utilizing the proposed RA-GCN.", "field": ["Graph Models"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Richly Activated Graph Convolutional Network for Robust Skeleton-based Action Recognition"} {"abstract": "Aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect term polarity inferring and ignores the significance of aspect term extraction. Besides, the existing researches do not pay attention to the research of the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model equips the capability of extracting aspect term and inferring aspect term polarity synchronously, moreover, this model is effective to analyze both Chinese and English comments simultaneously and the experiment on a multilingual mixed dataset proved its availability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved the state-of-the-art performance of aspect term extraction and aspect polarity classification in four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state-of-the-art performance on the ATE and APC subtask.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect-Based Sentiment Analysis", "Multi-Task Learning", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction"} {"abstract": "For real time applications utilizing Deep Neural Networks (DNNs), it is critical that the models achieve high-accuracy on the target task and low-latency inference on the target computing platform. 
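The class activation maps that RA-GCN (above) uses to decide which joints are already "activated" can be sketched roughly as follows. Weighting feature maps by the final classification layer is the standard CAM recipe; the thresholding rule and the `alpha` parameter are my own assumptions rather than the paper's exact criterion.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_id):
    """features  : (C, T, V) feature maps over time T and skeleton joints V from one stream.
    fc_weights : (num_classes, C) weights of the final classification layer.
    Returns a (T, V) activation map for the requested class."""
    return np.tensordot(fc_weights[class_id], features, axes=([0], [0]))

def inactive_joint_mask(cam, alpha=0.5):
    """Hypothetical rule: joints whose peak activation falls below alpha * max
    are treated as unactivated and passed on to the next stream."""
    joint_score = cam.max(axis=0)                 # (V,) peak activation per joint
    return joint_score < alpha * joint_score.max()
```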
While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort to use NAS to optimize DNN architectures for other vision tasks. In this work, we present what we believe to be the first proxyless hardware-aware search targeted for dense semantic segmentation. With this approach, we advance the state-of-the-art accuracy for latency-optimized networks on the Cityscapes semantic segmentation dataset. Our latency-optimized small SqueezeNAS network achieves 68.02% validation class mIOU with less than 35 ms inference times on the NVIDIA AGX Xavier. Our latency-optimized large SqueezeNAS network achieves 73.62% class mIOU with less than 100 ms inference times. We demonstrate that significant performance gains are possible by utilizing NAS to find networks optimized for both the specific task and inference hardware. We also present detailed analysis comparing our networks to recent state-of-the-art architectures.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Image Classification", "Neural Architecture Search", "Semantic Segmentation"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Cityscapes val", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "SqueezeNAS: Fast neural architecture search for faster semantic segmentation"} {"abstract": "This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference for Computational Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline system and winning system in the 2019 MRP shared task. Both are transition-based parsers using BERT contextualized embeddings. We generalized TUPA to support the newly-added MRP frameworks and languages, and experimented with multitask learning with the HIT-SCIR parser. We reached 4th place in both the cross-framework and cross-lingual tracks.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Regularization", "Learning Rate Schedules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Semantic Parsing"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["PTG (english, MRP 2020)", "PTG (czech, MRP 2020)", "DRG (english, MRP 2020)", "UCCA (english, MRP 2020)", "EDS (english, MRP 2020)", "UCCA (german, MRP 2020)", "DRG (german, MRP 2020)", "AMR (chinese, MRP 2020)", "AMR (english, MRP 2020)"], "metric": ["F1"], "title": "HUJI-KU at MRP~2020: Two Transition-based Neural Parsers"} {"abstract": "Sliding-window object detectors that generate bounding-box object predictions over a dense, regular grid have advanced rapidly and proven popular. In contrast, modern instance segmentation approaches are dominated by methods that first detect object bounding boxes, and then crop and segment these regions, as popularized by Mask R-CNN. In this work, we investigate the paradigm of dense sliding-window instance segmentation, which is surprisingly under-explored. 
Our core observation is that this task is fundamentally different than other dense prediction tasks such as semantic segmentation or bounding-box object detection, as the output at every spatial location is itself a geometric structure with its own spatial dimensions. To formalize this, we treat dense instance segmentation as a prediction task over 4D tensors and present a general framework called TensorMask that explicitly captures this geometry and enables novel operators on 4D tensors. We demonstrate that the tensor view leads to large gains over baselines that ignore this structure, and leads to results comparable to Mask R-CNN. These promising results suggest that TensorMask can serve as a foundation for novel advances in dense mask prediction and a more complete understanding of the task. Code will be made available.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "RoI Feature Extractors", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "RoIAlign", "Bottleneck Residual Block", "Mask R-CNN", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["mask AP"], "title": "TensorMask: A Foundation for Dense Object Segmentation"} {"abstract": "We propose a method for the weakly supervised detection of objects in\npaintings. At training time, only image-level annotations are needed. This,\ncombined with the efficiency of our multiple-instance learning method, enables\none to learn new classes on-the-fly from globally annotated databases, avoiding\nthe tedious task of manually marking objects. We show on several databases that\ndropping the instance-level annotations only yields mild performance losses. We\nalso introduce a new database, IconArt, on which we perform detection\nexperiments on classes that could not be learned on photographs, such as Jesus\nChild or Saint Sebastian. To the best of our knowledge, these are the first\nexperiments dealing with the automatic (and in our case weakly supervised)\ndetection of iconographic elements in paintings. We believe that such a method\nis of great benefit for helping art historians to explore large digital\ndatabases.", "field": ["Object Detection Models", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": ["RPN", "ResNet", "Average Pooling", "Faster R-CNN", "Softmax", "Batch Normalization", "RoIPool", "1x1 Convolution", "ReLU", "Convolution", "Residual Connection", "Bottleneck Residual Block", "Residual Network", "Region Proposal Network", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["IconArt", "Watercolor2k", "PeopleArt"], "metric": ["MAP"], "title": "Weakly Supervised Object Detection in Artworks"} {"abstract": "We present HERO, a novel framework for large-scale video+language omni-representation learning. 
HERO encodes multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer. In addition to standard Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) objectives, we design two new pre-training tasks: (i) Video-Subtitle Matching (VSM), where the model predicts both global and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the model predicts the right order of shuffled video frames. HERO is jointly trained on HowTo100M and large-scale TV datasets to gain deep understanding of complex social dynamics with multi-character interactions. Comprehensive experiments demonstrate that HERO achieves new state of the art on multiple benchmarks over Text-based Video/Video-moment Retrieval, Video Question Answering (QA), Video-and-language Inference and Video Captioning tasks across different domains. We also introduce two new challenging benchmarks How2QA and How2R for Video QA and Retrieval, collected from diverse video content over multimodalities.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Hierarchical structure", "Language Modelling", "Question Answering", "Representation Learning", "Video Captioning", "Video Question Answering", "Video Retrieval"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["TVR", "Howto100M-QA", "TVQA"], "metric": ["R@10", "R@1", "R@100", "Accuracy"], "title": "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training"} {"abstract": "Network architecture search (NAS) achieves state-of-the-art results in various tasks such as classification and semantic segmentation. Recently, a reinforcement learning-based approach has been proposed for Generative Adversarial Networks (GANs) search. In this work, we propose an alternative strategy for GAN search by using a method called DEGAS (Differentiable Efficient GenerAtor Search), which focuses on efficiently finding the generator in the GAN. Our search algorithm is inspired by the differential architecture search strategy and the Global Latent Optimization (GLO) procedure. This leads to both an efficient and stable GAN search. After the generator architecture is found, it can be plugged into any existing framework for GAN training. For CTGAN, which we use in this work, the new model outperforms the original inception score results by 0.25 for CIFAR-10 and 0.77 for STL. It also gets better results than the RL based GAN search methods in shorter search time.", "field": ["Generative Models", "Convolutions"], "task": ["Image Generation", "Neural Architecture Search"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["STL-10", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "DEGAS: Differentiable Efficient Generator Search"} {"abstract": "Purpose: Real-time surgical tool tracking is a core component of the future\nintelligent operating room (OR), because it is highly instrumental to analyze\nand understand the surgical activities. 
Current methods for surgical tool\ntracking in videos need to be trained on data in which the spatial positions of\nthe tools are manually annotated. Generating such training data is difficult\nand time-consuming. Instead, we propose to use solely binary presence\nannotations to train a tool tracker for laparoscopic videos. Methods: The\nproposed approach is composed of a CNN + Convolutional LSTM (ConvLSTM) neural\nnetwork trained end-to-end, but weakly supervised on tool binary presence\nlabels only. We use the ConvLSTM to model the temporal dependencies in the\nmotion of the surgical tools and leverage its spatio-temporal ability to smooth\nthe class peak activations in the localization heat maps (Lh-maps).\n Results: We build a baseline tracker on top of the CNN model and demonstrate\nthat our approach based on the ConvLSTM outperforms the baseline in tool\npresence detection, spatial localization, and motion tracking by over 5.0%,\n13.9%, and 12.6%, respectively.\n Conclusions: In this paper, we demonstrate that binary presence labels are\nsufficient for training a deep learning tracking model using our proposed\nmethod. We also show that the ConvLSTM can leverage the spatio-temporal\ncoherence of consecutive image frames across a surgical video to improve tool\npresence detection, spatial localization, and motion tracking.\n keywords: Surgical workflow analysis, tool tracking, weak supervision,\nspatio-temporal coherence, ConvLSTM, endoscopic videos", "field": ["Convolutions", "Activation Functions", "Recurrent Neural Networks"], "task": ["Instrument Recognition", "Object Detection", "Surgical tool detection", "Video Object Tracking", "Weakly-Supervised Object Localization"], "method": ["ConvLSTM", "Long Short-Term Memory", "Convolution", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Cholec80"], "metric": ["mAP"], "title": "Weakly Supervised Convolutional LSTM Approach for Tool Tracking in Laparoscopic Videos"} {"abstract": "In this work we present a new agent architecture, called Reactor, which\ncombines multiple algorithmic and architectural contributions to produce an\nagent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,\n2016) and Categorical DQN (Bellemare et al., 2017), while giving better\nrun-time performance than A3C (Mnih et al., 2016). Our first contribution is a\nnew policy evaluation algorithm called Distributional Retrace, which brings\nmulti-step off-policy updates to the distributional reinforcement learning\nsetting. The same approach can be used to convert several classes of multi-step\npolicy evaluation algorithms designed for expected value evaluation into\ndistributional ones. Next, we introduce the \\b{eta}-leave-one-out policy\ngradient algorithm which improves the trade-off between variance and bias by\nusing action values as a baseline. Our final algorithmic contribution is a new\nprioritized replay algorithm for sequences, which exploits the temporal\nlocality of neighboring observations for more efficient replay prioritization.\nUsing the Atari 2600 benchmarks, we show that each of these innovations\ncontribute to both the sample efficiency and final agent performance. 
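For readers unfamiliar with the ConvLSTM used in the tool-tracking record above, a single cell can be sketched in a few lines of PyTorch: the usual LSTM gates are computed by one convolution over the concatenated input and hidden state, so the recurrence preserves the spatial layout of the localization heat maps. This is a generic sketch with placeholder shapes, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """One step of a convolutional LSTM."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Unroll the cell over a short clip of feature maps (shapes are placeholders).
cell = ConvLSTMCell(in_channels=64, hidden_channels=32)
frames = torch.randn(5, 2, 64, 28, 28)            # (time, batch, C, H, W)
h = torch.zeros(2, 32, 28, 28)
c = torch.zeros(2, 32, 28, 28)
for t in range(frames.shape[0]):
    h, c = cell(frames[t], (h, c))
```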
Finally,\nwe demonstrate that Reactor reaches state-of-the-art performance after 200\nmillion frames and less than a day of training.", "field": ["Q-Learning Networks", "Policy Gradient Methods", "Output Functions", "Regularization", "Off-Policy TD Control", "Convolutions", "Feedforward Networks", "Value Function Estimation"], "task": ["Atari Games", "Distributional Reinforcement Learning"], "method": ["Q-Learning", "Retrace", "Softmax", "A3C", "Entropy Regularization", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Amidar", "Atari 2600 Beam Rider", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Assault", "Atari 2600 Bowling", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Crazy Climber", "Atari 2600 Asteroids", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Atlantis", "Atari 2600 Chopper Command", "Atari 2600 Centipede", "Atari 2600 Defender"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning"} {"abstract": "Sentence-level representations are necessary for various NLP tasks. Recurrent neural networks have proven to be very effective in learning distributed representations and can be trained efficiently on natural language inference tasks. We build on top of one such model and propose a hierarchy of BiLSTM and max pooling layers that implements an iterative refinement strategy and yields state of the art results on the SciTail dataset as well as strong results for SNLI and MultiNLI. We can show that the sentence embeddings learned in this way can be utilized in a wide variety of transfer learning tasks, outperforming InferSent on 7 out of 10 and SkipThought on 8 out of 9 SentEval sentence embedding evaluation tasks. Furthermore, our model beats the InferSent model in 8 out of 10 recently published SentEval probing tasks designed to evaluate sentence embeddings' ability to capture some of the important linguistic properties of sentences.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models", "Pooling Operations", "Bidirectional Recurrent Neural Networks"], "task": ["Natural Language Inference", "Sentence Embedding", "Sentence Embeddings", "Transfer Learning"], "method": ["HBMP", "Hierarchical BiLSTM Max Pooling", "Long Short-Term Memory", "BiLSTM", "Max Pooling", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["SciTail", "SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy", "Accuracy"], "title": "Sentence Embeddings in NLI with Iterative Refinement Encoders"} {"abstract": "We propose a simple yet efficient anchor-free instance segmentation, called CenterMask, that adds a novel spatial attention-guided mask (SAG-Mask) branch to anchor-free one stage object detector (FCOS) in the same vein with Mask R-CNN. Plugged into the FCOS object detector, the SAG-Mask branch predicts a segmentation mask on each box with the spatial attention map that helps to focus on informative pixels and suppress noise. We also present an improved backbone networks, VoVNetV2, with two effective strategies: (1) residual connection for alleviating the optimization problem of larger VoVNet \\cite{lee2019energy} and (2) effective Squeeze-Excitation (eSE) dealing with the channel information loss problem of original SE. 
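A rough sketch of the "hierarchy of BiLSTM and max pooling layers" from the iterative-refinement sentence-encoder record above, assuming three stages that each re-read the word embeddings, hand their final LSTM states to the next stage, and contribute a max-pooled vector to the concatenated sentence embedding; dimensions and the exact state handoff are assumptions on my part.

```python
import torch
import torch.nn as nn

class HierBiLSTMMaxPool(nn.Module):
    """Hierarchy of BiLSTM + max-pooling sentence encoders (sketch)."""

    def __init__(self, emb_dim: int = 300, hidden: int = 600, stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
             for _ in range(stages)])

    def forward(self, emb):                       # emb: (batch, seq_len, emb_dim)
        pooled, state = [], None
        for lstm in self.stages:
            out, state = lstm(emb, state)         # pass final states to the next stage
            pooled.append(out.max(dim=1).values)  # max over time: (batch, 2*hidden)
        return torch.cat(pooled, dim=-1)          # (batch, stages * 2 * hidden)

encoder = HierBiLSTMMaxPool()
sentence = encoder(torch.randn(4, 20, 300))       # -> shape (4, 3600)
```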
With SAG-Mask and VoVNetV2, we design CenterMask and CenterMask-Lite, which are targeted at large and small models, respectively. Using the same ResNet-101-FPN backbone, CenterMask achieves 38.3%, surpassing all previous state-of-the-art methods while running at a much faster speed. CenterMask-Lite also outperforms the state-of-the-art by large margins at over 35fps on Titan Xp. We hope that CenterMask and VoVNetV2 can serve as a solid baseline for real-time instance segmentation and as a backbone network for various vision tasks, respectively. The code is available at https://github.com/youngwanLEE/CenterMask.", "field": ["Proposal Filtering", "Convolutional Neural Networks", "Feature Extractors", "Normalization", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Mask Branches", "Stochastic Optimization", "Feedforward Networks", "Skip Connection Blocks", "Initialization", "Output Functions", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections", "Image Model Blocks"], "task": ["Instance Segmentation", "Object Detection", "Panoptic Segmentation", "Real-time Instance Segmentation", "Real-Time Object Detection", "Semantic Segmentation"], "method": ["Weight Decay", "Average Pooling", "VoVNetV2", "1x1 Convolution", "RoIAlign", "ResNet", "CenterMask", "Convolution", "ReLU", "Residual Connection", "FPN", "Effective Squeeze-and-Excitation Block", "One-Shot Aggregation", "Dense Connections", "Grouped Convolution", "Non Maximum Suppression", "Batch Normalization", "OSA (identity mapping + eSE)", "Residual Network", "Kaiming Initialization", "Sigmoid Activation", "VoVNet", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Softmax", "Feature Pyramid Network", "Concatenated Skip Connection", "Spatial Attention-Guided Mask", "Bottleneck Residual Block", "Mask R-CNN", "Residual Block", "FCOS", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO", "COCO minival", "COCO test-dev", "MSCOCO"], "metric": ["APM", "FPS", "box AP", "APS", "AP75", "APL", "AP50", "mask AP"], "title": "CenterMask : Real-Time Anchor-Free Instance Segmentation"} {"abstract": "Recurrent neural networks are a powerful tool for modeling sequential data,\nbut the dependence of each timestep's computation on the previous timestep's\noutput limits parallelism and makes RNNs unwieldy for very long sequences. We\nintroduce quasi-recurrent neural networks (QRNNs), an approach to neural\nsequence modeling that alternates convolutional layers, which apply in parallel\nacross timesteps, and a minimalist recurrent pooling function that applies in\nparallel across channels. Despite lacking trainable recurrent layers, stacked\nQRNNs have better predictive accuracy than stacked LSTMs of the same hidden\nsize. Due to their increased parallelism, they are up to 16 times faster at\ntrain and test time.
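The "minimalist recurrent pooling function" of the QRNN can be written in a few lines. The sketch below shows the simplest variant ("f-pooling") under my reading of the QRNN formulation: the convolutional layers produce a candidate sequence Z and forget gates F in parallel, and only this element-wise recurrence remains sequential.

```python
import numpy as np

def qrnn_f_pooling(Z, F):
    """Z, F: (T, B, C) float arrays, candidate vectors and forget gates
    (F already passed through a sigmoid by the convolutional layers).
    Recurrence per channel: h_t = f_t * h_{t-1} + (1 - f_t) * z_t."""
    T, B, C = Z.shape
    h = np.zeros((B, C))
    outputs = np.empty_like(Z)
    for t in range(T):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        outputs[t] = h
    return outputs
```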
Experiments on language modeling, sentiment\nclassification, and character-level neural machine translation demonstrate\nthese advantages and underline the viability of QRNNs as a basic building block\nfor a variety of sequence tasks.", "field": ["Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Word Embeddings", "Convolutions", "Skip Connections", "Image Model Blocks"], "task": ["Language Modelling", "Machine Translation", "Sentiment Analysis"], "method": ["Weight Decay", "RMSProp", "Adam", "Long Short-Term Memory", "Tanh Activation", "GloVe Embeddings", "Convolution", "ReLU", "GloVe", "Dense Block", "Masked Convolution", "Zoneout", "QRNN", "SGD", "Sigmoid Activation", "Stochastic Gradient Descent", "Quasi-Recurrent Neural Network", "Concatenated Skip Connection", "LSTM", "Dropout", "Rectified Linear Units"], "dataset": ["IWSLT2015 German-English"], "metric": ["BLEU score"], "title": "Quasi-Recurrent Neural Networks"} {"abstract": "Neural network-based representations (\"embeddings\") have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (e.g., ELMo, BERT) have further pushed the state-of-the-art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these new models for clinical concept extraction, including comparing these to traditional word embedding methods (word2vec, GloVe, fastText). Both off-the-shelf open-domain embeddings and pre-trained clinical embeddings from MIMIC-III are evaluated. We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings, and compare these on four concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We also analyze the impact of the pre-training time of a large language model like ELMo or BERT on the extraction performance. Last, we present an intuitive way to understand the semantic information encoded by contextual embeddings. Contextual embeddings pre-trained on a large clinical corpus achieves new state-of-the-art performances across all concept extraction tasks. The best-performing model outperforms all state-of-the-art methods with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. We demonstrate the potential of contextual embeddings through the state-of-the-art performance these methods achieve on clinical concept extraction. 
Additionally, we demonstrate contextual embeddings encode valuable semantic information not accounted for in traditional word representations.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Output Functions", "Subword Segmentation", "Word Embeddings", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Clinical Concept Extraction", "Language Modelling", "Word Embeddings"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "GloVe Embeddings", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Bidirectional LSTM", "Residual Connection", "GloVe", "Dense Connections", "ELMo", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["2010 i2b2/VA"], "metric": ["Exact Span F1"], "title": "Enhancing Clinical Concept Extraction with Contextual Embeddings"} {"abstract": "We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation. The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass. These aspects facilitate a search over all spans in the sentence. In ablation studies, we demonstrate the benefits of pre-training, strong negative sampling and localized context. Our model outperforms prior work by up to 2.6% F1 score on several datasets for joint entity and relation extraction.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Joint Entity and Relation Extraction", "Named Entity Recognition", "Relation Classification", "Relation Extraction"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SciERC", "ADE Corpus", "CoNLL04"], "metric": ["RE+ Micro F1", "Entity F1", "NER Macro F1", "RE+ Macro F1", "Relation F1", "F1", "NER Micro F1", "RE+ Macro F1 "], "title": "Span-based Joint Entity and Relation Extraction with Transformer Pre-training"} {"abstract": "Instance segmentation is an important task for scene understanding. Compared to the fully-developed 2D, 3D instance segmentation for point clouds have much room to improve. In this paper, we present PointGroup, a new end-to-end bottom-up architecture, specifically focused on better grouping the points by exploring the void space between objects. We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid. A clustering component is followed to utilize both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength. 
Further, we formulate the ScoreNet to evaluate the candidate instances, followed by Non-Maximum Suppression (NMS) to remove duplicates. We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by former best solutions in terms of mAP with IoU threshold 0.5.", "field": ["Convolutions"], "task": ["3D Instance Segmentation", "Instance Segmentation", "Scene Understanding", "Semantic Segmentation"], "method": ["Submanifold Convolution"], "dataset": ["ScanNet(v2)", "S3DIS"], "metric": ["mRec", "mAP", "Mean AP @ 0.5", "mPrec"], "title": "PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation"} {"abstract": "In this paper, we introduce Random Erasing, a new data augmentation method\nfor training the convolutional neural network (CNN). In training, Random\nErasing randomly selects a rectangle region in an image and erases its pixels\nwith random values. In this process, training images with various levels of\nocclusion are generated, which reduces the risk of over-fitting and makes the\nmodel robust to occlusion. Random Erasing is parameter learning free, easy to\nimplement, and can be integrated with most of the CNN-based recognition models.\nAlbeit simple, Random Erasing is complementary to commonly used data\naugmentation techniques such as random cropping and flipping, and yields\nconsistent improvement over strong baselines in image classification, object\ndetection and person re-identification. Code is available at:\nhttps://github.com/zhunzhong07/Random-Erasing.", "field": ["Image Data Augmentation"], "task": ["Data Augmentation", "Image Augmentation", "Image Classification", "Object Detection", "Person Re-Identification"], "method": ["Random Erasing"], "dataset": ["DukeMTMC-reID", "PASCAL VOC 2007", "Fashion-MNIST"], "metric": ["Percentage error", "Rank-1", "MAP"], "title": "Random Erasing Data Augmentation"} {"abstract": "Crowd counting or density estimation is a challenging task in computer vision due to large scale variations, perspective distortions and serious occlusions, etc. Existing methods generally suffer from two issues: 1) the model averaging effects in multi-scale CNNs induced by the widely adopted L2 regression loss; and 2) inconsistent estimation across different scaled inputs. To explicitly address these issues, we propose a novel crowd counting (density estimation) framework called Adversarial Cross-Scale Consistency Pursuit (ACSCP). On one hand, a U-net structured network is designed to generate a density map from an input patch, and an adversarial loss is employed to shrink the solution onto a realistic subspace, thus attenuating the blurry effects of density map estimation. On the other hand, we design a novel scale-consistency regularizer which enforces that the sum of the crowd counts from local patches (i.e., small scale) is coherent with the overall count of their region union (i.e., large scale). The above losses are integrated via a joint training scheme, so as to help boost density estimation performance by further exploring the collaboration between both objectives.
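Random Erasing, as described in the record above, is simple enough to sketch directly. The following NumPy version follows the rectangle-sampling recipe from the abstract; the area and aspect-ratio ranges are illustrative defaults and the pixel range is assumed to be [0, 255].

```python
import numpy as np

def random_erasing(img, p=0.5, area=(0.02, 0.4), aspect=(0.3, 3.3), rng=None):
    """Erase a random rectangle of a (H, W, C) float image with random values."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    if rng.random() > p:                         # apply with probability p
        return out
    H, W, C = out.shape
    for _ in range(100):                         # retry until the box fits
        target = rng.uniform(*area) * H * W      # target erased area
        ratio = rng.uniform(*aspect)             # target aspect ratio
        h = int(round(np.sqrt(target * ratio)))
        w = int(round(np.sqrt(target / ratio)))
        if h < H and w < W:
            top, left = rng.integers(0, H - h), rng.integers(0, W - w)
            out[top:top + h, left:left + w, :] = rng.uniform(0, 255, size=(h, w, C))
            return out
    return out
```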
Extensive experiments on four benchmarks have well demonstrated the effectiveness of the proposed innovations as well as the superior performance over prior art.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Crowd Counting", "Density Estimation", "Regression"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Crowd Counting via Adversarial Cross-Scale Consistency Pursuit"} {"abstract": "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7% (2.2% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at https://github.com/namisan/mt-dnn.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Domain Adaptation", "Language Modelling", "Linguistic Acceptability", "Natural Language Inference", "Natural Language Understanding", "Paraphrase Identification", "Sentiment Analysis"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["MultiNLI", "SST-2 Binary classification", "SNLI", "Quora Question Pairs", "CoLA", "SciTail"], "metric": ["% Test Accuracy", "Matched", "Parameters", "F1", "Accuracy", "Mismatched", "% Train Accuracy"], "title": "Multi-Task Deep Neural Networks for Natural Language Understanding"} {"abstract": "We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. 
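The ACSCP scale-consistency regularizer described above can be illustrated with a very small sketch, assuming that crowd counts are obtained by summing predicted density maps; this is my simplification, and the paper may weight or normalize the term differently.

```python
import numpy as np

def scale_consistency_loss(whole_density, patch_densities):
    """Penalize the gap between the count of the full-image density map and
    the summed counts of its non-overlapping patch-level density maps."""
    whole_count = float(whole_density.sum())
    patch_count = float(sum(p.sum() for p in patch_densities))
    return abs(whole_count - patch_count)
```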
This is achieved by: (a) predicting category-sensitive semantic maps for the possibility that an object exists, and (b) producing a category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations. In FoveaBox, an instance is assigned to adjacent feature levels to make the model more accurate. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance on the standard COCO and Pascal VOC object detection benchmark. More importantly, FoveaBox avoids all computation and hyper-parameters related to anchor boxes, which are often sensitive to the final detection performance. We believe the simple and effective approach will serve as a solid baseline and help ease future research for object detection. The code has been made publicly available at https://github.com/taokong/FoveaBox.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Proposal Filtering", "Learning Rate Schedules", "Stochastic Optimization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "ResNet", "FoveaBox", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Grouped Convolution", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "FoveaBox: Beyond Anchor-based Object Detector"} {"abstract": "We present a method for detecting objects in images using a single deep\nneural network. Our approach, named SSD, discretizes the output space of\nbounding boxes into a set of default boxes over different aspect ratios and\nscales per feature map location. At prediction time, the network generates\nscores for the presence of each object category in each default box and\nproduces adjustments to the box to better match the object shape. Additionally,\nthe network combines predictions from multiple feature maps with different\nresolutions to naturally handle objects of various sizes. Our SSD model is\nsimple relative to methods that require object proposals because it completely\neliminates proposal generation and the subsequent pixel or feature resampling stage\nand encapsulates all computation in a single network. This makes SSD easy to\ntrain and straightforward to integrate into systems that require a detection\ncomponent. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets\nconfirm that SSD has comparable accuracy to methods that utilize an additional\nobject proposal step and is much faster, while providing a unified framework\nfor both training and inference. Compared to other single stage methods, SSD\nhas much better accuracy, even with a smaller input image size. For $300\\times\n300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan\nX and for $500\\times 500$ input, SSD achieves 75.1% mAP, outperforming a\ncomparable state of the art Faster R-CNN model.
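As a concrete picture of the default-box scheme SSD describes above, here is a hedged sketch of box generation for a single square feature map. The scale and aspect-ratio values are illustrative placeholders; the paper assigns specific per-layer scales, so this is not a faithful reimplementation.

```python
import numpy as np

def default_boxes(fmap_size, scale, next_scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate default boxes as (cx, cy, w, h) in [0, 1] for one feature map.
    An extra box of scale sqrt(scale * next_scale) is added for aspect ratio 1."""
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                boxes.append([cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)])
            s_prime = np.sqrt(scale * next_scale)
            boxes.append([cx, cy, s_prime, s_prime])
    return np.clip(np.array(boxes), 0.0, 1.0)

# Example: a 5x5 feature map with placeholder scales.
priors = default_boxes(fmap_size=5, scale=0.2, next_scale=0.37)   # (100, 4)
```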
Code is available at\nhttps://github.com/weiliu89/caffe/tree/ssd .", "field": ["Regularization", "Proposal Filtering", "Stochastic Optimization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Object Detection Models"], "task": ["Object Detection"], "method": ["Weight Decay", "SGD with Momentum", "VGG", "Softmax", "Non Maximum Suppression", "SSD", "Convolution", "1x1 Convolution", "ReLU", "Dropout", "Dense Connections", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012", "PASCAL VOC 2007", "COCO test-dev"], "metric": ["AP50", "box AP", "AP75", "MAP"], "title": "SSD: Single Shot MultiBox Detector"} {"abstract": "We present a novel method for detecting 3D model instances and estimating\ntheir 6D poses from RGB data in a single shot. To this end, we extend the\npopular SSD paradigm to cover the full 6D pose space and train on synthetic\nmodel data only. Our approach competes or surpasses current state-of-the-art\nmethods that leverage RGB-D data on multiple challenging datasets. Furthermore,\nour method produces these results at around 10Hz, which is many times faster\nthan the related methods. For the sake of reproducibility, we make our trained\nnetworks and detection code publicly available.", "field": ["Convolutions", "Object Detection Models", "Proposal Filtering"], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation"], "method": ["1x1 Convolution", "Non Maximum Suppression", "SSD", "Convolution"], "dataset": ["OCCLUSION", "LineMOD", "Tejani"], "metric": ["Mean IoU", "VSS-3D", "VSS-2D", "IoU-3D", "MAP", "IoU-2D", "Mean ADD"], "title": "SSD-6D: Making RGB-based 3D detection and 6D pose estimation great again"} {"abstract": "This paper provides an extensive analysis of the performance of the EfficientNet image classifiers with several recent training procedures, in particular one that corrects the discrepancy between train and test images. The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters. For instance, our FixEfficientNet-B0 trained without additional training data achieves 79.3% top-1 accuracy on ImageNet with 5.3M parameters. This is a +0.5% absolute improvement over the Noisy student EfficientNet-B0 trained with 300M unlabeled images. An EfficientNet-L2 pre-trained with weak supervision on 300M unlabeled images and further optimized with FixRes achieves 88.5% top-1 accuracy (top-5: 98.7%), which establishes the new state of the art for ImageNet with a single crop. These improvements are thoroughly evaluated with cleaner protocols than the one usually employed for Imagenet, and particular we show that our improvement remains in the experimental setting of ImageNet-v2, that is less prone to overfitting, and with ImageNet Real Labels. 
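Both SSD records above list non-maximum suppression among their components. A plain NumPy version of greedy NMS over (x1, y1, x2, y2) boxes, given as a generic sketch; the threshold and example boxes are invented.

```python
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, thresh=0.5):
    order = np.argsort(-scores)          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thresh]  # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # keeps boxes 0 and 2; box 1 is suppressed by box 0
```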
In both cases we also establish the new state of the art.", "field": ["Image Data Augmentation", "Image Scaling Strategies", "Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Classification"], "method": ["Depthwise Convolution", "Average Pooling", "EfficientNet", "RMSProp", "1x1 Convolution", "Random Horizontal Flip", "Convolution", "ReLU", "Dense Connections", "Swish", "Random Resized Crop", "FixRes", "Batch Normalization", "Label Smoothing", "ColorJitter", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Sigmoid Activation", "Color Jitter", "Inverted Residual Block", "Dropout", "Depthwise Separable Convolution", "Rectified Linear Units"], "dataset": ["ImageNet ReaL", "ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "Params", "Accuracy", "Top 5 Accuracy"], "title": "Fixing the train-test resolution discrepancy: FixEfficientNet"} {"abstract": "Link prediction is critical for the application of incomplete knowledge graph (KG) in the downstream tasks. As a family of effective approaches for link predictions, embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function. Despite of their successful performances, existing bilinear forms overlook the modeling of relation compositions, resulting in lacks of interpretability for reasoning on KG. To fulfill this gap, we propose a new model called DihEdral, named after dihedral symmetry group. This new model learns knowledge graph embeddings that can capture relation compositions by nature. Furthermore, our approach models the relation embeddings parametrized by discrete values, thereby decrease the solution space drastically. Our experiments show that DihEdral is able to capture all desired properties such as (skew-) symmetry, inversion and (non-) Abelian composition, and outperforms existing bilinear form based approach and is comparable to or better than deep learning models such as ConvE.", "field": ["Image Models"], "task": ["Knowledge Graph Embeddings", "Link Prediction"], "method": ["Interpretability"], "dataset": ["WN18RR", "YAGO3-10"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Relation Embedding with Dihedral Group in Knowledge Graph"} {"abstract": "Relation classification is one of the most important tasks in the field of information extraction, and also a key component of systems that require relational understanding of unstructured text. Existing relation classification approaches mainly rely on exploiting external resources and background knowledge to improve the performance and ignore the correlations between entity pairs which are helpful for relation classification. We present the concept of entity pair graph to represent the correlations between entity pairs and propose a novel entity pair graph based neural network (EPGNN) model, relying on graph convolutional network to capture the topological features of an entity pair graph. EPGNN combines sentence semantic features generated by pre-trained BERT model with graph topological features for relation classification. Our proposed model makes full use of a given corpus and forgoes the need of external resources and background knowledge. 
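The DihEdral record above parameterizes each relation as an element of a dihedral group. A toy 2-dimensional version: relations are 2x2 rotation or reflection matrices and a triple is scored by the bilinear form h^T R t. The block size and scoring convention here are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def rotation(k, K):
    """Rotation element of the dihedral group D_K, as a 2x2 matrix."""
    a = 2 * np.pi * k / K
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def reflection(k, K):
    """Reflection element of D_K (orthogonal with determinant -1)."""
    a = 2 * np.pi * k / K
    return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])

def score(head, rel, tail):
    return float(head @ rel @ tail)       # bilinear scoring of a (h, r, t) triple

h = np.array([1.0, 0.0])
t = np.array([0.0, 1.0])
print(score(h, rotation(1, 4), t))        # -1.0 for the 90-degree rotation element
print(score(h, reflection(1, 4), t))      #  1.0 -- this reflection swaps the two axes
```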
The experimental results on two widely used dataset: SemEval 2010 Task 8 and ACE 2005, show that our method outperforms the state-of-the-art approaches.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Relation Classification", "Relation Extraction"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval-2010 Task 8"], "metric": ["F1"], "title": "Improving Relation Classification by Entity Pair Graph"} {"abstract": "Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the \"effective\" receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. 
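The EPGNN record above relies on a graph convolutional network over an entity-pair graph. A minimal NumPy sketch of one standard GCN propagation step, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); the toy graph and feature sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One graph-convolution step with the symmetric normalized adjacency."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Toy graph: 4 nodes (e.g. entity pairs), edges between pairs sharing an entity.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
H = rng.normal(size=(4, 8))      # node features (e.g. sentence encodings)
W = rng.normal(size=(8, 4))      # layer weights
print(gcn_layer(A, H, W).shape)  # (4, 4)
```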
In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.", "field": ["Object Detection Models", "Image Data Augmentation", "Initialization", "Output Functions", "Regularization", "Stochastic Optimization", "Feature Extractors", "Learning Rate Schedules", "RoI Feature Extractors", "Activation Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Region Proposal", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification", "Object Detection"], "method": ["Weight Decay", "Depthwise Convolution", "Cosine Annealing", "Faster R-CNN", "Average Pooling", "1x1 Convolution", "Region Proposal Network", "ResNet", "Random Horizontal Flip", "MobileNetV2", "RoIPool", "Convolution", "ReLU", "Residual Connection", "FPN", "Max Pooling", "RPN", "Random Resized Crop", "Batch Normalization", "Residual Network", "Pointwise Convolution", "Kaiming Initialization", "SGD with Momentum", "Inverted Residual Block", "Softmax", "Feature Pyramid Network", "Linear Warmup With Cosine Annealing", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Deformable Kernel"], "dataset": ["ImageNet", "COCO test-dev"], "metric": ["APM", "Top 1 Accuracy", "box AP", "APL", "APS"], "title": "Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation"} {"abstract": "Emotion detection in conversations (EDC) is to detect the emotion for each utterance in conversations that have multiple speakers. Different from the traditional non-conversational emotion detection, the model for EDC should be context-sensitive (e.g., understanding the whole conversation rather than one utterance) and speaker-sensitive (e.g., understanding which utterance belongs to which speaker). In this paper, we propose a transformer-based context- and speaker-sensitive model for EDC, namely HiTrans, which consists of two hierarchical transformers. We utilize BERT as the low-level transformer to generate local utterance representations, and feed them into another high-level transformer so that utterance representations could be sensitive to the global context of the conversation. Moreover, we exploit an auxiliary task to make our model speaker-sensitive, called pairwise utterance speaker verification (PUSV), which aims to classify whether two utterances belong to the same speaker. We evaluate our model on three benchmark datasets, namely EmoryNLP, MELD and IEMOCAP. 
Results show that our model outperforms previous state-of-the-art models.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Recognition in Conversation", "Speaker Verification"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["IEMOCAP", "MELD", "EmoryNLP"], "metric": ["Weighted Macro-F1", "F1"], "title": "HiTrans: A Transformer-Based Context- and Speaker-Sensitive Model for Emotion Detection in Conversations"} {"abstract": "We study pseudo-labeling for the semi-supervised training of ResNet, Time-Depth Separable ConvNets, and Transformers for speech recognition, with either CTC or Seq2Seq loss functions. We perform experiments on the standard LibriSpeech dataset, and leverage additional unlabeled data from LibriVox through pseudo-labeling. We show that while Transformer-based acoustic models have superior performance with the supervised dataset alone, semi-supervision improves all models across architectures and loss functions and bridges much of the performance gaps between them. In doing so, we reach a new state-of-the-art for end-to-end acoustic models decoded with an external language model in the standard supervised learning setting, and a new absolute state-of-the-art with semi-supervised training. Finally, we study the effect of leveraging different amounts of unlabeled audio, propose several ways of evaluating the characteristics of unlabeled audio which improve acoustic modeling, and show that acoustic models trained with more audio rely less on external language models.", "field": ["Initialization", "Convolutional Neural Networks", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Sequence To Sequence Models", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": ["Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Seq2Seq", "Batch Normalization", "Sequence to Sequence", "Residual Network", "Kaiming Initialization", "Sigmoid Activation", "Bottleneck Residual Block", "LSTM", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures"} {"abstract": "The goal of this paper is to advance the state-of-the-art of articulated pose\nestimation in scenes with multiple people. To that end we contribute on three\nfronts. We propose (1) improved body part detectors that generate effective\nbottom-up proposals for body parts; (2) novel image-conditioned pairwise terms\nthat allow to assemble the proposals into a variable number of consistent body\npart configurations; and (3) an incremental optimization strategy that explores\nthe search space more efficiently thus leading both to better performance and\nsignificant speed-up factors. 
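The semi-supervised speech-recognition record above builds on pseudo-labeling. A generic, illustrative loop on synthetic vectors using scikit-learn (an assumed dependency): train on labeled data, keep confident predictions on unlabeled data as pseudo-labels, and retrain. The classifier, data, and confidence threshold are placeholders, not the paper's acoustic models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for labeled and unlabeled examples (two Gaussian classes).
X_lab = np.vstack([rng.normal(-2, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([rng.normal(-2, 1, (200, 5)), rng.normal(2, 1, (200, 5))])

model = LogisticRegression().fit(X_lab, y_lab)

# Keep only unlabeled examples the current model is confident about.
proba = model.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.95
pseudo_y = proba.argmax(axis=1)[confident]

# Retrain on the union of labeled and pseudo-labeled data.
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, pseudo_y])
model = LogisticRegression().fit(X_aug, y_aug)
print(f"kept {confident.sum()} pseudo-labeled examples")
```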
Evaluation is done on two single-person and two\nmulti-person pose estimation benchmarks. The proposed approach significantly\noutperforms best known multi-person pose estimation results while demonstrating\ncompetitive performance on the task of single person pose estimation. Models\nand code available at http://pose.mpi-inf.mpg.de", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["MPII Multi-Person", "WAF", "Leeds Sports Poses", "MPII Human Pose"], "metric": ["AOP", "PCKh-0.5", "AP", "mAP@0.5", "PCK"], "title": "DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model"} {"abstract": "In recent years, deep learning-based networks have achieved state-of-the-art performance in medical image segmentation. Among the existing networks, U-Net has been successfully applied on medical image segmentation. In this paper, we propose an extension of U-Net, Bi-directional ConvLSTM U-Net with Densely connected convolutions (BCDU-Net), for medical image segmentation, in which we take full advantages of U-Net, bi-directional ConvLSTM (BConvLSTM) and the mechanism of dense convolutions. Instead of a simple concatenation in the skip connection of U-Net, we employ BConvLSTM to combine the feature maps extracted from the corresponding encoding path and the previous decoding up-convolutional layer in a non-linear way. To strengthen feature propagation and encourage feature reuse, we use densely connected convolutions in the last convolutional layer of the encoding path. Finally, we can accelerate the convergence speed of the proposed network by employing batch normalization (BN). The proposed model is evaluated on three datasets of: retinal blood vessel segmentation, skin lesion segmentation, and lung nodule segmentation, achieving state-of-the-art performance.", "field": ["Semantic Segmentation Models", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Lesion Segmentation", "Lung Nodule Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": ["U-Net", "ConvLSTM", "Concatenated Skip Connection", "Max Pooling", "Convolution", "Tanh Activation", "Batch Normalization", "ReLU", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["ISIC 2018", "Lung Nodule ", "LUNA", "DRIVE"], "metric": ["F1-Score", "AUC", "F1 score", "Dice Score"], "title": "Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions"} {"abstract": "In this work we present In-Place Activated Batch Normalization (InPlace-ABN)\n- a novel approach to drastically reduce the training memory footprint of\nmodern deep neural networks in a computationally efficient way. 
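The BCDU-Net record above reports Dice scores for its segmentation benchmarks. A short sketch of the standard Dice coefficient on binary masks, 2|A ∩ B| / (|A| + |B|); the masks here are toy arrays.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16 foreground pixels
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # 16 foreground pixels, 9 shared
print(dice(a, b))  # 2 * 9 / (16 + 16) = 0.5625
```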
Our solution\nsubstitutes the conventionally used succession of BatchNorm + Activation layers\nwith a single plugin layer, hence avoiding invasive framework surgery while\nproviding straightforward applicability for existing deep learning frameworks.\nWe obtain memory savings of up to 50% by dropping intermediate results and by\nrecovering required information during the backward pass through the inversion\nof stored forward results, with only minor increase (0.8-2%) in computation\ntime. Also, we demonstrate how frequently used checkpointing approaches can be\nmade computationally as efficient as InPlace-ABN. In our experiments on image\nclassification, we demonstrate on-par results on ImageNet-1k with\nstate-of-the-art approaches. On the memory-demanding task of semantic\nsegmentation, we report results for COCO-Stuff, Cityscapes and Mapillary\nVistas, obtaining new state-of-the-art results on the latter without additional\ntraining data but in a single-scale and -model scenario. Code can be found at\nhttps://github.com/mapillary/inplace_abn .", "field": ["Normalization"], "task": ["Image Classification", "Semantic Segmentation"], "method": ["In-Place Activated Batch Normalization", "InPlace-ABN", "Batch Normalization"], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "In-Place Activated BatchNorm for Memory-Optimized Training of DNNs"} {"abstract": "In this paper, we present a new Mask R-CNN based text detection approach\nwhich can robustly detect multi-oriented and curved text from natural scene\nimages in a unified manner. To enhance the feature representation ability of\nMask R-CNN for text detection tasks, we propose to use the Pyramid Attention\nNetwork (PAN) as a new backbone network of Mask R-CNN. Experiments demonstrate\nthat PAN can suppress false alarms caused by text-like backgrounds more\neffectively. Our proposed approach has achieved superior performance on both\nmulti-oriented (ICDAR-2015, ICDAR-2017 MLT) and curved (SCUT-CTW1500) text\ndetection benchmark tasks by only using single-scale and single-model testing.", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions", "Instance Segmentation Models"], "task": ["Curved Text Detection", "Scene Text", "Scene Text Detection"], "method": ["Mask R-CNN", "Softmax", "RoIAlign", "Convolution"], "dataset": ["ICDAR 2017 MLT", "ICDAR 2015", "SCUT-CTW1500"], "metric": ["F-Measure", "Recall", "Precision", "TIoU"], "title": "Mask R-CNN with Pyramid Attention Network for Scene Text Detection"} {"abstract": "Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources. While some models do well on individual tasks, developing general models is difficult as each task might require computationally expensive indexing of custom knowledge sources, in addition to dedicated infrastructure. To catalyze research on models that condition on specific information in large textual resources, we present a benchmark for knowledge-intensive language tasks (KILT). All tasks in KILT are grounded in the same snapshot of Wikipedia, reducing engineering turnaround through the re-use of components, as well as accelerating research into task-agnostic memory architectures. We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance. 
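The InPlace-ABN record above recovers information in the backward pass by inverting stored forward results instead of caching intermediates. A toy NumPy illustration of that inversion for a leaky ReLU placed after a BatchNorm affine transform; this is only a sketch of the idea, not the library's fused implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def inv_leaky_relu(y, slope=0.01):
    return np.where(y >= 0, y, y / slope)    # invertible, unlike plain ReLU

rng = np.random.default_rng(0)
x_hat = rng.normal(size=1000)                # normalized activations (pretend BN output)
gamma, beta = 1.5, -0.3                      # BN affine parameters

y = leaky_relu(gamma * x_hat + beta)         # only y needs to be stored

# Backward pass: reconstruct x_hat from the stored output instead of caching it.
x_hat_rec = (inv_leaky_relu(y) - beta) / gamma
print(np.allclose(x_hat, x_hat_rec))         # True
```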
We find that a shared dense vector index coupled with a seq2seq model is a strong baseline, outperforming more tailor-made approaches for fact checking, open-domain question answering and dialogue, and yielding competitive results on entity linking and slot filling, by generating disambiguated text. KILT data and code are available at https://github.com/facebookresearch/KILT.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Entity Linking", "Fact Verification", "Open-Domain Dialog", "Open-Domain Question Answering", "Question Answering", "Slot Filling"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["KILT: WNED-WIKI", "KILT: Wizard of Wikipedia", "KILT: HotpotQA", "KILT: FEVER", "KILT: Natural Questions", "KILT: AIDA-YAGO2", "KILT: ELI5", "KILT: Zero Shot RE", "KILT: T-REx", "KILT: WNED-CWEB", "KILT: TriviaQA"], "metric": ["KILT-EM", "Recall@5", "EM", "F1", "KILT-F1", "R-Prec", "KILT-RL", "ROUGE-L", "Accuracy", "KILT-AC"], "title": "KILT: a Benchmark for Knowledge Intensive Language Tasks"} {"abstract": "Data mixing augmentation has proved effective in training deep models. Recent methods mix labels mainly based on the mixture proportion of image pixels. As the main discriminative information of a fine-grained image usually resides in subtle regions, methods along this line are prone to heavy label noise in fine-grained recognition. We propose in this paper a novel scheme, termed as Semantically Proportional Mixing (SnapMix), which exploits class activation map (CAM) to lessen the label noise in augmenting fine-grained data. SnapMix generates the target label for a mixed image by estimating its intrinsic semantic composition, and allows for asymmetric mixing operations and ensures semantic correspondence between synthetic images and target labels. Experiments show that our method consistently outperforms existing mixed-based approaches on various datasets and under different network depths. Furthermore, by incorporating the mid-level features, the proposed SnapMix achieves top-level performance, demonstrating its potential to serve as a solid baseline for fine-grained recognition. Our code is available at https://github.com/Shaoli-Huang/SnapMix.git.", "field": ["Graph Embeddings"], "task": ["Fine-Grained Image Classification", "Semantic Composition"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data"} {"abstract": "Although the inherently ambiguous task of predicting what resides beyond all four edges of an image has rarely been explored before, we demonstrate that GANs hold powerful potential in producing reasonable extrapolations. Two outpainting methods are proposed that aim to instigate this line of research: the first approach uses a context encoder inspired by common inpainting architectures and paradigms, while the second approach adds an extra post-processing step using a single-image generative model. 
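The SnapMix record above weights mixed labels by the semantic composition of the mixed image, estimated from class activation maps. A toy reading of that idea, assuming the CAMs are already given and the patch is pasted at the same location in both images; the paper's actual target construction (cross-region pasting, normalization details) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy class activation maps (non-negative) for two images.
cam_a = rng.random((8, 8))
cam_b = rng.random((8, 8))

# Paste an interior patch of image B over the same region of image A.
r0, r1, c0, c1 = 2, 6, 2, 6

# Weight each label by how much of its image's CAM mass ends up in the mix.
w_a = 1.0 - cam_a[r0:r1, c0:c1].sum() / cam_a.sum()
w_b = cam_b[r0:r1, c0:c1].sum() / cam_b.sum()
print(round(w_a, 3), round(w_b, 3))   # asymmetric label weights for the mixed image
```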
This way, the hallucinated details are integrated with the style of the original image, in an attempt to further boost the quality of the result and possibly allow for arbitrary output resolutions to be supported.", "field": ["Graph Embeddings"], "task": ["Conditional Image Generation", "Image Outpainting"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["Places365-Standard"], "metric": ["MSE", "L1"], "title": "Image Outpainting and Harmonization using Generative Adversarial Networks"} {"abstract": "Recently, Barbu et al introduced a dataset called ObjectNet which includes objects in daily life situations. They showed a dramatic performance drop of the state of the art object recognition models on this dataset. Due to the importance and implications of their results regarding generalization ability of deep models, we take a second look at their findings. We highlight a major problem with their work which is applying object recognizers to the scenes containing multiple objects rather than isolated objects. The latter results in around 20-30% performance gain using our code. Compared with the results reported in the ObjectNet paper, we observe that around 10-15 % of the performance loss can be recovered, without any test time data augmentation. In accordance with Barbu et al.'s conclusions, however, we also conclude that deep models suffer drastically on this dataset. Thus, we believe that ObjectNet remains a challenging dataset for testing the generalization power of models beyond datasets on which they have been trained.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Classification", "Object Recognition"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ObjectNet (Bounding Box)"], "metric": ["Top 5 Accuracy"], "title": "ObjectNet Dataset: Reanalysis and Correction"} {"abstract": "Recommender systems (RS) are ubiquitous in the digital space. This paper develops a deep learning-based approach to address three practical challenges in RS: complex structures of high-dimensional data, noise in relational information, and the black-box nature of machine learning algorithms. Our method\u2014Multi-Graph Graph Attention Network (MG-GAT)\u2014learns latent user and business representations by aggregating a diverse set of information from neighbors of each user (business) on a neighbor importance graph. MG-GAT out-performs state-of-the-art deep learning models in the recommendation task using two large-scale datasets collected from Yelp and four other standard datasets in RS. The improved performance highlights MG-GAT\u2019s advantage in incorporating multi-modal features in a principled manner. The features importance, neighbor importance graph and latent representations reveal business insights on predictive features and explainable characteristics of business and users. Moreover, the learned neighbor importance graph can be used in a variety of management applications, such as targeting customers, promoting new businesses, and designing information acquisition strategies. 
Our paper presents a quintessential big data application of deep learning models in management while providing interpretability essential for real-world decision-making.", "field": ["Graph Models"], "task": ["Decision Making", "Recommendation Systems"], "method": ["Graph Attention Network", "GAT"], "dataset": ["YahooMusic Monti", "Douban Monti", "MovieLens 100K", "Flixster Monti"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Interpretable Recommender System With Heterogeneous Information: A Geometric Deep Learning Perspective"} {"abstract": "Image-level weakly-supervised semantic segmentation (WSSS) aims at learning semantic segmentation by adopting only image class labels. Existing approaches generally rely on class activation maps (CAM) to generate pseudo-masks and then train segmentation models. The main difficulty is that the CAM estimate only covers partial foreground objects. In this paper, we argue that the critical factor preventing to obtain the full object mask is the classification boundary mismatch problem in applying the CAM to WSSS. Because the CAM is optimized by the classification task, it focuses on the discrimination across different image-level classes. However, the WSSS requires to distinguish pixels sharing the same image-level class to separate them into the foreground and the background. To alleviate this contradiction, we propose an efficient end-to-end Intra-Class Discriminator (ICD) framework, which learns intra-class boundaries to help separate the foreground and the background within each image-level class. Without bells and whistles, our approach achieves the state-of-the-art performance of image label based WSSS, with mIoU 68.0% on the VOC 2012 semantic segmentation benchmark, demonstrating the effectiveness of the proposed approach.\r", "field": ["Interpretability"], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": ["Class-activation map", "CAM"], "dataset": ["PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Learning Integral Objects With Intra-Class Discriminator for Weakly-Supervised Semantic Segmentation"} {"abstract": "Neural Architecture Search (NAS) has gained attraction due to superior classification performance. Differential Architecture Search (DARTS) is a computationally light method. To limit computational resources DARTS makes numerous approximations. These approximations result in inferior performance. We propose to fine-tune DARTS using fixed operations as they are independent of these approximations. Our method offers a good trade-off between the number of parameters and classification accuracy. Our approach improves the top-1 accuracy on Fashion-MNIST, CompCars, and MIO-TCD datasets by 0.56%, 0.50%, and 0.39%, respectively compared to the state-of-the-art approaches. 
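The weakly-supervised segmentation record above builds on class activation maps. A compact sketch of the standard CAM computation from the final convolutional features and one class's classifier weights; the shapes and random tensors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

features = rng.random((256, 14, 14))    # final conv features (C, H, W)
W_cls = rng.normal(size=(10, 256))      # classifier weights over pooled features

def class_activation_map(features, w_class):
    C, H, W = features.shape
    cam = w_class @ features.reshape(C, H * W)   # weighted sum of feature maps
    cam = cam.reshape(H, W)
    cam = np.maximum(cam, 0)                     # keep positive evidence only
    return cam / (cam.max() + 1e-7)              # normalize to [0, 1]

cam = class_activation_map(features, W_cls[3])   # CAM for class index 3
print(cam.shape, float(cam.max()))               # (14, 14) and a max of about 1.0
```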
Our approach performs better than DARTS, improving the accuracy by 0.28%, 1.64%, 0.34%, 4.5%, and 3.27% compared to DARTS, on CIFAR-10, CIFAR-100, Fashion-MNIST, CompCars, and MIO-TCD datasets, respectively.", "field": ["Policy Gradient Methods", "Regularization", "Output Functions", "Recurrent Neural Networks", "Activation Functions", "Neural Architecture Search"], "task": ["Fine-Grained Image Classification", "Image Classification", "Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Entropy Regularization", "Tanh Activation", "Differentiable Architecture Search", "LSTM", "PPO", "Proximal Policy Optimization", "Neural Architecture Search", "DARTS", "Sigmoid Activation"], "dataset": ["CompCars", "Fashion-MNIST"], "metric": ["Percentage error", "Accuracy"], "title": "Fine-Tuning DARTS for Image Classification"} {"abstract": "We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at https://github.com/facebookresearch/detr.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Object Detection Models"], "task": ["Object Detection", "Panoptic Segmentation"], "method": ["Detection Transformer", "Feedforward Network", "Byte Pair Encoding", "BPE", "Layer Normalization", "Adam", "Softmax", "Multi-Head Attention", "Transformer", "Convolution", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units", "Detr"], "dataset": ["COCO minival", "COCO panoptic"], "metric": ["PQst", "RQ", "SQst", "APM", "RQth", "RQst", "SQth", "PQth", "box AP", "PQ", "SQ", "AP75", "APS", "APL", "AP", "AP50"], "title": "End-to-End Object Detection with Transformers"} {"abstract": "In generalized zero shot learning (GZSL), the set of classes are split into\nseen and unseen classes, where training relies on the semantic features of the\nseen and unseen classes and the visual representations of only the seen\nclasses, while testing uses the visual representations of the seen and unseen\nclasses. 
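The DETR record above enforces unique predictions through bipartite matching between predicted and ground-truth objects. A minimal sketch with SciPy's Hungarian solver; the cost combines a class-probability term with an L1 box term, and every number here is invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

num_queries, num_gt = 5, 2
pred_boxes = rng.random((num_queries, 4))           # predicted (cx, cy, w, h), normalized
pred_prob = rng.dirichlet(np.ones(3), num_queries)  # class probabilities per query

gt_boxes = rng.random((num_gt, 4))
gt_labels = np.array([0, 2])

# Matching cost: negative probability of the true class plus an L1 box distance.
cls_cost = -pred_prob[:, gt_labels]                                   # (queries, gt)
box_cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
cost = cls_cost + box_cost

rows, cols = linear_sum_assignment(cost)   # one prediction per ground-truth object
print(list(zip(rows, cols)))               # matched (query_index, gt_index) pairs
```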
Current methods address GZSL by learning a transformation from the\nvisual to the semantic space, exploring the assumption that the distribution of\nclasses in the semantic and visual spaces is relatively similar. Such methods\ntend to transform unseen testing visual representations into one of the seen\nclasses' semantic features instead of the semantic features of the correct\nunseen class, resulting in low accuracy GZSL classification. Recently,\ngenerative adversarial networks (GAN) have been explored to synthesize visual\nrepresentations of the unseen classes from their semantic features - the\nsynthesized representations of the seen and unseen classes are then used to\ntrain the GZSL classifier. This approach has been shown to boost GZSL\nclassification accuracy, however, there is no guarantee that synthetic visual\nrepresentations can generate back their semantic feature in a multi-modal\ncycle-consistent manner. This constraint can result in synthetic visual\nrepresentations that do not represent well their semantic features. In this\npaper, we propose the use of such constraint based on a new regularization for\nthe GAN training that forces the generated visual features to reconstruct their\noriginal semantic features. Once our model is trained with this multi-modal\ncycle-consistent semantic compatibility, we can then synthesize more\nrepresentative visual representations for the seen and, more importantly, for\nthe unseen classes. Our proposed approach shows the best GZSL classification\nresults in the field in several publicly available datasets.", "field": ["Generative Models", "Convolutions"], "task": ["Generalized Zero-Shot Learning", "Zero-Shot Learning"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["SUN Attribute", "CUB-200-2011"], "metric": ["average top-1 classification accuracy", "Harmonic mean"], "title": "Multi-modal Cycle-consistent Generalized Zero-Shot Learning"} {"abstract": "The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of $E(2)$-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group $E(2)$ and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. $E(2)$-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions.", "field": ["Convolutions"], "task": ["Image Classification"], "method": ["Convolution"], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "General $E(2)$-Equivariant Steerable CNNs"} {"abstract": "To recognize the unseen classes with only few samples, few-shot learning (FSL) uses prior knowledge learned from the seen classes. 
A major challenge for FSL is that the distribution of the unseen classes is different from that of those seen, resulting in poor generalization even when a model is meta-trained on the seen classes. This class-difference-caused distribution shift can be considered as a special case of domain shift. In this paper, for the first time, we propose a domain adaptation prototypical network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework. Specifically, armed with a set transformer based attention module, we construct each episode with two sub-episodes without class overlap on the seen classes to simulate the domain shift between the seen and unseen classes. To align the feature distributions of the two sub-episodes with limited training samples, a feature transfer network is employed together with a margin disparity discrepancy (MDD) loss. Importantly, theoretical analysis is provided to give the learning bound of our DAPNA. Extensive experiments show that our DAPNA outperforms the state-of-the-art FSL alternatives, often by significant margins.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Domain Adaptation", "Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "Mini-ImageNet-CUB 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Few-Shot Learning as Domain Adaptation: Algorithm and Analysis"} {"abstract": "We consider the task of mapping pseudocode to long programs that are functionally correct. Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation. However, without proper credit assignment to localize the sources of program failures, it is difficult to guide search toward more promising programs. We propose to perform credit assignment based on signals from compilation errors, which constitute 88.7% of program failures. Concretely, we treat the translation of each pseudocode line as a discrete portion of the program, and whenever a synthesized program fails to compile, an error localization method tries to identify the portion of the program responsible for the failure. We then focus search over alternative translations of the pseudocode for those portions. For evaluation, we collected the SPoC dataset (Search-based Pseudocode to Code) containing 18,356 programs with human-authored pseudocode and test cases. 
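The DAPNA record above builds on a prototypical network for few-shot classification. A small NumPy sketch of one episode: class prototypes are the mean support embeddings, and queries are classified by a softmax over negative squared distances; the embeddings are random placeholders rather than outputs of a trained backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_way, k_shot, dim = 5, 5, 64
support = rng.normal(size=(n_way, k_shot, dim))   # embedded support set
queries = rng.normal(size=(10, dim))              # embedded query points

prototypes = support.mean(axis=1)                 # (n_way, dim) class prototypes

# Negative squared Euclidean distance to each prototype, turned into probabilities.
d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
logits = -d2
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape, probs.argmax(axis=1))          # (10, 5) and a predicted class per query
```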
Under a budget of 100 program compilations, performing search improves the synthesis success rate over using the top-one translation of the pseudocode from 25.6% to 44.7%.", "field": ["Graph Embeddings"], "task": ["Program Synthesis"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["SPoC TestP", "SPoC TestW"], "metric": ["Success rate @budget 100"], "title": "SPoC: Search-based Pseudocode to Code"} {"abstract": "Panoptic segmentation requires segments of both \"things\" (countable object instances) and \"stuff\" (uncountable and amorphous regions) within a single output. A common approach involves the fusion of instance segmentation (for \"things\") and semantic segmentation (for \"stuff\") into a non-overlapping placement of segments, and resolves overlaps. However, instance ordering with detection confidence do not correlate well with natural occlusion relationship. To resolve this issue, we propose a branch that is tasked with modeling how two instance masks should overlap one another as a binary relation. Our method, named OCFusion, is lightweight but particularly effective in the instance fusion process. OCFusion is trained with the ground truth relation derived automatically from the existing dataset annotations. We obtain state-of-the-art results on COCO and show competitive results on the Cityscapes panoptic segmentation benchmark.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": ["ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["COCO test-dev"], "metric": ["PQst", "PQ", "PQth"], "title": "Learning Instance Occlusion for Panoptic Segmentation"} {"abstract": "Semantic Segmentation using deep convolutional neural network pose more\ncomplex challenge for any GPU intensive task. As it has to compute million of\nparameters, it results to huge memory consumption. Moreover, extracting finer\nfeatures and conducting supervised training tends to increase the complexity.\nWith the introduction of Fully Convolutional Neural Network, which uses finer\nstrides and utilizes deconvolutional layers for upsampling, it has been a go to\nfor any image segmentation task. In this paper, we propose two segmentation\narchitecture which not only needs one-third the parameters to compute but also\ngives better accuracy than the similar architectures. The model weights were\ntransferred from the popular neural net like VGG19 and VGG16 which were trained\non Imagenet classification data-set. Then we transform all the fully connected\nlayers to convolutional layers and use dilated convolution for decreasing the\nparameters. Lastly, we add finer strides and attach four skip architectures\nwhich are element-wise summed with the deconvolutional layers in steps. 
We\ntrain and test on different sparse and fine data-sets like Pascal VOC2012,\nPascal-Context and NYUDv2 and show how better our model performs in this tasks.\nOn the other hand our model has a faster inference time and consumes less\nmemory for training and testing on NVIDIA Pascal GPUs, making it more efficient\nand less memory consuming architecture for pixel-wise segmentation.", "field": ["Convolutions"], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": ["Dilated Convolution", "Convolution"], "dataset": ["NYU Depth v2", "PASCAL Context", "PASCAL VOC 2012 test"], "metric": ["Mean IoU", "mIoU"], "title": "Efficient Yet Deep Convolutional Neural Networks for Semantic Segmentation"} {"abstract": "Scene parsing is challenging for unrestricted open vocabulary and diverse\nscenes. In this paper, we exploit the capability of global context information\nby different-region-based context aggregation through our pyramid pooling\nmodule together with the proposed pyramid scene parsing network (PSPNet). Our\nglobal prior representation is effective to produce good quality results on the\nscene parsing task, while PSPNet provides a superior framework for pixel-level\nprediction tasks. The proposed approach achieves state-of-the-art performance\non various datasets. It came first in ImageNet scene parsing challenge 2016,\nPASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new\nrecord of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on\nCityscapes.", "field": ["Image Data Augmentation", "Semantic Segmentation Models", "Semantic Segmentation Modules", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Initialization", "Activation Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Pooling Operations", "Skip Connection Blocks", "Skip Connections", "Miscellaneous Components"], "task": ["Lesion Segmentation", "Real-Time Semantic Segmentation", "Scene Parsing", "Semantic Segmentation", "Video Semantic Segmentation"], "method": ["Weight Decay", "Dilated Convolution", "Average Pooling", "Polynomial Rate Decay", "1x1 Convolution", "Pyramid Pooling Module", "ResNet", "PSPNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "RandomRotate", "Fully Convolutional Network", "FCN", "Batch Normalization", "Residual Network", "Kaiming Initialization", "SGD with Momentum", "Random Gaussian Blur", "Auxiliary Classifier", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes val", "PASCAL VOC 2012 test", "ADE20K", "ADE20K val", "Anatomical Tracings of Lesions After Stroke (ATLAS) ", "CamVid", "NYU Depth v2", "PASCAL Context", "Cityscapes test"], "metric": ["Speed(ms/f)", "Validation mIoU", "Recall", "Time (ms)", "Mean IoU", "Precision", "mIoU", "Dice", "Mean IoU (class)", "IoU", "Frame (fps)", "Test Score"], "title": "Pyramid Scene Parsing Network"} {"abstract": "Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. 
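The PSPNet record above aggregates global context with a pyramid pooling module. A condensed sketch using PyTorch functional ops (an assumed dependency): pool the feature map to several bin sizes, upsample each back, and concatenate with the input; the 1x1 convolutions that normally follow each pooled branch are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def pyramid_pooling(x, bins=(1, 2, 3, 6)):
    """x: (N, C, H, W) -> concatenation of x with pooled-and-upsampled copies."""
    n, c, h, w = x.shape
    branches = [x]
    for b in bins:
        pooled = F.adaptive_avg_pool2d(x, b)                      # (N, C, b, b)
        up = F.interpolate(pooled, size=(h, w), mode="bilinear",
                           align_corners=False)                   # back to (N, C, H, W)
        branches.append(up)
    return torch.cat(branches, dim=1)                             # (N, C * (1 + len(bins)), H, W)

x = torch.randn(1, 8, 24, 24)
print(pyramid_pooling(x).shape)   # torch.Size([1, 40, 24, 24])
```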
Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.", "field": ["Image Data Augmentation", "Regularization"], "task": ["Image Classification"], "method": ["Manifold Mixup", "Mixup"], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Manifold Mixup: Better Representations by Interpolating Hidden States"} {"abstract": "In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its \\textit{quality}. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at \\url{https://github.com/zhaoweicai/cascade-rcnn} (Caffe) and \\url{https://github.com/zhaoweicai/Detectron-Cascade-RCNN} (Detectron).", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Instance Segmentation Models", "Object Detection Models"], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Softmax", "Convolution", "RoIAlign", "Mask R-CNN", "Cascade R-CNN"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Cascade R-CNN: High Quality Object Detection and Instance Segmentation"} {"abstract": "This paper presents a state-of-the-art model for visual question answering\n(VQA), which won the first place in the 2017 VQA Challenge. VQA is a task of\nsignificant importance for research in artificial intelligence, given its\nmultimodal nature, clear evaluation protocol, and potential real-world\napplications. 
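The Manifold Mixup record above interpolates hidden representations and their labels. A few-line NumPy sketch of the interpolation itself, with a Beta-distributed mixing coefficient; the hidden states and one-hot labels are placeholders for activations taken at a randomly chosen layer.

```python
import numpy as np

rng = np.random.default_rng(0)

h = rng.normal(size=(16, 128))               # a batch of hidden representations
y = np.eye(10)[rng.integers(0, 10, 16)]      # matching one-hot labels

lam = rng.beta(2.0, 2.0)                     # mixing coefficient
perm = rng.permutation(len(h))               # pair each example with a shuffled partner

h_mix = lam * h + (1 - lam) * h[perm]        # interpolate hidden states...
y_mix = lam * y + (1 - lam) * y[perm]        # ...and soft labels the same way
print(h_mix.shape, y_mix.sum(axis=1)[:3])    # (16, 128) and rows that still sum to 1
```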
The performance of deep neural networks for VQA is very dependent\non choices of architectures and hyperparameters. To help further research in\nthe area, we describe in detail our high-performing, though relatively simple\nmodel. Through a massive exploration of architectures and hyperparameters\nrepresenting more than 3,000 GPU-hours, we identified tips and tricks that lead\nto its success, namely: sigmoid outputs, soft training targets, image features\nfrom bottom-up attention, gated tanh activations, output embeddings initialized\nusing GloVe and Google Images, large mini-batches, and smart shuffling of\ntraining data. We provide a detailed analysis of their impact on performance to\nassist others in making an appropriate selection.", "field": ["Word Embeddings"], "task": ["Visual Question Answering"], "method": ["GloVe Embeddings", "GloVe"], "dataset": ["VQA v2 test-std", "VQA v2 test-dev"], "metric": ["overall", "Accuracy"], "title": "Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge"} {"abstract": "Collectible card games are played by tens of millions of players worldwide. Their intricate rules and diverse cards make them much harder than traditional card games. To win, players must be proficient in two interdependent tasks: deck building and battling. In this paper, we present a deep reinforcement learning approach for deck building in arena mode - an understudied game mode present in many collectible card games. In arena, the players build decks immediately before battling by drafting one card at a time from randomly presented candidates. We investigate three variants of the approach and perform experiments on Legends of Code and Magic, a collectible card game designed for AI research. Results show that our learned draft strategies outperform those of the best agents of the game. Moreover, a participant of the Strategy Card Game AI competition improves from tenth to fourth place when coupled with our best draft agent.", "field": ["Recurrent Neural Networks", "Activation Functions", "Policy Gradient Methods", "Regularization"], "task": ["Card Games"], "method": ["Long Short-Term Memory", "Entropy Regularization", "Tanh Activation", "LSTM", "PPO", "Proximal Policy Optimization", "Sigmoid Activation"], "dataset": ["Legends of Code and Magic (Self-play)"], "metric": ["Win rate"], "title": "Drafting in Collectible Card Games via Reinforcement Learning"} {"abstract": "BiLSTM has been prevalently used as a core module for NER in a sequence-labeling setup. State-of-the-art approaches use BiLSTM with additional resources such as gazetteers, language-modeling, or multi-task supervision to further improve NER. This paper instead takes a step back and focuses on analyzing problems of BiLSTM itself and how exactly self-attention can bring improvements. We formally show the limitation of (CRF-)BiLSTM in modeling cross-context patterns for each word -- the XOR limitation. Then, we show that two types of simple cross-structures -- self-attention and Cross-BiLSTM -- can effectively remedy the problem. We test the practical impacts of the deficiency on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and consistent improvements over the baseline, up to 8.7% on some of the multi-token entity mentions. We give in-depth analyses of the improvements across several aspects of NER, especially the identification of multi-token mentions. This study should lay a sound foundation for future improvements on sequence-labeling NER. 
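The VQA record above credits gated tanh activations as one of its tricks. A tiny NumPy version of a gated tanh layer, y = tanh(Wx + b) * sigmoid(W'x + b'); the weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_tanh(x, W, b, W_gate, b_gate):
    y_tilde = np.tanh(x @ W + b)                          # candidate activation
    gate = 1.0 / (1.0 + np.exp(-(x @ W_gate + b_gate)))   # sigmoid gate in (0, 1)
    return y_tilde * gate

d_in, d_out = 32, 16
x = rng.normal(size=(4, d_in))
params = [rng.normal(size=(d_in, d_out)), np.zeros(d_out),
          rng.normal(size=(d_in, d_out)), np.zeros(d_out)]
print(gated_tanh(x, *params).shape)   # (4, 16)
```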
(Source codes: https://github.com/jacobvsdanniel/cross-ner)", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Named Entity Recognition"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["Long-tail emerging entities", "Ontonotes v5 (English)"], "metric": ["Precision", "Recall", "F1"], "title": "Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER"} {"abstract": "Long document coreference resolution remains a challenging task due to the large memory and runtime requirements of current models. Recent work doing incremental coreference resolution using just the global representation of entities shows practical benefits but requires keeping all entities in memory, which can be impractical for long documents. We argue that keeping all entities in memory is unnecessary, and we propose a memory-augmented neural network that tracks only a small bounded number of entities at a time, thus guaranteeing a linear runtime in length of document. We show that (a) the model remains competitive with models with high memory and computational requirements on OntoNotes and LitBank, and (b) the model learns an efficient memory management strategy easily outperforming a rule-based strategy.", "field": ["Working Memory Models"], "task": ["Coreference Resolution"], "method": ["Memory Network"], "dataset": ["OntoNotes", "CoNLL 2012"], "metric": ["Avg F1", "F1"], "title": "Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks"} {"abstract": "AMR-to-text generation aims to recover a text containing the same meaning as an input AMR graph. Current research develops increasingly powerful graph encoders to better represent AMR graphs, with decoders based on standard language modeling being used to generate outputs. We propose a decoder that back predicts projected AMR graphs on the target sentence during text generation. As the result, our outputs can better preserve the input meaning than standard decoders. Experiments on two AMR benchmarks show the superiority of our model over the previous state-of-the-art system based on graph Transformer.", "field": ["Attention Modules", "Output Functions", "Stochastic Optimization", "Regularization", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["AMR-to-Text Generation", "Data-to-Text Generation", "Language Modelling", "Text Generation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["LDC2017T10"], "metric": ["BLEU"], "title": "Online Back-Parsing for AMR-to-Text Generation"} {"abstract": "Sparsity learning aims to decrease the computational and memory costs of large deep neural networks (DNNs) via pruning neural connections while simultaneously retaining high accuracy. A large body of work has developed sparsity learning approaches, with recent large-scale experiments showing that two main methods, magnitude pruning and Variational Dropout (VD), achieve similar state-of-the-art results for classification tasks. 
We propose Adaptive Neural Connections (ANC), a method for explicitly parameterizing fine-grained neuron-to-neuron connections via adjacency matrices at each layer that are learned through backpropagation. Explicitly parameterizing neuron-to-neuron connections confers two primary advantages: 1. Sparsity can be explicitly optimized for via norm-based regularization on the adjacency matrices; and 2. When combined with VD (which we term, ANC-VD), the adjacencies can be interpreted as learned weight importance parameters, which we hypothesize leads to improved convergence for VD. Experiments with ResNet18 show that architectures augmented with ANC outperform their vanilla counterparts.", "field": ["Regularization"], "task": ["Model Compression", "Network Pruning", "Sparse Learning"], "method": ["Variational Dropout", "Dropout"], "dataset": ["CINIC-10", "ImageNet32"], "metric": ["Sparsity"], "title": "Adaptive Neural Connections for Sparsity Learning"} {"abstract": "Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost. We open our entire codebase at: https://github.com/meijieru/AtomNAS.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "MACs", "Params", "Accuracy"], "title": "AtomNAS: Fine-Grained End-to-End Neural Architecture Search"} {"abstract": "The purpose of this study is to determine whether current video datasets have\nsufficient data for training very deep convolutional neural networks (CNNs)\nwith spatio-temporal three-dimensional (3D) kernels. Recently, the performance\nlevels of 3D CNNs in the field of action recognition have improved\nsignificantly. However, to date, conventional research has only explored\nrelatively shallow 3D architectures. We examine the architectures of various 3D\nCNNs from relatively shallow to very deep ones on current video datasets. Based\non the results of those experiments, the following conclusions could be\nobtained: (i) ResNet-18 training resulted in significant overfitting for\nUCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics\ndataset has sufficient data for training of deep 3D CNNs, and enables training\nof up to 152 ResNets layers, interestingly similar to 2D ResNets on ImageNet.\nResNeXt-101 achieved 78.4% average accuracy on the Kinetics test set. 
(iii)\nKinetics pretrained simple 3D architectures outperforms complex 2D\narchitectures, and the pretrained ResNeXt-101 achieved 94.5% and 70.2% on\nUCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has\nproduced significant progress in various tasks in image. We believe that using\ndeep 3D CNNs together with Kinetics will retrace the successful history of 2D\nCNNs and ImageNet, and stimulate advances in computer vision for videos. The\ncodes and pretrained models used in this study are publicly available.\nhttps://github.com/kenshohara/3D-ResNets-PyTorch", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition"], "method": ["ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?"} {"abstract": "Non-local self-similarity is well-known to be an effective prior for the image denoising problem. However, little work has been done to incorporate it in convolutional neural networks, which surpass non-local model-based methods despite only exploiting local information. In this paper, we propose a novel end-to-end trainable neural network architecture employing layers based on graph convolution operations, thereby creating neurons with non-local receptive fields. The graph convolution operation generalizes the classic convolution to arbitrary graphs. In this work, the graph is dynamically computed from similarities among the hidden features of the network, so that the powerful representation learning capabilities of the network are exploited to uncover self-similar patterns. We introduce a lightweight Edge-Conditioned Convolution which addresses vanishing gradient and over-parameterization issues of this particular graph convolution. Extensive experiments show state-of-the-art performance with improved qualitative and quantitative results on both synthetic Gaussian noise and real noise.", "field": ["Convolutions"], "task": ["Denoising", "Image Denoising", "Representation Learning"], "method": ["Convolution"], "dataset": ["Urban100 sigma50", "Urban100 sigma25", "Set12 sigma50", "BSD68 sigma50", "Set12 sigma15", "BSD68 sigma15", "Urban100 sigma15", "Set12 sigma25", "BSD68 sigma25"], "metric": ["PSNR"], "title": "Deep Graph-Convolutional Image Denoising"} {"abstract": "In this paper, we present an efficient neural network for end-to-end general purpose audio source separation. Specifically, the backbone structure of this convolutional network is the SUccessive DOwnsampling and Resampling of Multi-Resolution Features (SuDoRMRF) as well as their aggregation which is performed through simple one-dimensional convolutions. In this way, we are able to obtain high quality audio source separation with limited number of floating point operations, memory requirements, number of parameters and latency. 
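A rough sketch of the dynamic-graph step in the graph-convolutional denoising record above: a k-nearest-neighbour graph is built on the fly from feature similarity, and each feature vector is mixed with the mean of its neighbours. This covers only the neighbourhood construction and aggregation, not the full Edge-Conditioned Convolution; the feature dimension and k are assumptions.

```python
import torch
import torch.nn as nn

class KNNFeatureAggregation(nn.Module):
    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        self.k = k
        self.mix = nn.Linear(2 * channels, channels)

    def forward(self, x):                        # x: (num_nodes, channels)
        with torch.no_grad():                    # graph topology itself is not differentiated
            dist = torch.cdist(x, x)             # pairwise Euclidean distances
            dist.fill_diagonal_(float("inf"))    # exclude self-loops
            idx = dist.topk(self.k, largest=False).indices   # (num_nodes, k)
        neighbours = x[idx]                      # (num_nodes, k, channels)
        agg = neighbours.mean(dim=1)             # non-local neighbourhood average
        return self.mix(torch.cat([x, agg], dim=-1))

if __name__ == "__main__":
    feats = torch.randn(256, 32)                 # e.g. 256 patch features of dimension 32
    print(KNNFeatureAggregation(32)(feats).shape)   # torch.Size([256, 32])
```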
Our experiments on both speech and environmental sound separation datasets show that SuDoRMRF performs comparably and even surpasses various state-of-the-art approaches with significantly higher computational resource requirements.", "field": ["Convolutions"], "task": ["Audio Source Separation", "Speech Separation"], "method": ["Depthwise Convolution"], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Sudo rm -rf: Efficient Networks for Universal Audio Source Separation"} {"abstract": "Traditionally, most data-to-text applications have been designed using a modular pipeline architecture, in which non-linguistic input data is converted into natural language through several intermediate transformations. In contrast, recent neural models for data-to-text generation have been proposed as end-to-end approaches, where the non-linguistic input is rendered in natural language with much less explicit intermediate representations in-between. This study introduces a systematic comparison between neural pipeline and end-to-end data-to-text approaches for the generation of text from RDF triples. Both architectures were implemented making use of state-of-the art deep learning methods as the encoder-decoder Gated-Recurrent Units (GRU) and Transformer. Automatic and human evaluations together with a qualitative analysis suggest that having explicit intermediate steps in the generation process results in better texts than the ones generated by end-to-end approaches. Moreover, the pipeline models generalize better to unseen inputs. Data and code are publicly available.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Data-to-Text Generation", "Text Generation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["WebNLG Full"], "metric": ["BLEU"], "title": "Neural data-to-text generation: A comparison between pipeline and end-to-end architectures"} {"abstract": "Deep networks excel in learning patterns from large amounts of data. On the other hand, many geometric vision tasks are specified as optimization problems. To seamlessly combine deep learning and geometric vision, it is vital to perform learning and geometric optimization end-to-end. Towards this aim, we present BPnP, a novel network module that backpropagates gradients through a Perspective-n-Points (PnP) solver to guide parameter updates of a neural network. Based on implicit differentiation, we show that the gradients of a \"self-contained\" PnP solver can be derived accurately and efficiently, as if the optimizer block were a differentiable function. We validate BPnP by incorporating it in a deep model that can learn camera intrinsics, camera extrinsics (poses) and 3D structure from training datasets. Further, we develop an end-to-end trainable pipeline for object pose estimation, which achieves greater accuracy by combining feature-based heatmap losses with 2D-3D reprojection errors. Since our approach can be extended to other optimization problems, our work helps to pave the way to perform learnable geometric vision in a principled manner. 
Our PyTorch implementation of BPnP is available on http://github.com/BoChenYS/BPnP.", "field": ["Output Functions"], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Regression"], "method": ["Heatmap"], "dataset": ["LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)", "Accuracy"], "title": "End-to-End Learnable Geometric Vision by Backpropagating PnP Optimization"} {"abstract": "Action recognition with skeleton data is a challenging task in computer vision. Graph convolutional networks (GCNs), which directly model the human body skeletons as the graph structure, have achieved remarkable performance. However, current architectures of GCNs are limited to the small receptive field of convolution filters, only capturing local physical dependencies among joints and using all skeleton data indiscriminately. To address these limitations and to achieve a flexible graph representation of the skeleton features, we propose a novel semantics-guided graph convolutional network (Sem-GCN) for skeleton-based action recognition. Three types of semantic graph modules (structural graph extraction module, actional graph inference module and attention graph iteration module) are employed in Sem-GCN to aggregate L-hop joint neighbors' information, to capture action-specific latent dependencies and to distribute importance level. Combing these semantic graphs into a generalized skeleton graph, we further propose the semantics-guided graph convolution block, which stacks semantic graph convolution and temporal convolution, to learn both semantic and temporal features for action recognition. Experimental results demonstrate the effectiveness of our proposed model on the widely used NTU and Kinetics datasets.", "field": ["Convolutions"], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": ["Convolution"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "A Semantics-Guided Graph Convolutional Network for Skeleton-Based Action Recognition"} {"abstract": "This paper presents SO-Net, a permutation invariant architecture for deep\nlearning with orderless point clouds. The SO-Net models the spatial\ndistribution of point cloud by building a Self-Organizing Map (SOM). Based on\nthe SOM, SO-Net performs hierarchical feature extraction on individual points\nand SOM nodes, and ultimately represents the input point cloud by a single\nfeature vector. The receptive field of the network can be systematically\nadjusted by conducting point-to-node k nearest neighbor search. In recognition\ntasks such as point cloud reconstruction, classification, object part\nsegmentation and shape retrieval, our proposed network demonstrates performance\nthat is similar with or better than state-of-the-art approaches. In addition,\nthe training speed is significantly faster than existing point cloud\nrecognition networks because of the parallelizability and simplicity of the\nproposed architecture. Our code is available at the project website.\nhttps://github.com/lijx10/SO-Net", "field": ["Clustering"], "task": ["3D Part Segmentation", "3D Point Cloud Classification"], "method": ["Self-Organizing Map", "SOM"], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Instance Average IoU"], "title": "SO-Net: Self-Organizing Network for Point Cloud Analysis"} {"abstract": "DNNs have been quickly and broadly exploited to improve the data analysis\nquality in many complex science and engineering applications. 
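A toy sketch of the Self-Organizing Map that the SO-Net record above fits to a point cloud: a small grid of nodes is pulled towards the points it wins, with a Gaussian neighbourhood that shrinks over training. The grid size, learning-rate schedule, and neighbourhood schedule are illustrative assumptions.

```python
import numpy as np

def train_som(points, grid=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = rng.uniform(points.min(0), points.max(0), size=(grid * grid, points.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid)), -1).reshape(-1, 2)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        p = points[rng.integers(len(points))]               # one random input point
        bmu = np.argmin(((nodes - p) ** 2).sum(1))          # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(1)           # grid distance to the BMU
        h = np.exp(-d2 / (2 * sigma ** 2))[:, None]         # neighbourhood weights
        nodes += lr * h * (p - nodes)                       # pull nodes towards the point
    return nodes

if __name__ == "__main__":
    cloud = np.random.default_rng(1).normal(size=(1024, 3)).astype(np.float32)
    print(train_som(cloud).shape)    # (64, 3): an 8x8 SOM laid over the point cloud
```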
Today's DNNs are\nbecoming deeper and wider because of increasing demand on the analysis quality\nand more and more complex applications to resolve. The wide and deep DNNs,\nhowever, require large amounts of resources, significantly restricting their\nutilization on resource-constrained systems. Although some network\nsimplification methods have been proposed to address this issue, they suffer\nfrom either low compression ratios or high compression errors, which may\nintroduce a costly retraining process for the target accuracy. In this paper,\nwe propose DeepSZ: an accuracy-loss bounded neural network compression\nframework, which involves four key steps: network pruning, error bound\nassessment, optimization for error bound configuration, and compressed model\ngeneration, featuring a high compression ratio and low encoding time. The\ncontribution is three-fold. (1) We develop an adaptive approach to select the\nfeasible error bounds for each layer. (2) We build a model to estimate the\noverall loss of accuracy based on the accuracy degradation caused by individual\ndecompressed layers. (3) We develop an efficient optimization algorithm to\ndetermine the best-fit configuration of error bounds in order to maximize the\ncompression ratio under the user-set accuracy constraint. Experiments show that\nDeepSZ can compress AlexNet and VGG-16 on the ImageNet by a compression ratio\nof 46X and 116X, respectively, and compress LeNet-300-100 and LeNet-5 on the\nMNIST by a compression ratio of 57X and 56X, respectively, with only up to 0.3%\nloss of accuracy. Compared with other state-of-the-art methods, DeepSZ can\nimprove the compression ratio by up to 1.43X, the DNN encoding performance by\nup to 4.0X (with four Nvidia Tesla V100 GPUs), and the decoding performance by\nup to 6.2X.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Network Pruning", "Neural Network Compression"], "method": ["Grouped Convolution", "Softmax", "Convolution", "1x1 Convolution", "ReLU", "Rectified Linear Units", "AlexNet", "Dropout", "Dense Connections", "Local Response Normalization", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["All"], "title": "DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression"} {"abstract": "Depthwise separable convolutions reduce the number of parameters and\ncomputation used in convolutional operations while increasing representational\nefficiency. They have been shown to be successful in image classification\nmodels, both in obtaining better models than previously possible for a given\nparameter count (the Xception architecture) and considerably reducing the\nnumber of parameters required to perform at a given level (the MobileNets\nfamily of architectures). Recently, convolutional sequence-to-sequence networks\nhave been applied to machine translation tasks with good results. In this work,\nwe study how depthwise separable convolutions can be applied to neural machine\ntranslation. We introduce a new architecture inspired by Xception and ByteNet,\ncalled SliceNet, which enables a significant reduction of the parameter count\nand amount of computation needed to obtain results like ByteNet, and, with a\nsimilar parameter count, achieves new state-of-the-art results. 
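A minimal sketch of the depthwise separable convolution discussed above: a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, compared with a standard convolution of the same input/output shape to show the parameter reduction. The channel sizes and kernel width are illustrative assumptions.

```python
import torch
import torch.nn as nn

def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

in_ch, out_ch, k = 256, 256, 3
standard = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)
separable = nn.Sequential(
    nn.Conv1d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise: one filter per channel
    nn.Conv1d(in_ch, out_ch, 1),                               # pointwise: mixes channels
)

x = torch.randn(2, in_ch, 50)            # (batch, channels, sequence length)
assert standard(x).shape == separable(x).shape
print(count_params(standard), "vs", count_params(separable))   # roughly 197k vs 67k parameters
```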
In addition to\nshowing that depthwise separable convolutions perform well for machine\ntranslation, we investigate the architectural changes that they enable: we\nobserve that thanks to depthwise separability, we can increase the length of\nconvolution windows, removing the need for filter dilation. We also introduce a\nnew \"super-separable\" convolution operation that further reduces the number of\nparameters and computational cost for obtaining state-of-the-art results.", "field": ["Convolutions"], "task": ["Machine Translation"], "method": ["Convolution"], "dataset": ["WMT2014 English-German"], "metric": ["BLEU score"], "title": "Depthwise Separable Convolutions for Neural Machine Translation"} {"abstract": "Current end-to-end machine reading and question answering (Q\\&A) models are\nprimarily based on recurrent neural networks (RNNs) with attention. Despite\ntheir success, these models are often slow for both training and inference due\nto the sequential nature of RNNs. We propose a new Q\\&A architecture called\nQANet, which does not require recurrent networks: Its encoder consists\nexclusively of convolution and self-attention, where convolution models local\ninteractions and self-attention models global interactions. On the SQuAD\ndataset, our model is 3x to 13x faster in training and 4x to 9x faster in\ninference, while achieving equivalent accuracy to recurrent models. The\nspeed-up gain allows us to train the model with much more data. We hence\ncombine our model with data generated by backtranslation from a neural machine\ntranslation model. On the SQuAD dataset, our single model, trained with\naugmented data, achieves 84.6 F1 score on the test set, which is significantly\nbetter than the best published F1 score of 81.8.", "field": ["Convolutions"], "task": ["Machine Translation", "Question Answering", "Reading Comprehension"], "method": ["Convolution"], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension"} {"abstract": "Atrial fibrillation (AF), a common abnormal heartbeat rhythm, is a life-threatening recurrent disease that affects older adults. Automatic classification is one of the most valuable topics in medical sciences and bioinformatics, especially the detection of atrial fibrillation. However, it is difficult to accurately explain the local characteristics of electrocardiogram (ECG) signals by manual analysis, due to their small amplitude and short duration, coupled with the complexity and non-linearity. Hence, in this paper, we propose a novel deep arrhythmia-diagnosis method, named deep CNN-BLSTM network model, to automatically detect the AF heartbeats using the ECG signals. The model mainly consists of four convolution layers: two BLSTM layers and two fully connected layers. The datasets of RR intervals (called set A) and heartbeat sequences (P-QRS-T waves, called set B) are fed into the above-mentioned model. Most importantly, our proposed approach achieved favorable performances with an accuracy of 99.94% and 98.63% in the training and validation set of set A, respectively. In the testing set (unseen data sets), we obtained an accuracy of 96.59%, a sensitivity of 99.93%, and a specificity of 97.03%. 
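An illustrative sketch (not the authors' exact configuration) of a CNN-BLSTM classifier for 1D heartbeat sequences as described in the record above: stacked 1D convolutions extract local waveform features, a two-layer bidirectional LSTM models their temporal context, and two fully connected layers produce class scores. All layer sizes and the input length are assumptions.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(64, 64, num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, x):                  # x: (batch, 1, signal_length)
        f = self.conv(x).permute(0, 2, 1)  # (batch, time, channels) for the LSTM
        h, _ = self.bilstm(f)
        return self.head(h.mean(dim=1))    # average over time, then classify

if __name__ == "__main__":
    print(CnnBiLstm()(torch.randn(4, 1, 360)).shape)   # torch.Size([4, 2])
```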
To the best of our knowledge, the algorithm we proposed has shown excellent results compared to many state-of-art researches, which provides a new solution for the AF automatic detection.", "field": ["Convolutions"], "task": ["Arrhythmia Detection", "Atrial Fibrillation Detection", "Electrocardiography (ECG)"], "method": ["Convolution"], "dataset": ["MIT-BIH AF"], "metric": ["Accuracy"], "title": "A Novel Deep Arrhythmia-Diagnosis Network for Atrial Fibrillation Classification Using Electrocardiogram Signals"} {"abstract": "BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performed system by 1.65 on ROUGE-L. The codes to reproduce our results are available at https://github.com/nlpyang/BertSum", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Extractive Text Summarization"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Fine-tune BERT for Extractive Summarization"} {"abstract": "We describe the latest improvements to the IBM English conversational\ntelephone speech recognition system. Some of the techniques that were found\nbeneficial are: maxout networks with annealed dropout rates; networks with a\nvery large number of outputs trained on 2000 hours of data; joint modeling of\npartially unfolded recurrent neural networks and convolutional nets by\ncombining the bottleneck and output layers and retraining the resulting model;\nand lastly, sophisticated language model rescoring with exponential and neural\nnetwork LMs. These techniques result in an 8.0% word error rate on the\nSwitchboard part of the Hub5-2000 evaluation test set which is 23% relative\nbetter than our previous best published result.", "field": ["Activation Functions", "Regularization"], "task": ["Language Modelling", "Speech Recognition"], "method": ["Maxout", "Dropout"], "dataset": ["Switchboard + Hub500"], "metric": ["Percentage error"], "title": "The IBM 2015 English Conversational Telephone Speech Recognition System"} {"abstract": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional\nencoder for machine comprehension (MC). Our proposed MRU encoders are\ncharacterized by multi-ranged gating, executing a series of parameterized\ncontract-and-expand layers for learning gating vectors that benefit from long\nand short-term dependencies. The aims of our approach are as follows: (1)\nlearning representations that are concurrently aware of long and short-term\ncontext, (2) modeling relationships between intra-document blocks and (3) fast\nand efficient sequence encoding. 
We show that our proposed encoder demonstrates\npromising results both as a standalone encoder and as well as a complementary\nbuilding block. We conduct extensive experiments on three challenging MC\ndatasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive\nperformance on all. On the RACE benchmark, our model outperforms DFN (Dynamic\nFusion Networks) by 1.5%-6% without using any recurrent or convolution layers.\nSimilarly, we achieve competitive performance relative to AMANDA on the\nSearchQA benchmark and BiDAF on the NarrativeQA benchmark without using any\nLSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM\narchitectures further improves performance, achieving state-of-the-art results.", "field": ["Convolutions"], "task": ["Reading Comprehension"], "method": ["Convolution"], "dataset": ["RACE"], "metric": ["RACE-h", "RACE-m", "RACE"], "title": "Multi-range Reasoning for Machine Comprehension"} {"abstract": "This paper is on human pose estimation using Convolutional Neural Networks.\nOur main contribution is a CNN cascaded architecture specifically designed for\nlearning part relationships and spatial context, and robustly inferring pose\neven for the case of severe part occlusions. To this end, we propose a\ndetection-followed-by-regression CNN cascade. The first part of our cascade\noutputs part detection heatmaps and the second part performs regression on\nthese heatmaps. The benefits of the proposed architecture are multi-fold: It\nguides the network where to focus in the image and effectively encodes part\nconstraints and context. More importantly, it can effectively cope with\nocclusions because part detection heatmaps for occluded parts provide low\nconfidence scores which subsequently guide the regression part of our network\nto rely on contextual information in order to predict the location of these\nparts. Additionally, we show that the proposed cascade is flexible enough to\nreadily allow the integration of various CNN architectures for both detection\nand regression, including recent ones based on residual learning. Finally, we\nillustrate that our cascade achieves top performance on the MPII and LSP data\nsets. Code can be downloaded from http://www.cs.nott.ac.uk/~psxab5/", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Pose Estimation", "Regression"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Human pose estimation via Convolutional Part Heatmap Regression"} {"abstract": "Dialogue Act recognition associate dialogue acts (i.e., semantic labels) to\nutterances in a conversation. The problem of associating semantic labels to\nutterances can be treated as a sequence labeling problem. In this work, we\nbuild a hierarchical recurrent neural network using bidirectional LSTM as a\nbase unit and the conditional random field (CRF) as the top layer to classify\neach utterance into its corresponding dialogue act. The hierarchical network\nlearns representations at multiple levels, i.e., word level, utterance level,\nand conversation level. 
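A rough sketch of the hierarchical encoder described above, without the CRF layer: a word-level BiLSTM is pooled into one vector per utterance, and an utterance-level BiLSTM then yields conversation-level representations that a linear layer maps to dialogue-act scores. The pooling choice, vocabulary, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab=5000, emb=50, hidden=64, num_acts=42):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.utt_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.scores = nn.Linear(2 * hidden, num_acts)

    def forward(self, conversation):                    # (num_utterances, words_per_utterance)
        w, _ = self.word_lstm(self.emb(conversation))   # word-level representations
        utt_vecs = w.mean(dim=1).unsqueeze(0)           # one pooled vector per utterance
        c, _ = self.utt_lstm(utt_vecs)                  # conversation-level representations
        return self.scores(c).squeeze(0)                # per-utterance dialogue-act scores

if __name__ == "__main__":
    conv = torch.randint(0, 5000, (7, 20))              # 7 utterances of 20 word ids each
    print(HierarchicalEncoder()(conv).shape)            # torch.Size([7, 42])
```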
The conversation level representations are input to the\nCRF layer, which takes into account not only all previous utterances but also\ntheir dialogue acts, thus modeling the dependency among both, labels and\nutterances, an important consideration of natural dialogue. We validate our\napproach on two different benchmark data sets, Switchboard and Meeting Recorder\nDialogue Act, and show performance improvement over the state-of-the-art\nmethods by $2.2\\%$ and $4.1\\%$ absolute points, respectively. It is worth\nnoting that the inter-annotator agreement on Switchboard data set is $84\\%$,\nand our method is able to achieve the accuracy of about $79\\%$ despite being\ntrained on the noisy data.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Dialogue Act Classification"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Switchboard corpus", "ICSI Meeting Recorder Dialog Act (MRDA) corpus"], "metric": ["Accuracy"], "title": "Dialogue Act Sequence Labeling using Hierarchical encoder with CRF"} {"abstract": "In this paper we discuss several forms of spatiotemporal convolutions for\nvideo analysis and study their effects on action recognition. Our motivation\nstems from the observation that 2D CNNs applied to individual frames of the\nvideo have remained solid performers in action recognition. In this work we\nempirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within\nthe framework of residual learning. Furthermore, we show that factorizing the\n3D convolutional filters into separate spatial and temporal components yields\nsignificantly advantages in accuracy. Our empirical study leads to the design\nof a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs\nthat achieve results comparable or superior to the state-of-the-art on\nSports-1M, Kinetics, UCF101 and HMDB51.", "field": ["Image Data Augmentation", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Video Sampling", "Feedforward Networks", "Pooling Operations", "Skip Connections"], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization"], "method": ["Weight Decay", "R(2+1)D", "SGD with Momentum", "Average Pooling", "Temporal Jittering", "Random Resized Crop", "(2+1)D Convolution", "Batch Normalization", "Rectified Linear Units", "ReLU", "Residual Connection", "Linear Warmup", "Global Average Pooling", "Dense Connections"], "dataset": ["Kinetics-400", "UCF101", "Sports-1M", "HMDB-51"], "metric": ["3-fold Accuracy", "Vid acc@5", "Video hit@1 ", "Vid acc@1", "Video hit@5", "Average accuracy of 3 splits", "Clip Hit@1"], "title": "A Closer Look at Spatiotemporal Convolutions for Action Recognition"} {"abstract": "Feature extraction plays a significant part in computer vision tasks. In this\npaper, we propose a method which transfers rich deep features from a pretrained\nmodel on face verification task and feeds the features into Bayesian ridge\nregression algorithm for facial beauty prediction. We leverage the deep neural\nnetworks that extracts more abstract features from stacked layers. Through\nsimple but effective feature fusion strategy, our method achieves improved or\ncomparable performance on SCUT-FBP dataset and ECCV HotOrNot dataset. 
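A minimal sketch of the prediction stage in the facial beauty record above: deep features taken from a pretrained face model are fed to Bayesian ridge regression to predict a score. The random "features" and the score range are placeholders standing in for real pretrained-network activations and human ratings.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2048))          # stand-in for pooled deep face features
scores = rng.uniform(1.0, 5.0, size=500)         # stand-in for human beauty ratings

X_tr, X_te, y_tr, y_te = train_test_split(features, scores, test_size=0.2, random_state=0)
model = BayesianRidge().fit(X_tr, y_tr)          # Bayesian ridge regression on deep features
pred = model.predict(X_te)
print("MAE:", np.abs(pred - y_te).mean())
```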
Our\nexperiments demonstrate the effectiveness of the proposed method and clarify\nthe inner interpretability of facial beauty perception.", "field": ["Image Models"], "task": ["Face Verification", "Facial Beauty Prediction", "Regression"], "method": ["Interpretability"], "dataset": ["SCUT-FBP", "ECCV HotOrNot"], "metric": ["MAE", "Pearson Correlation"], "title": "Transferring Rich Deep Features for Facial Beauty Prediction"} {"abstract": "The prevalent approach to sequence to sequence learning maps an input\nsequence to a variable length output sequence via recurrent neural networks. We\nintroduce an architecture based entirely on convolutional neural networks.\nCompared to recurrent models, computations over all elements can be fully\nparallelized during training and optimization is easier since the number of\nnon-linearities is fixed and independent of the input length. Our use of gated\nlinear units eases gradient propagation and we equip each decoder layer with a\nseparate attention module. We outperform the accuracy of the deep LSTM setup of\nWu et al. (2016) on both WMT'14 English-German and WMT'14 English-French\ntranslation at an order of magnitude faster speed, both on GPU and CPU.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Machine Translation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["WMT2016 English-Romanian", "IWSLT2015 German-English", "IWSLT2015 English-German", "WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score"], "title": "Convolutional Sequence to Sequence Learning"} {"abstract": "Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Sentiment Analysis", "Text Classification"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Slanted Triangular Learning Rates", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Yelp Fine-grained classification", "Yelp Binary classification", "Yelp-2", "Yahoo! Answers", "DBpedia", "Yelp-5", "AG News", "IMDb", "Sogou News", "TREC-6"], "metric": ["Error", "Accuracy (2 classes)", "Accuracy (10 classes)", "Accuracy"], "title": "How to Fine-Tune BERT for Text Classification?"} {"abstract": "Multi-person pose estimation in the wild is challenging. Although\nstate-of-the-art human detectors have demonstrated good performance, small\nerrors in localization and recognition are inevitable. 
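A small sketch of the gated linear units used by the convolutional sequence-to-sequence record above: a 1D convolution produces twice the target channels, and half of them act as a sigmoid gate on the other half. The channel count and kernel width are assumptions.

```python
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    def __init__(self, channels=256, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel, padding=kernel // 2)

    def forward(self, x):            # x: (batch, channels, length)
        a, b = self.conv(x).chunk(2, dim=1)
        return a * torch.sigmoid(b)  # equivalent to torch.nn.functional.glu(self.conv(x), dim=1)

if __name__ == "__main__":
    x = torch.randn(2, 256, 30)
    print(GatedConv1d()(x).shape)    # torch.Size([2, 256, 30])
```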
These errors can cause\nfailures for a single-person pose estimator (SPPE), especially for methods that\nsolely depend on human detection results. In this paper, we propose a novel\nregional multi-person pose estimation (RMPE) framework to facilitate pose\nestimation in the presence of inaccurate human bounding boxes. Our framework\nconsists of three components: Symmetric Spatial Transformer Network (SSTN),\nParametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals\nGenerator (PGPG). Our method is able to handle inaccurate bounding boxes and\nredundant detections, allowing it to achieve a 17% increase in mAP over the\nstate-of-the-art methods on the MPII (multi person) dataset.Our model and\nsource codes are publicly available.", "field": ["Image Model Blocks"], "task": ["Human Detection", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": ["Spatial Transformer"], "dataset": ["COCO", "MPII Multi-Person", "COCO test-dev"], "metric": ["Test AP", "APM", "FPS", "AP75", "AP", "APL", "mAP@0.5", "AP50"], "title": "RMPE: Regional Multi-person Pose Estimation"} {"abstract": "Reasoning and inference are central to human and artificial intelligence.\nModeling inference in human language is very challenging. With the availability\nof large annotated data (Bowman et al., 2015), it has recently become feasible\nto train neural network based inference models, which have shown to be very\neffective. In this paper, we present a new state-of-the-art result, achieving\nthe accuracy of 88.6% on the Stanford Natural Language Inference Dataset.\nUnlike the previous top models that use very complicated network architectures,\nwe first demonstrate that carefully designing sequential inference models based\non chain LSTMs can outperform all previous models. Based on this, we further\nshow that by explicitly considering recursive architectures in both local\ninference modeling and inference composition, we achieve additional\nimprovement. Particularly, incorporating syntactic parsing information\ncontributes to our best result---it further improves the performance even when\nadded to the already very strong model.", "field": ["Sequence To Sequence Models"], "task": ["Natural Language Inference"], "method": ["ESIM", "Enhanced Sequential Inference Model"], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Enhanced LSTM for Natural Language Inference"} {"abstract": "Current state-of-the-art approaches for named entity recognition (NER) using BERT-style transformers typically use one of two different approaches: (1) The first fine-tunes the transformer itself on the NER task and adds only a simple linear layer for word-level predictions. (2) The second uses the transformer only to provide features to a standard LSTM-CRF sequence labeling architecture and thus performs no fine-tuning. In this paper, we perform a comparative analysis of both approaches in a variety of settings currently considered in the literature. In particular, we evaluate how well they work when document-level features are leveraged. Our evaluation on the classic CoNLL benchmark datasets for 4 languages shows that document-level features significantly improve NER quality and that fine-tuning generally outperforms the feature-based approaches. We present recommendations for parameters as well as several new state-of-the-art numbers. 
Our approach is integrated into the Flair framework to facilitate reproduction of our experiments.", "field": ["Feedforward Networks"], "task": ["Named Entity Recognition"], "method": ["Linear Layer"], "dataset": ["CoNLL03"], "metric": ["F1"], "title": "FLERT: Document-Level Features for Named Entity Recognition"} {"abstract": "We present ProxEmo, a novel end-to-end emotion prediction algorithm for socially aware robot navigation among pedestrians. Our approach predicts the perceived emotions of a pedestrian from walking gaits, which is then used for emotion-guided navigation taking into account social and proxemic constraints. To classify emotions, we propose a multi-view skeleton graph convolution-based model that works on a commodity camera mounted onto a moving robot. Our emotion recognition is integrated into a mapless navigation scheme and makes no assumptions about the environment of pedestrian motion. It achieves a mean average emotion prediction precision of 82.47% on the Emotion-Gait benchmark dataset. We outperform current state-of-art algorithms for emotion recognition from 3D gaits. We highlight its benefits in terms of navigation in indoor scenes using a Clearpath Jackal robot.", "field": ["Convolutions"], "task": ["Emotion Classification", "Emotion Recognition", "Gesture Recognition", "Human robot interaction", "Pose Estimation", "Robot Navigation"], "method": ["Grouped Convolution", "1x1 Convolution", "Convolution"], "dataset": ["EWALK"], "metric": ["Accuracy"], "title": "ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation"} {"abstract": "We introduce a fast and efficient convolutional neural network, ESPNet, for\nsemantic segmentation of high resolution images under resource constraints.\nESPNet is based on a new convolutional module, efficient spatial pyramid (ESP),\nwhich is efficient in terms of computation, memory, and power. ESPNet is 22\ntimes faster (on a standard GPU) and 180 times smaller than the\nstate-of-the-art semantic segmentation network PSPNet, while its category-wise\naccuracy is only 8% less. We evaluated ESPNet on a variety of semantic\nsegmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy\nwhole slide image dataset. Under the same constraints on memory and\ncomputation, ESPNet outperforms all the current efficient CNN networks such as\nMobileNet, ShuffleNet, and ENet on both standard metrics and our newly\nintroduced performance metrics that measure efficiency on edge devices. 
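A rough sketch of the idea behind the efficient spatial pyramid above, not the exact ESP module (which additionally uses hierarchical feature fusion to remove gridding): a 1x1 convolution reduces channels, several parallel dilated convolutions cover growing receptive fields, and their outputs are concatenated. Channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedPyramid(nn.Module):
    def __init__(self, in_ch=128, out_ch=128, rates=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = out_ch // len(rates)
        self.reduce = nn.Conv2d(in_ch, branch_ch, 1)            # pointwise channel reduction
        self.branches = nn.ModuleList(
            nn.Conv2d(branch_ch, branch_ch, 3, padding=r, dilation=r) for r in rates
        )

    def forward(self, x):
        r = self.reduce(x)
        return torch.cat([b(r) for b in self.branches], dim=1)  # concatenate all dilation rates

if __name__ == "__main__":
    x = torch.randn(1, 128, 64, 64)
    print(DilatedPyramid()(x).shape)   # torch.Size([1, 128, 64, 64])
```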
Our\nnetwork can process high resolution images at a rate of 112 and 9 frames per\nsecond on a standard GPU and edge device, respectively.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Semantic Segmentation Models", "Learning Rate Schedules", "Stochastic Optimization", "Degridding", "Activation Functions", "Convolutions", "Image Model Blocks"], "task": ["Panoptic Segmentation", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": ["Weight Decay", "ESPNet", "Dilated Convolution", "Pointwise Convolution", "Random Horizontal Flip", "Adam", "Random Resized Crop", "Random Scaling", "ESP", "Efficient Spatial Pyramid", "Convolution", "1x1 Convolution", "PReLU", "Parameterized ReLU", "Hierarchical Feature Fusion", "Kaiming Initialization", "Step Decay"], "dataset": ["PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)"], "title": "ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation"} {"abstract": "As a classic statistical model of 3D facial shape and texture, 3D Morphable\nModel (3DMM) is widely used in facial analysis, e.g., model fitting, image\nsynthesis. Conventional 3DMM is learned from a set of well-controlled 2D face\nimages with associated 3D face scans, and represented by two sets of PCA basis\nfunctions. Due to the type and amount of training data, as well as the linear\nbases, the representation power of 3DMM can be limited. To address these\nproblems, this paper proposes an innovative framework to learn a nonlinear 3DMM\nmodel from a large set of unconstrained face images, without collecting 3D face\nscans. Specifically, given a face image as input, a network encoder estimates\nthe projection, shape and texture parameters. Two decoders serve as the\nnonlinear 3DMM to map from the shape and texture parameters to the 3D shape and\ntexture, respectively. With the projection parameter, 3D shape, and texture, a\nnovel analytically-differentiable rendering layer is designed to reconstruct\nthe original input face. The entire network is end-to-end trainable with only\nweak supervision. We demonstrate the superior representation power of our\nnonlinear 3DMM over its linear counterpart, and its contribution to face\nalignment and 3D reconstruction.", "field": ["Dimensionality Reduction"], "task": ["3D Reconstruction", "Face Alignment", "Image Generation"], "method": ["Principal Components Analysis", "PCA"], "dataset": ["AFLW2000"], "metric": ["Error rate"], "title": "Nonlinear 3D Face Morphable Model"} {"abstract": "We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that computer vision models have shared weaknesses. The first dataset is called ImageNet-A and is like the ImageNet test set, but it is far more challenging for existing models. We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models. On ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on ImageNet-O is near random chance levels. 
We find that existing data augmentation techniques hardly boost performance, and using other public training datasets provides improvements that are limited. However, we find that improvements to computer vision architectures provide a promising path towards robust models.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Adversarial Attack", "Data Augmentation", "Domain Generalization", "Out-of-Distribution Detection"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet-A"], "metric": ["Top-1 accuracy %"], "title": "Natural Adversarial Examples"} {"abstract": "A field that has directly benefited from the recent advances in deep learning\nis Automatic Speech Recognition (ASR). Despite the great achievements of the\npast decades, however, a natural and robust human-machine speech interaction\nstill appears to be out of reach, especially in challenging environments\ncharacterized by significant noise and reverberation. To improve robustness,\nmodern speech recognizers often employ acoustic models based on Recurrent\nNeural Networks (RNNs), that are naturally able to exploit large time contexts\nand long-term speech modulations. It is thus of great interest to continue the\nstudy of proper techniques for improving the effectiveness of RNNs in\nprocessing speech signals.\n In this paper, we revise one of the most popular RNN models, namely Gated\nRecurrent Units (GRUs), and propose a simplified architecture that turned out\nto be very effective for ASR. The contribution of this work is two-fold: First,\nwe analyze the role played by the reset gate, showing that a significant\nredundancy with the update gate occurs. As a result, we propose to remove the\nformer from the GRU design, leading to a more efficient and compact single-gate\nmodel. Second, we propose to replace hyperbolic tangent with ReLU activations.\nThis variation couples well with batch normalization and could help the model\nlearn long-term dependencies without numerical issues.\n Results show that the proposed architecture, called Light GRU (Li-GRU), not\nonly reduces the per-epoch training time by more than 30% over a standard GRU,\nbut also consistently improves the recognition accuracy across different tasks,\ninput features, noisy conditions, as well as across different ASR paradigms,\nranging from standard DNN-HMM speech recognizers to end-to-end CTC models.", "field": ["Recurrent Neural Networks", "Activation Functions", "Normalization"], "task": ["Speech Recognition"], "method": ["Gated Recurrent Unit", "Batch Normalization", "ReLU", "GRU", "Rectified Linear Units"], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "Light Gated Recurrent Units for Speech Recognition"} {"abstract": "State-of-the-art natural language processing systems rely on supervision in\nthe form of annotated data to learn competent models. These models are\ngenerally trained on data in a single language (usually English), and cannot be\ndirectly used beyond that language. 
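A simplified sketch of the single-gate recurrent unit in the Li-GRU record above: the reset gate is removed, and the candidate state uses ReLU on a batch-normalized input projection instead of tanh. This illustrates the idea rather than reproducing the exact Li-GRU (which also normalizes the gate's input projection); sizes are assumptions.

```python
import torch
import torch.nn as nn

class SingleGateReluCell(nn.Module):
    def __init__(self, input_size=40, hidden_size=128):
        super().__init__()
        self.wz = nn.Linear(input_size, hidden_size)
        self.uz = nn.Linear(hidden_size, hidden_size, bias=False)
        self.wh = nn.Linear(input_size, hidden_size)
        self.uh = nn.Linear(hidden_size, hidden_size, bias=False)
        self.bn = nn.BatchNorm1d(hidden_size)          # applied to the candidate's input projection

    def forward(self, x_t, h_prev):                    # x_t: (batch, input), h_prev: (batch, hidden)
        z = torch.sigmoid(self.wz(x_t) + self.uz(h_prev))          # update gate only, no reset gate
        h_cand = torch.relu(self.bn(self.wh(x_t)) + self.uh(h_prev))
        return z * h_prev + (1 - z) * h_cand

if __name__ == "__main__":
    cell, h = SingleGateReluCell(), torch.zeros(8, 128)
    for t in range(50):                                # unroll over 50 acoustic feature frames
        h = cell(torch.randn(8, 40), h)
    print(h.shape)                                     # torch.Size([8, 128])
```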
Since collecting data in every language is\nnot realistic, there has been a growing interest in cross-lingual language\nunderstanding (XLU) and low-resource cross-language transfer. In this work, we\nconstruct an evaluation set for XLU by extending the development and test sets\nof the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15\nlanguages, including low-resource languages such as Swahili and Urdu. We hope\nthat our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence\nunderstanding by providing an informative standard evaluation task. In\naddition, we provide several baselines for multilingual sentence understanding,\nincluding two based on machine translation systems, and two that use parallel\ndata to train aligned multilingual bag-of-words and LSTM encoders. We find that\nXNLI represents a practical and challenging evaluation suite, and that directly\ntranslating the test data yields the best performance among available\nbaselines.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Cross-Lingual Natural Language Inference", "Machine Translation", "Natural Language Inference"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["XNLI French"], "metric": ["Accuracy"], "title": "XNLI: Evaluating Cross-lingual Sentence Representations"} {"abstract": "The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the \"complex\" subsets of WSC273, introduced by Trichelair et al. (2018).", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Common Sense Reasoning", "Language Modelling", "Natural Language Understanding"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Winograd Schema Challenge", "WNLI"], "metric": ["Score", "Accuracy"], "title": "A Surprisingly Robust Trick for Winograd Schema Challenge"} {"abstract": "Knowledge graphs are graphical representations of large databases of facts, which typically suffer from incompleteness. Inferring missing relations (links) between entities (nodes) is the task of link prediction. A recent state-of-the-art approach to link prediction, ConvE, implements a convolutional neural network to extract features from concatenated subject and relation vectors. Whilst results are impressive, the method is unintuitive and poorly understood. 
We propose a hypernetwork architecture that generates simplified relation-specific convolutional filters that (i) outperforms ConvE and all previous approaches across standard datasets; and (ii) can be framed as tensor factorization and thus set within a well established family of factorization models for link prediction. We thus demonstrate that convolution simply offers a convenient computational means of introducing sparsity and parameter tying to find an effective trade-off between non-linear expressiveness and the number of parameters to learn.", "field": ["Convolutions", "Feedforward Networks"], "task": ["Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": ["HyperNetwork", "Convolution"], "dataset": [" FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Hypernetwork Knowledge Graph Embeddings"} {"abstract": "Despite significant recent progress on generative models, controlled generation of images depicting multiple and complex object layouts is still a difficult problem. Among the core challenges are the diversity of appearance a given object may possess and, as a result, exponential set of images consistent with a specified layout. To address these challenges, we propose a novel approach for layout-based image generation; we call it Layout2Im. Given the coarse spatial layout (bounding boxes + object categories), our model can generate a set of realistic images which have the correct objects in the desired locations. The representation of each object is disentangled into a specified/certain part (category) and an unspecified/uncertain part (appearance). The category is encoded using a word embedding and the appearance is distilled into a low-dimensional vector sampled from a normal distribution. Individual object representations are composed together using convolutional LSTM, to obtain an encoding of the complete layout, and then decoded to an image. Several loss terms are introduced to encourage accurate and diverse generation. The proposed Layout2Im model significantly outperforms the previous state of the art, boosting the best reported inception score by 24.66% and 28.57% on the very challenging COCO-Stuff and Visual Genome datasets, respectively. Extensive experiments also demonstrate our method's ability to generate complex and diverse images with multiple objects.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Layout-to-Image Generation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["COCO-Stuff 64x64", "Visual Genome 64x64"], "metric": ["Inception Score", "FID"], "title": "Image Generation from Layout"} {"abstract": "We present a new neural sequence-to-sequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET that models the interaction of key words and salient sentences using a new two-level pointer network based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. 
Experiments on large scale benchmark corpora demonstrate the efficacy of SWAP-NET that outperforms state-of-the-art extractive summarizers.", "field": ["Output Functions", "Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models", "Attention Mechanisms"], "task": ["Abstractive Text Summarization", "Document Summarization", "Machine Translation", "Question Answering", "Text Summarization"], "method": ["Softmax", "Additive Attention", "Long Short-Term Memory", "Pointer Network", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks"} {"abstract": "Hand pose estimation from monocular depth images is an important and\nchallenging problem for human-computer interaction. Recently deep convolutional\nnetworks (ConvNet) with sophisticated design have been employed to address it,\nbut the improvement over traditional methods is not so apparent. To promote the\nperformance of directly 3D coordinate regression, we propose a tree-structured\nRegion Ensemble Network (REN), which partitions the convolution outputs into\nregions and integrates the results from multiple regressors on each regions.\nCompared with multi-model ensemble, our model is completely end-to-end\ntraining. The experimental results demonstrate that our approach achieves the\nbest performance among state-of-the-arts on two public datasets.", "field": ["Convolutions"], "task": ["Hand Pose Estimation", "Pose Estimation", "Regression"], "method": ["Convolution"], "dataset": ["ICVL Hands", "NYU Hands", "MSRA Hands"], "metric": ["Average 3D Error"], "title": "Region Ensemble Network: Improving Convolutional Network for Hand Pose Estimation"} {"abstract": "Neural networks provide new possibilities to automatically learn complex language patterns and query-document relations. Neural IR models have achieved promising results in learning query-document relevance patterns, but few explorations have been done on understanding the text content of a query or a document. This paper studies leveraging a recently-proposed contextual neural language model, BERT, to provide deeper text understanding for IR. Experimental results demonstrate that the contextual text representations from BERT are more effective than traditional word embeddings. Compared to bag-of-words retrieval models, the contextual language model can better leverage language structures, bringing large improvements on queries written in natural languages. 
Combining the text understanding ability with search knowledge leads to an enhanced pre-trained BERT model that can benefit related search tasks where training data are limited.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Ad-Hoc Information Retrieval", "Language Modelling", "Word Embeddings"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["TREC Robust04"], "metric": ["nDCG@20"], "title": "Deeper Text Understanding for IR with Contextual Neural Language Modeling"} {"abstract": "In natural images, information is conveyed at different frequencies where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures. Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies. In this work, we propose to factorize the mixed feature maps by their frequencies, and design a novel Octave Convolution (OctConv) operation to store and process feature maps that vary spatially \"slower\" at a lower spatial resolution reducing both memory and computation cost. Unlike existing multi-scale methods, OctConv is formulated as a single, generic, plug-and-play convolutional unit that can be used as a direct replacement of (vanilla) convolutions without any adjustments in the network architecture. It is also orthogonal and complementary to methods that suggest better topologies or reduce channel-wise redundancy like group or depth-wise convolutions. We experimentally show that by simply replacing convolutions with OctConv, we can consistently boost accuracy for both image and video recognition tasks, while reducing memory and computational cost. 
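A condensed sketch of the octave convolution idea above: channels are split into a high-frequency group at full resolution and a low-frequency group at half resolution, and four convolution paths (high-to-high, high-to-low, low-to-high, low-to-low) exchange information between them. The alpha ratio and tensor sizes are assumptions, and the first/last-layer handling of a full OctConv network is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, k=3):
        super().__init__()
        in_l, out_l = int(alpha * in_ch), int(alpha * out_ch)
        in_h, out_h = in_ch - in_l, out_ch - out_l
        pad = k // 2
        self.h2h = nn.Conv2d(in_h, out_h, k, padding=pad)
        self.h2l = nn.Conv2d(in_h, out_l, k, padding=pad)
        self.l2h = nn.Conv2d(in_l, out_h, k, padding=pad)
        self.l2l = nn.Conv2d(in_l, out_l, k, padding=pad)

    def forward(self, x_h, x_l):      # x_h: full resolution, x_l: half resolution
        out_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), scale_factor=2, mode="nearest")
        out_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, 2))
        return out_h, out_l

if __name__ == "__main__":
    x_h, x_l = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32)
    y_h, y_l = OctaveConv(64, 64)(x_h, x_l)
    print(y_h.shape, y_l.shape)       # (1, 32, 64, 64) and (1, 32, 32, 32)
```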
An OctConv-equipped ResNet-152 can achieve 82.9% top-1 classification accuracy on ImageNet with merely 22.2 GFLOPs.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Action Classification", "Image Classification", "Video Recognition"], "method": ["Depthwise Convolution", "Cosine Annealing", "Average Pooling", "Mixup", "1x1 Convolution", "ResNet", "MobileNetV2", "ReLU", "Residual Connection", "Dense Connections", "Grouped Convolution", "Batch Normalization", "Residual Network", "Label Smoothing", "Octave Convolution", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Kaiming Initialization", "SGD", "Sigmoid Activation", "Stochastic Gradient Descent", "ResNeXt Block", "Inverted Residual Block", "ResNeXt", "Softmax", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Kinetics-400", "ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Vid acc@1", "Top 1 Accuracy"], "title": "Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution"} {"abstract": "Scene Parsing is an important cog for modern autonomous driving systems. Most of the works in semantic segmentation pertain to day-time scenes with favourable weather and illumination conditions. In this paper, we propose a novel deep architecture, NiSeNet, that performs semantic segmentation of night scenes using a domain mapping approach of synthetic to real data. It is a dual-channel network, where we designed a Real channel using DeepLabV3+ coupled with an MSE loss to preserve the spatial information. In addition, we used an Adaptive channel reducing the domain gap between synthetic and real night images, which also complements the failures of the Real channel output. Apart from the dual channel, we introduced a novel fusion scheme to fuse the outputs of the two channels. In addition, we compiled a new dataset, the Urban Night Driving Dataset (UNDD); it consists of 7125 unlabelled day and night images; additionally, it has 75 night images with pixel-level annotations having classes equivalent to the Cityscapes dataset. We evaluated our approach on the Berkeley Deep Drive dataset, the challenging Mapillary dataset and the UNDD dataset to show that the proposed method outperforms the state-of-the-art techniques in terms of accuracy and visual quality.", "field": ["Semantic Segmentation Models", "Semantic Segmentation Modules", "Normalization", "Convolutions", "Pooling Operations"], "task": ["Scene Parsing", "Semantic Segmentation"], "method": ["Dilated Convolution", "Batch Normalization", "DeepLabv3", "1x1 Convolution", "ASPP", "Atrous Spatial Pyramid Pooling", "Spatial Pyramid Pooling"], "dataset": ["Mapillary val", "BDD100k"], "metric": ["mIoU"], "title": "What's There in the Dark"} {"abstract": "Several dual-domain convolutional neural network-based methods show outstanding performance in reducing image compression artifacts. However, they suffer from handling color images because the compression processes for gray-scale and color images are completely different. 
Moreover, these methods train a specific model for each compression quality and require multiple models to achieve different compression qualities. To address these problems, we proposed an implicit dual-domain convolutional network (IDCN) with the pixel position labeling map and the quantization tables as inputs. Specifically, we proposed an extractor-corrector framework-based dual-domain correction unit (DCU) as the basic component to formulate the IDCN. A dense block was introduced to improve the performance of extractor in DRU. The implicit dual-domain translation allows the IDCN to handle color images with the discrete cosine transform (DCT)-domain priors. A flexible version of IDCN (IDCN-f) was developed to handle a wide range of compression qualities. Experiments for both objective and subjective evaluations on benchmark datasets show that IDCN is superior to the state-of-the-art methods and IDCN-f exhibits excellent abilities to handle a wide range of compression qualities with little performance sacrifice and demonstrates great potential for practical applications.", "field": ["Activation Functions", "Normalization", "Fourier-related Transforms", "Convolutions", "Skip Connections", "Image Model Blocks"], "task": ["Color Image Compression Artifact Reduction", "Image Compression", "Image Compression Artifact Reduction", "JPEG Artifact Correction", "Quantization"], "method": ["Dense Block", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "Discrete Cosine Transform", "ReLU", "Rectified Linear Units"], "dataset": ["ICB (Quality 10 Color)", "Live1 (Quality 10 Grayscale)", "LIVE1 (Quality 20 Grayscale)", "LIVE1 (Quality 20 Color)", "ICB (Quality 20 Grayscale)", "ICB (Quality 20 Color)", "LIVE1 (Quality 10 Color)", "ICB (Quality 10 Grayscale)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "Implicit Dual-domain Convolutional Network for Robust Color Image Compression Artifact Reduction"} {"abstract": "In skeleton-based action recognition, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have achieved remarkable performance. However, in existing GCN-based methods, the topology of the graph is set manually, and it is fixed over all layers and input samples. This may not be optimal for the hierarchical GCN and diverse samples in action recognition tasks. In addition, the second-order information (the lengths and directions of bones) of the skeleton data, which is naturally more informative and discriminative for action recognition, is rarely investigated in existing methods. In this work, we propose a novel two-stream adaptive graph convolutional network (2s-AGCN) for skeleton-based action recognition. The topology of the graph in our model can be either uniformly or individually learned by the BP algorithm in an end-to-end manner. This data-driven method increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Moreover, a two-stream framework is proposed to model both the first-order and the second-order information simultaneously, which shows notable improvement for the recognition accuracy. 
Extensive experiments on the two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art with a significant margin.", "field": ["Graph Models"], "task": ["Action Recognition", "graph construction", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition"} {"abstract": "Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a swapped prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.", "field": ["Self-Supervised Learning", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Large Batch Optimization", "Convolutions", "Feedforward Networks", "Instance Segmentation Models", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Classification", "Self-Supervised Image Classification", "Semi-Supervised Image Classification"], "method": ["ResNet", "LARS", "Detection Transformer", "SwAV", "Feedforward Network", "Swapping Assignments between Views", "Batch Normalization", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Residual Connection", "Bottleneck Residual Block", "Residual Network", "Convolution", "Mask R-CNN", "Residual Block", "Dense Connections", "Detr"], "dataset": ["iNaturalist 2018", "ImageNet (finetuned)", "ImageNet - 1% labeled data", "ImageNet"], "metric": ["Top 1 Accuracy", "Top-1 Accuracy", "Top 1 Accuracy (kNN, k=20)", "Top 5 Accuracy", "Number of Params"], "title": "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments"} {"abstract": "Two-stage deep object detectors generate a set of regions-of-interest (RoI) in the first stage, then, in the second stage, identify objects among the proposed RoIs that sufficiently overlap with a ground truth (GT) box. 
The second stage is known to suffer from a bias towards RoIs that have low intersection-over-union (IoU) with the associated GT boxes. To address this issue, we first propose a sampling method to generate bounding boxes (BB) that overlap with a given reference box more than a given IoU threshold. Then, we use this BB generation method to develop a positive RoI (pRoI) generator that produces RoIs following any desired spatial or IoU distribution, for the second-stage. We show that our pRoI generator is able to simulate other sampling methods for positive examples such as hard example mining and prime sampling. Using our generator as an analysis tool, we show that (i) IoU imbalance has an adverse effect on performance, (ii) hard positive example mining improves the performance only for certain input IoU distributions, and (iii) the imbalance among the foreground classes has an adverse effect on performance and that it can be alleviated at the batch level. Finally, we train Faster R-CNN using our pRoI generator and, compared to conventional training, obtain better or on-par performance for low IoUs and significant improvements when trained for higher IoUs for Pascal VOC and MS COCO datasets. The code is available at: https://github.com/kemaloksuz/BoundingBoxGenerator.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Object Detection"], "method": ["RPN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["COCO minival"], "metric": ["AP50", "box AP", "oLRP"], "title": "Generating Positive Bounding Boxes for Balanced Training of Object Detectors"} {"abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Tokenizers", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Abstractive Text Summarization", "Common Sense Reasoning", "Coreference Resolution", "Document Summarization", "Linguistic Acceptability", "Machine Translation", "Natural Language Inference", "Question Answering", "Semantic Textual Similarity", "Sentiment Analysis", "Text Classification", "Transfer Learning", "Word Sense Disambiguation"], "method": ["Inverse Square Root Schedule", "Layer Normalization", "Byte Pair Encoding", "GLU", "Gated Linear Unit", "BPE", "Adafactor", "Multi-Head Attention", "Attention Dropout", "SentencePiece", "Softmax", "T5", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "SST-2 Binary classification", "Winograd Schema Challenge", "CommitmentBank", "WMT2014 English-German", "WMT2014 English-French", "SQuAD1.1 dev", "BoolQ", "Words in Context", "STS Benchmark", "WNLI", "CoLA", "QNLI", "COPA", "ReCoRD", "MultiRC", "CNN / Daily Mail", "RTE", "MRPC", "Quora Question Pairs"], "metric": ["Acc", "Pearson Correlation", "ROUGE-1", "Spearman Correlation", "Matched", "ROUGE-2", "ROUGE-L", "F1a", "BLEU score", "F1", "Mismatched", "EM", "Accuracy"], "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"} {"abstract": "We present AVOD, an Aggregate View Object Detection network for autonomous\ndriving scenarios. The proposed neural network architecture uses LIDAR point\nclouds and RGB images to generate features that are shared by two subnetworks:\na region proposal network (RPN) and a second stage detector network. The\nproposed RPN uses a novel architecture capable of performing multimodal feature\nfusion on high resolution feature maps to generate reliable 3D object proposals\nfor multiple object classes in road scenes. Using these proposals, the second\nstage detection network performs accurate oriented 3D bounding box regression\nand category classification to predict the extents, orientation, and\nclassification of objects in 3D space. Our proposed architecture is shown to\nproduce state of the art results on the KITTI 3D object detection benchmark\nwhile running in real time with a low memory footprint, making it a suitable\ncandidate for deployment on autonomous vehicles. Code is at:\nhttps://github.com/kujason/avod", "field": ["Region Proposal"], "task": ["3D Object Detection", "Autonomous Driving", "Autonomous Vehicles", "Object Detection", "Region Proposal", "Regression"], "method": ["Region Proposal Network", "RPN"], "dataset": ["KITTI Cars Hard", "KITTI Pedestrians Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Pedestrians Moderate", "KITTI Cyclists Hard", "KITTI Pedestrians Easy", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "Joint 3D Proposal Generation and Object Detection from View Aggregation"} {"abstract": "Absorption imaging is the most common probing technique in experiments with ultracold atoms. The standard procedure involves the division of two frames acquired at successive exposures, one with the atomic absorption signal and one without. 
A well-known problem is the presence of residual structured noise in the final image, due to small differences between the imaging light in the two exposures. Here we solve this problem by performing absorption imaging with only a single exposure, where instead of a second exposure the reference frame is generated by an unsupervised image-completion autoencoder neural network. The network is trained on images without absorption signal such that it can infer the noise overlaying the atomic signal based only on the information in the region encircling the signal. We demonstrate our approach on data captured with a quantum degenerate Fermi gas. The average residual noise in the resulting images is below that of the standard double-shot technique. Our method simplifies the experimental sequence, reduces the hardware requirements, and can improve the accuracy of extracted physical observables. The trained network and its generating scripts are available as an open-source repository (http://absDL.github.io/).", "field": ["Generative Models"], "task": ["Image Denoising", "Physical Attribute Prediction"], "method": ["AutoEncoder"], "dataset": ["ultracold fermions Technion system, pixelfly"], "metric": ["ODRMSE"], "title": "Single-exposure absorption imaging of ultracold atoms using deep learning"} {"abstract": "Human parsing and pose estimation have recently received considerable\ninterest due to their substantial application potentials. However, the existing\ndatasets have limited numbers of images and annotations and lack a variety of\nhuman appearances and coverage of challenging cases in unconstrained\nenvironments. In this paper, we introduce a new benchmark named \"Look into\nPerson (LIP)\" that provides a significant advancement in terms of scalability,\ndiversity, and difficulty, which are crucial for future developments in\nhuman-centric analysis. This comprehensive dataset contains over 50,000\nelaborately annotated images with 19 semantic part labels and 16 body joints,\nwhich are captured from a broad range of viewpoints, occlusions, and background\ncomplexities. Using these rich annotations, we perform detailed analyses of the\nleading human parsing and pose estimation approaches, thereby obtaining\ninsights into the successes and failures of these methods. To further explore\nand take advantage of the semantic correlation of these two tasks, we propose a\nnovel joint human parsing and pose estimation network to explore efficient\ncontext modeling, which can simultaneously predict parsing and pose with\nextremely high quality. Furthermore, we simplify the network to solve human\nparsing by exploring a novel self-supervised structure-sensitive learning\napproach, which imposes human pose structures into the parsing results without\nresorting to extra supervision. 
The dataset, code and models are available at\nhttp://www.sysu-hcp.net/lip/.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Human Parsing", "Pose Estimation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["LIP val"], "metric": ["mIoU"], "title": "Look into Person: Joint Body Parsing & Pose Estimation Network and A New Benchmark"} {"abstract": "Action recognition with skeleton data is attracting more attention in computer vision. Recently, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have obtained remarkable performance. However, the computational complexity of GCN-based methods is rather high, typically over 15 GFLOPs for one action sample. Recent works even reach about 100 GFLOPs. Another shortcoming is that the receptive fields of both the spatial graph and the temporal graph are inflexible. Although some works enhance the expressiveness of the spatial graph by introducing incremental adaptive modules, their performance is still limited by regular GCN structures. In this paper, we propose a novel shift graph convolutional network (Shift-GCN) to overcome both shortcomings. Instead of using heavy regular graph convolutions, our Shift-GCN is composed of novel shift graph operations and lightweight point-wise convolutions, where the shift graph operations provide flexible receptive fields for both the spatial graph and the temporal graph. On three datasets for skeleton-based action recognition, the proposed Shift-GCN notably exceeds the state-of-the-art methods with more than 10 times lower computational complexity.", "field": ["Graph Models"], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Skeleton-Based Action Recognition With Shift Graph Convolutional Network"} {"abstract": "A major challenge faced by online social networks such as Facebook and Twitter is the remarkable rise of fake and automated bot accounts over the last few years. Some of these accounts have been reported to engage in undesirable activities such as spamming, political campaigning and spreading falsehood on the platform. We present an approach to detect bot-like behaviour among Twitter accounts by analyzing their past tweeting activity. We build upon an existing technique of analysis of Twitter accounts called Digital DNA. Digital DNA models the behaviour of Twitter accounts by encoding the post history of a user account as a sequence of characters analogous to an actual DNA sequence. In our approach, we employ a lossless compression algorithm on these Digital DNA sequences and use the compression statistics as a measure of predictability in the behaviour of a group of Twitter accounts. 
We leverage the information conveyed by the compression statistics to visually represent the posting behaviour by a simple two dimensional scatter plot and categorize the user accounts as bots and genuine users by using an off-the-shelf implementation of the logistic regression classification algorithm.", "field": ["Generalized Linear Models"], "task": ["Regression", "Twitter Bot Detection"], "method": ["Logistic Regression"], "dataset": ["MIB Datasets"], "metric": ["Accuracy"], "title": "Detecting Bot Behaviour in Social Media using Digital DNA Compression"} {"abstract": "Humans are very good at directing their visual attention toward relevant areas when they search for different types of objects. For instance, when we search for cars, we will look at the streets, not at the top of buildings. The motivation of this paper is to train a network to do the same via a multi-task learning approach. To train visual attention, we produce foreground/background segmentation labels in a semi-supervised way, using background subtraction or optical flow. Using these labels, we train an object detection model to produce foreground/background segmentation maps as well as bounding boxes while sharing most model parameters. We use those segmentation maps inside the network as a self-attention mechanism to weight the feature map used to produce the bounding boxes, decreasing the signal of non-relevant areas. We show that by using this method, we obtain a significant mAP improvement on two traffic surveillance datasets, with state-of-the-art results on both UA-DETRAC and UAVDT.", "field": ["Pose Estimation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Instance Segmentation", "Multi-Task Learning", "Object Detection"], "method": ["Convolution", "ReLU", "Residual Connection", "Hourglass Module", "Stacked Hourglass Network", "Rectified Linear Units", "Max Pooling"], "dataset": ["UAVDT", "UA-DETRAC"], "metric": ["mAP"], "title": "SpotNet: Self-Attention Multi-Task Network for Object Detection"} {"abstract": "Automatically learned quality assessment for images has recently become a hot\ntopic due to its usefulness in a wide variety of applications such as\nevaluating image capture pipelines, storage techniques and sharing media.\nDespite the subjective nature of this problem, most existing methods only\npredict the mean opinion score provided by datasets such as AVA [1] and TID2013\n[2]. Our approach differs from others in that we predict the distribution of\nhuman opinion scores using a convolutional neural network. Our architecture\nalso has the advantage of being significantly simpler than other methods with\ncomparable performance. Our proposed approach relies on the success (and\nretraining) of proven, state-of-the-art deep object recognition networks. Our\nresulting network can be used to not only score images reliably and with high\ncorrelation to human perception, but also to assist with adaptation and\noptimization of photo editing/enhancement algorithms in a photographic\npipeline. 
All this is done without the need for a \"golden\" reference image,\nconsequently allowing for single-image, semantic- and perceptually-aware,\nno-reference quality assessment.", "field": ["Discriminators"], "task": ["Aesthetics Quality Assessment", "Image Quality Assessment"], "method": ["Neural Image Assessment", "NIMA"], "dataset": ["AVA"], "metric": ["Accuracy"], "title": "NIMA: Neural Image Assessment"} {"abstract": "We introduce the concrete autoencoder, an end-to-end differentiable method\nfor global feature selection, which efficiently identifies a subset of the most\ninformative features and simultaneously learns a neural network to reconstruct\nthe input data from the selected features. Our method is unsupervised, and is\nbased on using a concrete selector layer as the encoder and using a standard\nneural network as the decoder. During the training phase, the temperature of\nthe concrete selector layer is gradually decreased, which encourages a\nuser-specified number of discrete features to be learned. During test time, the\nselected features can be used with the decoder network to reconstruct the\nremaining input features. We evaluate concrete autoencoders on a variety of\ndatasets, where they significantly outperform state-of-the-art methods for\nfeature selection and data reconstruction. In particular, on a large-scale gene\nexpression dataset, the concrete autoencoder selects a small subset of genes\nwhose expression levels can be used to impute the expression levels of the\nremaining genes. In doing so, it improves on the current widely-used\nexpert-curated L1000 landmark genes, potentially reducing measurement costs by\n20%. The concrete autoencoder can be implemented by adding just a few lines of\ncode to a standard autoencoder.", "field": ["Generative Models"], "task": ["Feature Selection"], "method": ["AutoEncoder"], "dataset": ["Mice Protein", "ISOLET", "Coil-20", "MNIST", "Fashion-MNIST", "Activity"], "metric": ["Accuracy"], "title": "Concrete Autoencoders for Differentiable Feature Selection and Reconstruction"} {"abstract": "This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. 
This system improves upon our WMT'18 submission by 4.5 BLEU points.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2019 English-German"], "metric": ["BLEU score", "SacreBLEU"], "title": "Facebook FAIR's WMT19 News Translation Task Submission"} {"abstract": "Given a training dataset composed of images and corresponding category\nlabels, deep convolutional neural networks show a strong ability in mining\ndiscriminative parts for image classification. However, deep convolutional\nneural networks trained with image-level labels only tend to focus on the most\ndiscriminative parts while missing other object parts, which could provide\ncomplementary information. In this paper, we approach this problem from a\ndifferent perspective. We build complementary parts models in a weakly\nsupervised manner to retrieve information suppressed by dominant object parts\ndetected by convolutional neural networks. Given image-level labels only, we\nfirst extract rough object instances by performing weakly supervised object\ndetection and instance segmentation using Mask R-CNN and CRF-based\nsegmentation. Then we estimate and search for the best parts model for each\nobject instance under the principle of preserving as much diversity as\npossible. In the last stage, we build a bi-directional long short-term memory\n(LSTM) network to fuse and encode the partial information of these\ncomplementary parts into a comprehensive feature for image classification.\nExperimental results indicate that the proposed method not only achieves\nsignificant improvement over our baseline models, but also outperforms\nstate-of-the-art algorithms by a large margin (6.7%, 2.8%, 5.2% respectively)\non Stanford Dogs 120, Caltech-UCSD Birds 2011-200 and Caltech 256.", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions", "Instance Segmentation Models"], "task": ["Fine-Grained Image Classification", "Image Classification", "Instance Segmentation", "Object Detection", "Semantic Segmentation", "Weakly Supervised Object Detection"], "method": ["Mask R-CNN", "Softmax", "RoIAlign", "Convolution"], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Weakly Supervised Complementary Parts Models for Fine-Grained Image Classification from the Bottom Up"} {"abstract": "A deep learning architecture is proposed to predict graspable locations for robotic manipulation. It considers situations where no, one, or multiple object(s) are seen. By defining the learning problem to be classified with null hypothesis competition instead of regression, the deep neural network with red, green, blue and depth (RGB-D) image input predicts multiple grasp candidates for a single object or multiple objects, in a single shot. The method outperforms state-of-the-art approaches on the Cornell dataset with 96.0% and 96.1% accuracy on imagewise and object-wise splits, respectively. Evaluation on a multiobject dataset illustrates the generalization capability of the architecture. 
Grasping experiments achieve 96.0% grasp localization and 89.0% grasping success rates on a test set of household objects. The real-time process takes less than 0.25 s from image to plan.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Regression", "Robotic Grasping"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cornell Grasp Dataset"], "metric": ["5 fold cross validation"], "title": "Real-world multiobject, multigrasp detection"} {"abstract": "We present a new method for efficient high-quality image segmentation of objects and scenes. By analogizing classical computer graphics methods for efficient rendering with over- and undersampling challenges faced in pixel labeling tasks, we develop a unique perspective of image segmentation as a rendering problem. From this vantage, we present the PointRend (Point-based Rendering) neural network module: a module that performs point-based segmentation predictions at adaptively selected locations based on an iterative subdivision algorithm. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-of-the-art models. While many concrete implementations of the general idea are possible, we show that a simple design already achieves excellent results. Qualitatively, PointRend outputs crisp object boundaries in regions that are over-smoothed by previous methods. Quantitatively, PointRend yields significant gains on COCO and Cityscapes, for both instance and semantic segmentation. PointRend's efficiency enables output resolutions that are otherwise impractical in terms of memory or computation compared to existing approaches. Code has been made available at https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend.", "field": ["Image Data Augmentation", "Semantic Segmentation Models", "Regularization", "Semantic Segmentation Modules", "Learning Rate Schedules", "Feature Extractors", "Stochastic Optimization", "Activation Functions", "Output Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Instance Segmentation Models"], "task": ["Instance Segmentation", "Semantic Segmentation"], "method": ["Weight Decay", "Dilated Convolution", "Random Scaling", "1x1 Convolution", "RoIAlign", "PointRend", "Random Horizontal Flip", "Convolution", "ReLU", "FPN", "Dense Connections", "Feedforward Network", "Panoptic FPN", "Batch Normalization", "Sigmoid Activation", "Atrous Spatial Pyramid Pooling", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "DeepLabv3", "Group Normalization", "ASPP", "Mask R-CNN", "Linear Warmup", "Rectified Linear Units", "Spatial Pyramid Pooling"], "dataset": ["Cityscapes val"], "metric": ["mIoU"], "title": "PointRend: Image Segmentation as Rendering"} {"abstract": "Graph-to-text generation aims to generate fluent texts from graph-based data. In this paper, we investigate two recently proposed pretrained language models (PLMs) and analyze the impact of different task-adaptive pretraining strategies for PLMs in graph-to-text generation. 
We present a study across three graph domains: meaning representations, Wikipedia knowledge graphs (KGs) and scientific KGs. We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further. In particular, we report new state-of-the-art BLEU scores of 49.72 on LDC2017T10, 59.70 on WebNLG, and 25.66 on AGENDA datasets - a relative improvement of 31.8%, 4.5%, and 42.4%, respectively. In an extensive analysis, we identify possible reasons for the PLMs' success on graph-to-text tasks. We find evidence that their knowledge about true facts helps them perform well even when the input graph representation is reduced to a simple bag of node and edge labels.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Tokenizers", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["AMR-to-Text Generation", "Data-to-Text Generation", "KB-to-Language Generation", "Knowledge Graphs", "Text Generation"], "method": ["GLU", "Adam", "T5", "Scaled Dot-Product Attention", "SentencePiece", "Gaussian Linear Error Units", "Inverse Square Root Schedule", "Adafactor", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "Byte Pair Encoding", "BPE", "Gated Linear Unit", "Softmax", "Multi-Head Attention", "Attention Dropout", "Dropout", "BART"], "dataset": ["LDC2017T10", "WebNLG Full", "AGENDA", "WebNLG"], "metric": ["BLEU"], "title": "Investigating Pretrained Language Models for Graph-to-Text Generation"} {"abstract": "This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be fine-tuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models. The unified VLP model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in what context the prediction conditions on. This is controlled by utilizing specific self-attention masks for the shared transformer network. To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. 
The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Image Captioning", "Question Answering", "Text Generation", "Visual Question Answering"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["VQA v2 test-std", "COCO Captions test", "Flickr30k Captions test"], "metric": ["overall", "METEOR", "CIDEr", "SPICE", "BLEU-4"], "title": "Unified Vision-Language Pre-Training for Image Captioning and VQA"} {"abstract": "Recognizing a piece of writing as a poem or prose is usually easy for the majority of people; however, only specialists can determine which meter a poem belongs to. In this paper, we build Recurrent Neural Network (RNN) models that can classify poems according to their meters from plain text. The input text is encoded at the character level and directly fed to the models without feature handcrafting. This is a step forward for machine understanding and synthesis of languages in general, and the Arabic language in particular. Among the 16 poem meters of Arabic and the 4 meters of English, the networks were able to correctly classify poems with overall accuracies of 96.38\\% and 82.31\\%, respectively. The poem datasets used to conduct this research were massive, comprising over 1.5 million verses, and were crawled from various nontechnical sources, mostly Arabic and English literature sites, in heterogeneous and unstructured formats. These datasets are now made publicly available in a clean, structured, and documented format for future research. To the best of the authors' knowledge, this research is the first to address classifying poem meters with a machine learning approach in general, and with a featureless RNN-based approach in particular. In addition, the dataset is the first publicly available dataset ready for future computational research.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Poem meters classification"], "method": ["Gated Recurrent Unit", "Long Short-Term Memory", "BiLSTM", "BiGRU", "Tanh Activation", "Bidirectional LSTM", "LSTM", "GRU", "Sigmoid Activation", "Bidirectional GRU"], "dataset": ["PCD"], "metric": ["Accuracy"], "title": "Learning meters of Arabic and English poems with Recurrent Neural Networks: a step forward for language understanding and synthesis"} {"abstract": "In this paper, we explore vector quantization for acoustic unit discovery. Leveraging unlabelled data, we aim to learn discrete representations of speech that separate phonetic content from speaker-specific details. We propose two neural models to tackle this challenge - both use vector quantization to map continuous features to a finite set of codes. The first model is a type of vector-quantized variational autoencoder (VQ-VAE). The VQ-VAE encodes speech into a sequence of discrete units before reconstructing the audio waveform. Our second model combines vector quantization with contrastive predictive coding (VQ-CPC). 
The idea is to learn a representation of speech by predicting future acoustic units. We evaluate the models on English and Indonesian data for the ZeroSpeech 2020 challenge. In ABX phone discrimination tests, both models outperform all submissions to the 2019 and 2020 challenges, with a relative improvement of more than 30%. The models also perform competitively on a downstream voice conversion task. Of the two, VQ-CPC performs slightly better in general and is simpler and faster to train. Finally, probing experiments show that vector quantization is an effective bottleneck, forcing the models to discard speaker information.", "field": ["Generative Models", "Self-Supervised Learning", "Loss Functions"], "task": ["Acoustic Unit Discovery", "Voice Conversion"], "method": ["VQ-VAE", "InfoNCE", "AutoEncoder", "Contrastive Predictive Coding"], "dataset": ["ZeroSpeech 2019 English"], "metric": ["Speaker Similarity", "ABX-across"], "title": "Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge"} {"abstract": "Capturing the composition patterns of relations is a vital task in knowledge graph completion. It also serves as a fundamental step towards multi-hop reasoning over learned knowledge. Previously, rotation-based translational methods, e.g., RotatE, have been developed to model composite relations using the product of a series of complex-valued diagonal matrices. However, RotatE makes several oversimplified assumptions on the composition patterns, forcing the relations to be commutative, independent from entities and fixed in scale. To tackle this problem, we have developed a novel knowledge graph embedding method, named DensE, to provide sufficient modeling capacity for complex composition patterns. In particular, our method decomposes each relation into an SO(3) group-based rotation operator and a scaling operator in the three dimensional (3-D) Euclidean space. The advantages of our method are twofold: (1) For composite relations, the corresponding diagonal relation matrices can be non-commutative and related with entity embeddings; (2) It extends the concept of RotatE to a more expressive setting with lower model complexity and preserves the direct geometrical interpretations, which reveals how relations with distinct patterns (i.e., symmetry/anti-symmetry, inversion and composition) are modeled. Experimental results on multiple benchmark knowledge graphs show that DensE outperforms the current state-of-the-art models for missing link prediction, especially on composite relations.", "field": ["Graph Embeddings", "Negative Sampling"], "task": ["Entity Embeddings", "Graph Embedding", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction"], "method": ["Self-Adversarial Negative Sampling", "RotatE"], "dataset": ["WN18RR", "YAGO3-10", "WN18", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "DensE: An Enhanced Non-Abelian Group Representation for Knowledge Graph Embedding"} {"abstract": "The prevalent perspectives of scene text recognition are from sequence to sequence (seq2seq) and segmentation. In this paper, we propose a new perspective on scene text recognition, in which we model the scene text recognition as an image classification problem. Based on the image classification perspective, a scene text recognition model is proposed, which is named as CSTR. 
The CSTR model consists of a series of convolutional layers and a global average pooling layer at the end, followed by independent multi-class classification heads, each of which predicts the corresponding character of the word sequence in the input image. The CSTR model is easy to train using parallel cross entropy losses. CSTR is as simple as image classification models like ResNet \\cite{he2016deep}, which makes it easy to implement, and the fully convolutional neural network architecture makes it efficient to train and deploy. We demonstrate the effectiveness of the classification perspective on scene text recognition with thorough experiments. Furthermore, CSTR achieves nearly state-of-the-art performance on six public benchmarks, including both regular and irregular text. The code will be available at https://github.com/Media-Smart/vedastr.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Multi-class Classification", "Scene Text", "Scene Text Recognition"], "method": ["ResNet", "Average Pooling", "Residual Block", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ICDAR2013", "ICDAR2015", "ICDAR 2003", "SVT"], "metric": ["Accuracy"], "title": "CSTR: A Classification Perspective on Scene Text Recognition"} {"abstract": "Major winning Convolutional Neural Networks (CNNs), such as VGGNet, ResNet,\nDenseNet, \\etc, include tens to hundreds of millions of parameters, which\nimpose considerable computation and memory overheads. This limits their\npractical usage in training and optimizing for real-world applications. On the\ncontrary, light-weight architectures, such as SqueezeNet, are being proposed to\naddress this issue. However, they mainly suffer from low accuracy, as they have\ncompromised between the processing power and efficiency. These inefficiencies\nmostly stem from following an ad-hoc designing procedure. In this work, we\ndiscuss and propose several crucial design principles for an efficient\narchitecture design and elaborate intuitions concerning different aspects of\nthe design procedure. Furthermore, we introduce a new layer called {\\it\nSAF-pooling} to improve the generalization power of the network while keeping\nit simple by choosing best features. Based on such principles, we propose a\nsimple architecture called {\\it SimpNet}. We empirically show that SimpNet\nprovides a good trade-off between the computation/memory efficiency and the\naccuracy solely based on these primitive but crucial principles. SimpNet\noutperforms the deeper and more complex architectures such as VGGNet, ResNet,\nWideResidualNet \\etc, on several well-known benchmarks, while having 2 to 25\ntimes fewer parameters and operations. 
We obtain state-of-the-art\nresults (in terms of a balance between the accuracy and the number of involved\nparameters) on standard datasets, such as CIFAR10, CIFAR100, MNIST and SVHN.\nThe implementations are available at\n\\href{url}{https://github.com/Coderx7/SimpNet}.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "SqueezeNet", "Convolution", "ReLU", "Residual Connection", "Fire Module", "Batch Normalization", "Xavier Initialization", "Residual Network", "Kaiming Initialization", "Softmax", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Towards Principled Design of Deep Convolutional Networks: Introducing SimpNet"} {"abstract": "Advanced methods of applying deep learning to structured data such as graphs have been proposed in recent years. In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs. The method of generalizing the convolution operation to graphs has been proven to improve performance and is widely used. However, the method of applying downsampling to graphs is still difficult to perform and has room for improvement. In this paper, we propose a graph pooling method based on self-attention. Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.", "field": ["Convolutions"], "task": ["Graph Classification"], "method": ["Convolution"], "dataset": ["NCI109", "PROTEINS", "D&D", "NCI1", "FRANKENSTEIN"], "metric": ["Accuracy"], "title": "Self-Attention Graph Pooling"} {"abstract": "Graph convolutional networks (GCNs) have recently become one of the most\npowerful tools for graph analytics tasks in numerous applications, ranging from\nsocial networks and natural language processing to bioinformatics and\nchemoinformatics, thanks to their ability to capture the complex relationships\nbetween concepts. At present, the vast majority of GCNs use a neighborhood\naggregation framework to learn a continuous and compact vector, then performing\na pooling operation to generalize graph embedding for the classification task.\nThese approaches have two disadvantages in the graph classification task:\n(1)when only the largest sub-graph structure ($k$-hop neighbor) is used for\nneighborhood aggregation, a large amount of early-stage information is lost\nduring the graph convolution step; (2) simple average/sum pooling or max\npooling utilized, which loses the characteristics of each node and the topology\nbetween nodes. In this paper, we propose a novel framework called, dual\nattention graph convolutional networks (DAGCN) to address these problems. 
DAGCN\nautomatically learns the importance of neighbors at different hops using a\nnovel attention graph convolution layer, and then employs a second attention\ncomponent, a self-attention pooling layer, to generalize the graph\nrepresentation from the various aspects of a matrix graph embedding. The dual\nattention network is trained in an end-to-end manner for the graph\nclassification task. We compare our model with state-of-the-art graph kernels\nand other deep learning methods. The experimental results show that our\nframework not only outperforms other baselines but also achieves a better rate\nof convergence.", "field": ["Convolutions"], "task": ["Graph Classification", "Graph Embedding"], "method": ["Convolution"], "dataset": ["ENZYMES", "PROTEINS", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "DAGCN: Dual Attention Graph Convolutional Networks"} {"abstract": "Machine learning systems have received much attention recently for their ability to achieve expert-level performance on clinical tasks, particularly in medical imaging. Here, we examine the extent to which state-of-the-art deep learning classifiers trained to yield diagnostic labels from X-ray images are biased with respect to protected attributes. We train convolution neural networks to predict 14 diagnostic labels in 3 prominent public chest X-ray datasets: MIMIC-CXR, Chest-Xray8, CheXpert, as well as a multi-site aggregation of all those datasets. We evaluate the TPR disparity -- the difference in true positive rates (TPR) -- among different protected attributes such as patient sex, age, race, and insurance type as a proxy for socioeconomic status. We demonstrate that TPR disparities exist in the state-of-the-art classifiers in all datasets, for all clinical tasks, and all subgroups. A multi-source dataset corresponds to the smallest disparities, suggesting one way to reduce bias. We find that TPR disparities are not significantly correlated with a subgroup's proportional disease burden. As clinical models move from papers to products, we encourage clinical decision makers to carefully audit for algorithmic disparities prior to deployment. Our code can be found at, https://github.com/LalehSeyyed/CheXclusion", "field": ["Convolutions"], "task": ["Fairness", "Medical Diagnosis", "Multi-Label Classification", "Multi-Label Learning"], "method": ["Convolution"], "dataset": ["ChestX-ray14", "CheXpert", "MIMIC-CXR"], "metric": ["Average AUC on 14 label"], "title": "CheXclusion: Fairness gaps in deep chest X-ray classifiers"} {"abstract": "Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually focus on training the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurring, there exists other valuable lexical, syntactic and semantic information in training corpora, such as named entity, semantic closeness and discourse relations. In order to extract to the fullest extent, the lexical, syntactic and semantic information from training corpora, we propose a continual pre-training framework named ERNIE 2.0 which builds and learns incrementally pre-training tasks through constant multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks including English tasks on GLUE benchmarks and several common tasks in Chinese. 
The source codes and pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Tokenizers", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Chinese Named Entity Recognition", "Chinese Reading Comprehension", "Chinese Sentence Pair Classification", "Chinese Sentiment Analysis", "Linguistic Acceptability", "Multi-Task Learning", "Named Entity Recognition", "Natural Language Inference", "Open-Domain Question Answering", "Question Answering", "Semantic Textual Similarity", "Sentiment Analysis"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "SentencePiece", "Gaussian Linear Error Units", "XLNet", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT"], "dataset": ["MultiNLI", "XNLI Chinese Dev", "SST-2 Binary classification", "MSRA", "Quora Question Pairs", "MSRA Dev", "RTE", "WNLI", "MRPC", "STS Benchmark", "CoLA", "DuReader", "QNLI", "XNLI Chinese"], "metric": ["Pearson Correlation", "Matched", "F1", "Accuracy", "Mismatched", "EM"], "title": "ERNIE 2.0: A Continual Pre-training Framework for Language Understanding"} {"abstract": "Point clouds are among the popular geometry representations for 3D vision applications. However, without regular structures like 2D images, processing and summarizing information over these unordered data points are very challenging. Although a number of previous works attempt to analyze point clouds and achieve promising performances, their performances would degrade significantly when data variations like shift and scale changes are presented. In this paper, we propose 3D Graph Convolution Networks (3D-GCN), which is designed to extract local 3D features from point clouds across scales, while shift and scale-invariance properties are introduced. The novelty of our 3D-GCN lies in the definition of learnable kernels with a graph max-pooling mechanism. We show that 3D-GCN can be applied to 3D classification and segmentation tasks, with ablation studies and visualizations verifying the design of 3D-GCN.\r", "field": ["Convolutions"], "task": ["3D Classification", "3D Part Segmentation"], "method": ["Convolution"], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "Convolution in the Cloud: Learning Deformable Kernels in 3D Graph Convolution Networks for Point Cloud Analysis"} {"abstract": "A major challenge in Entity Linking (EL) is making effective use of\ncontextual information to disambiguate mentions to Wikipedia that might refer\nto different entities in different contexts. The problem exacerbates with\ncross-lingual EL which involves linking mentions written in non-English\ndocuments to entries in the English Wikipedia: to compare textual clues across\nlanguages we need to compute similarity between textual fragments across\nlanguages. In this paper, we propose a neural EL model that trains fine-grained\nsimilarities and dissimilarities between the query and candidate document from\nmultiple perspectives, combined with convolution and tensor networks. 
Further,\nwe show that this English-trained system can be applied, in zero-shot learning,\nto other languages by making surprisingly effective use of multi-lingual\nembeddings. The proposed system has strong empirical evidence yielding\nstate-of-the-art results in English as well as cross-lingual: Spanish and\nChinese TAC 2015 datasets.", "field": ["Convolutions"], "task": ["Cross-Lingual Entity Linking", "Entity Disambiguation", "Entity Linking", "Tensor Networks", "Zero-Shot Learning"], "method": ["Convolution"], "dataset": ["TAC2010", "AIDA-CoNLL"], "metric": ["Micro Precision", "In-KB Accuracy"], "title": "Neural Cross-Lingual Entity Linking"} {"abstract": "In this paper, we proposed a sentence encoding-based model for recognizing\ntext entailment. In our approach, the encoding of sentence is a two-stage\nprocess. Firstly, average pooling was used over word-level bidirectional LSTM\n(biLSTM) to generate a first-stage sentence representation. Secondly, attention\nmechanism was employed to replace average pooling on the same sentence for\nbetter representations. Instead of using target sentence to attend words in\nsource sentence, we utilized the sentence's first-stage representation to\nattend words appeared in itself, which is called \"Inner-Attention\" in our paper\n. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus\nhas proved the effectiveness of \"Inner-Attention\" mechanism. With less number\nof parameters, our model outperformed the existing best sentence encoding-based\napproach by a large margin.", "field": ["Pooling Operations"], "task": ["Natural Language Inference"], "method": ["Average Pooling"], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention"} {"abstract": "Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. 
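The two-stage "Inner-Attention" encoding described in the record above can be sketched in NumPy as follows, assuming a bilinear scoring form; the original model's exact attention parameterization may differ, and inner_attention is an illustrative name.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def inner_attention(H, W):
    # H: (T, d) BiLSTM hidden states for one sentence, W: (d, d) attention weights
    r1 = H.mean(axis=0)             # stage 1: average pooling over time
    scores = H @ W @ r1             # attend the sentence's own words with r1
    alpha = softmax(scores)         # (T,) attention distribution
    return alpha @ H                # stage 2: attention-weighted representation

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 8))     # 6 time steps, hidden size 8
W = rng.standard_normal((8, 8))
print(inner_attention(H, W).shape)  # (8,)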
We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Medical Named Entity Recognition", "Medical Relation Extraction", "Named Entity Recognition", "Question Answering", "Relation Extraction", "Sentence Classification"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["ChemProt", "NCBI-disease", "JNLPBA"], "metric": ["F1"], "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining"} {"abstract": "One-stage detector basically formulates object detection as dense classification and localization. The classification is usually optimized by Focal Loss and the box location is commonly learned under Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification and localization. Two problems are discovered in existing practices, including (1) the inconsistent usage of the quality estimation and classification between training and inference and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty in complex scenes. To address the problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which is beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL) that generalizes Focal Loss from its discrete form to the continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0\\% AP using ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5\\%) and ATSS (43.6\\%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model can achieve a single-model single-scale AP of 48.2\\%, at 10 FPS on a single 2080Ti GPU. 
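For the Generalized Focal Loss record above, the sketch below writes out binary focal loss and the quality-focal-loss form with a continuous target y in [0, 1], following the published formulas; treat it as an illustration rather than the reference implementation.

import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-9):
    # binary focal loss; p is the predicted probability of the positive class
    p_t = np.where(y == 1, p, 1 - p)
    a_t = np.where(y == 1, alpha, 1 - alpha)
    return -(a_t * (1 - p_t) ** gamma * np.log(p_t + eps)).mean()

def quality_focal_loss(sigma, y, beta=2.0, eps=1e-9):
    # continuous target y in [0, 1] (e.g. IoU-aware quality), sigma = sigmoid output
    ce = (1 - y) * np.log(1 - sigma + eps) + y * np.log(sigma + eps)
    return -(np.abs(y - sigma) ** beta * ce).mean()

p = np.array([0.9, 0.2, 0.6])
print(focal_loss(p, np.array([1, 0, 1])))
print(quality_focal_loss(p, np.array([0.95, 0.0, 0.4])))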
Code and models are available at https://github.com/implus/GFocal.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Prioritized Sampling", "Skip Connections", "Skip Connection Blocks"], "task": ["Dense Object Detection", "Object Detection"], "method": ["ResNeXt Block", "Average Pooling", "Adaptive Training Sample Selection", "Generalized Focal Loss", "Grouped Convolution", "ResNeXt", "Focal Loss", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "ATSS", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Deformable Convolution"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection"} {"abstract": "Unlike ReLU, newer activation functions (like Swish, H-swish, Mish) that are frequently employed in popular efficient architectures can also result in negative activation values, with skewed positive and negative ranges. Typical learnable quantization schemes [PACT, LSQ] assume unsigned quantization for activations and quantize all negative activations to zero which leads to significant loss in performance. Naively using signed quantization to accommodate these negative values requires an extra sign bit which is expensive for low-bit (2-, 3-, 4-bit) quantization. To solve this problem, we propose LSQ+, a natural extension of LSQ, wherein we introduce a general asymmetric quantization scheme with trainable scale and offset parameters that can learn to accommodate the negative activations. Gradient-based learnable quantization schemes also commonly suffer from high instability or variance in the final training performance, hence requiring a great deal of hyper-parameter tuning to reach a satisfactory performance. LSQ+ alleviates this problem by using an MSE-based initialization scheme for the quantization parameters. We show that this initialization leads to significantly lower variance in final performance across multiple training runs. Overall, LSQ+ shows state-of-the-art results for EfficientNet and MixNet and also significantly outperforms LSQ for low-bit quantization of neural nets with Swish activations (e.g.: 1.8% gain with W4A4 quantization and upto 5.6% gain with W2A2 quantization of EfficientNet-B0 on ImageNet dataset). 
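The asymmetric quantizer with learnable scale and offset described in the LSQ+ record above roughly amounts to the forward pass below; this is a sketch only (the straight-through gradient estimators for s and beta that the method relies on are omitted, and lsq_plus_forward is an illustrative name).

import numpy as np

def lsq_plus_forward(x, s, beta, n_bits=4, signed=False):
    # asymmetric quantization with learnable scale s and offset beta (forward only)
    if signed:
        q_min, q_max = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    else:
        q_min, q_max = 0, 2 ** n_bits - 1
    q = np.clip(np.round((x - beta) / s), q_min, q_max)
    return q * s + beta              # dequantized ("fake-quantized") activations

x = np.array([-0.7, -0.1, 0.0, 0.3, 1.2])   # Swish-like activations can be negative
print(lsq_plus_forward(x, s=0.1, beta=-0.8, n_bits=4))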
To the best of our knowledge, ours is the first work to quantize such architectures to extremely low bit-widths.", "field": ["Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification", "Quantization"], "method": ["Depthwise Convolution", "MixConv", "Average Pooling", "EfficientNet", "RMSProp", "1x1 Convolution", "Convolution", "ReLU", "Dense Connections", "MixNet", "Swish", "Grouped Convolution", "Batch Normalization", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Mixed Depthwise Convolution", "Sigmoid Activation", "Inverted Residual Block", "Dropout", "Depthwise Separable Convolution", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Accuracy (%)"], "title": "LSQ+: Improving low-bit quantization through learnable offsets and better initialization"} {"abstract": "We present a novel language representation model enhanced by knowledge called\nERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the\nmasking strategy of BERT, ERNIE is designed to learn language representation\nenhanced by knowledge masking strategies, which includes entity-level masking\nand phrase-level masking. Entity-level strategy masks entities which are\nusually composed of multiple words.Phrase-level strategy masks the whole phrase\nwhich is composed of several words standing together as a conceptual\nunit.Experimental results show that ERNIE outperforms other baseline methods,\nachieving new state-of-the-art results on five Chinese natural language\nprocessing tasks including natural language inference, semantic similarity,\nnamed entity recognition, sentiment analysis and question answering. We also\ndemonstrate that ERNIE has more powerful knowledge inference capacity on a\ncloze test.", "field": ["Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Chinese Named Entity Recognition", "Chinese Sentence Pair Classification", "Chinese Sentiment Analysis", "Named Entity Recognition", "Natural Language Inference", "Question Answering", "Semantic Similarity", "Semantic Textual Similarity", "Sentiment Analysis"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MSRA", "XNLI Chinese Dev", "XNLI Chinese", "MSRA Dev"], "metric": ["F1", "Accuracy"], "title": "ERNIE: Enhanced Representation through Knowledge Integration"} {"abstract": "The pre-dominant approach to language modeling to date is based on recurrent\nneural networks. Their success on this task is often linked to their ability to\ncapture unbounded context. In this paper we develop a finite context approach\nthrough stacked convolutions, which can be more efficient since they allow\nparallelization over sequential tokens. We propose a novel simplified gating\nmechanism that outperforms Oord et al (2016) and investigate the impact of key\narchitectural decisions. 
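The simplified gating mechanism in the gated convolutional language model record above is the gated linear unit, h = (X W + b) * sigmoid(X V + c); a minimal sketch follows, with the causal convolution replaced by a plain linear map for brevity.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(X, W, b, V, c):
    # gated linear unit: one projection carries content, the other gates it
    return (X @ W + b) * sigmoid(X @ V + c)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))        # 5 tokens, model width 16
W, V = rng.standard_normal((2, 16, 16))
b, c = np.zeros(16), np.zeros(16)
print(glu(X, W, b, V, c).shape)         # (5, 16)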
The proposed approach achieves state-of-the-art on the\nWikiText-103 benchmark, even though it features long-term dependencies, as well\nas competitive results on the Google Billion Words benchmark. Our model reduces\nthe latency to score a sentence by an order of magnitude compared to a\nrecurrent baseline. To our knowledge, this is the first time a non-recurrent\napproach is competitive with strong recurrent models on these large scale\nlanguage tasks.", "field": ["Temporal Convolutions", "Initialization", "Output Functions", "Stochastic Optimization", "Activation Functions", "Language Models", "Optimization", "Convolutions", "Feedforward Networks", "Skip Connections"], "task": ["Language Modelling"], "method": ["Gated Convolution", "GLU", "Adaptive Softmax", "Gated Linear Unit", "Gated Convolution Network", "Convolution", "1x1 Convolution", "Residual Connection", "Linear Layer", "Gradient Clipping", "Kaiming Initialization", "Nesterov Accelerated Gradient"], "dataset": ["WikiText-103", "One Billion Word"], "metric": ["PPL", "Validation perplexity", "Test perplexity"], "title": "Language Modeling with Gated Convolutional Networks"} {"abstract": "Despite the substantial progress made in deep learning in recent years, advanced approaches remain computationally intensive. The trade-off between accuracy and computation time and energy limits their use in real-time applications on low power and other resource-constrained systems. In this paper, we tackle this fundamental challenge by introducing a hybrid optical-digital implementation of a convolutional neural network (CNN) based on engineering of the point spread function (PSF) of an optical imaging system. This is done by coding an imaging aperture such that its PSF replicates a large convolution kernel of the first layer of a pre-trained CNN. As the convolution takes place in the optical domain, it has zero cost in terms of energy consumption and has zero latency independent of the kernel size. Experimental results on two datasets demonstrate that our approach yields more than two orders of magnitude reduction in the computational cost while achieving near-state-of-the-art accuracy, or equivalently, better accuracy at the same computational cost.\r", "field": ["Convolutions"], "task": ["Hand-Gesture Recognition", "Image Classification"], "method": ["Convolution"], "dataset": ["EMNIST-Digits", "InAirGestures", "EMNIST-Letters", "EMNIST-Balanced"], "metric": ["Accuracy (%)", "Accuracy"], "title": "Efficient Neural Vision Systems Based on Convolutional Image Acquisition"} {"abstract": "In this work we introduce a fully end-to-end approach for action detection in\nvideos that learns to directly predict the temporal bounds of actions. Our\nintuition is that the process of detecting actions is naturally one of\nobservation and refinement: observing moments in video, and refining hypotheses\nabout when an action is occurring. Based on this insight, we formulate our\nmodel as a recurrent neural network-based agent that interacts with a video\nover time. The agent observes video frames and decides both where to look next\nand when to emit a prediction. Since backpropagation is not adequate in this\nnon-differentiable setting, we use REINFORCE to learn the agent's decision\npolicy. 
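The REINFORCE update used by the frame-glimpse agent record above is the score-function gradient; a toy NumPy version for a linear categorical policy is sketched below, illustrative only, with no baseline or variance reduction.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, state, action, reward):
    # gradient of log pi(action | state) scaled by the return
    probs = softmax(theta @ state)          # linear policy over actions
    grad_logits = -probs
    grad_logits[action] += 1.0              # d log softmax / d logits
    return reward * np.outer(grad_logits, state)

rng = np.random.default_rng(0)
theta = rng.standard_normal((3, 4))         # 3 actions, 4 state features
state = rng.standard_normal(4)
g = reinforce_grad(theta, state, action=1, reward=2.0)
theta += 0.01 * g                           # one ascent step on expected return
print(g.shape)                              # (3, 4)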
Our model achieves state-of-the-art results on the THUMOS'14 and\nActivityNet datasets while observing only a fraction (2% or less) of the video\nframes.", "field": ["Policy Gradient Methods"], "task": ["Action Detection"], "method": ["REINFORCE"], "dataset": ["THUMOS\u201914"], "metric": ["mAP@0.2", "mAP@0.3", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP@0.4", "mAP@0.1", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.1"], "title": "End-to-end Learning of Action Detection from Frame Glimpses in Videos"} {"abstract": "The latest deep learning-based approaches have shown promising results for the challenging task of inpainting missing regions of an image. However, the existing methods often generate contents with blurry textures and distorted structures due to the discontinuity of the local pixels. From a semantic-level perspective, the local pixel discontinuity is mainly because these methods ignore the semantic relevance and feature continuity of hole regions. To handle this problem, we investigate the human behavior in repairing pictures and propose a fined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which can not only preserve contextual structure but also make more effective predictions of missing parts by modeling the semantic relevance between the holes features. The task is divided into rough, refinement as two steps and model each step with a neural network under the U-Net architecture, where the CSA layer is embedded into the encoder of refinement step. To stabilize the network training process and promote the CSA layer to learn more effective parameters, we propose a consistency loss to enforce the both the CSA layer and the corresponding layer of the CSA in decoder to be close to the VGG feature layer of a ground truth image simultaneously. The experiments on CelebA, Places2, and Paris StreetView datasets have validated the effectiveness of our proposed methods in image inpainting tasks and can obtain images with a higher quality as compared with the existing state-of-the-art approaches.", "field": ["Semantic Segmentation Models", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections"], "task": ["Image Inpainting"], "method": ["U-Net", "VGG", "Softmax", "Concatenated Skip Connection", "Convolution", "Rectified Linear Units", "ReLU", "Dropout", "Dense Connections", "Max Pooling"], "dataset": ["Paris StreetView"], "metric": ["40-50% Mask PSNR", "20-30% Mask PSNR", "30-40% Mask PSNR", "10-20% Mask PSNR"], "title": "Coherent Semantic Attention for Image Inpainting"} {"abstract": "We present a new approach for pretraining a bi-directional transformer model\nthat provides significant performance gains across a variety of language\nunderstanding problems. Our model solves a cloze-style word reconstruction\ntask, where each word is ablated and must be predicted given the rest of the\ntext. Experiments demonstrate large performance gains on GLUE and new state of\nthe art results on NER as well as constituency parsing benchmarks, consistent\nwith the concurrently introduced BERT model. 
We also present a detailed\nanalysis of a number of factors that contribute to effective pretraining,\nincluding data domain and size, model capacity, and variations on the cloze\nobjective.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Constituency Parsing", "Named Entity Recognition", "Sentiment Analysis", "Text Classification"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["SST-2 Binary classification", "CoNLL 2003 (English)", "Penn Treebank"], "metric": ["F1", "F1 score", "Accuracy"], "title": "Cloze-driven Pretraining of Self-attention Networks"} {"abstract": "Generalization and robustness are both key desiderata for designing machine learning methods. Adversarial training can enhance robustness, but past work often finds it hurts generalization. In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. However, these models are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre-training can improve both generalization and robustness. We propose a general algorithm ALUM (Adversarial training for large neural LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximizes the adversarial loss. We present the first comprehensive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. Even for models that have been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce significant gains from continual pre-training, whereas conventional non-adversarial methods can not. ALUM can be further combined with task-specific fine-tuning to attain additional gains. 
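The cloze objective described in the record above ablates each word and predicts it from the rest of the text; a toy routine that enumerates such examples might look like this (cloze_examples is an illustrative helper, not code from the paper).

def cloze_examples(tokens, mask_token="[MASK]"):
    # every position becomes one training example: ablate it, keep it as the target
    examples = []
    for i, target in enumerate(tokens):
        ablated = list(tokens)
        ablated[i] = mask_token
        examples.append((ablated, i, target))
    return examples

for inp, pos, tgt in cloze_examples(["the", "cat", "sat"]):
    print(inp, "-> predict", repr(tgt), "at position", pos)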
The ALUM code is publicly available at https://github.com/namisan/mt-dnn.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Natural Language Inference", "Natural Language Understanding"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "RoBERTa", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["ANLI test"], "metric": ["ANLI", "A3", "A2", "A1"], "title": "Adversarial Training for Large Neural Language Models"} {"abstract": "Capturing document images is a common way for digitizing and recording physical documents due to the ubiquitousness of mobile cameras. To make text recognition easier, it is often desirable to digitally flatten a document image when the physical document sheet is folded or curved. In this paper, we develop the first learning-based method to achieve this goal. We propose a stacked U-Net with intermediate supervision to directly predict the forward mapping from a distorted image to its rectified version. Because large-scale real-world data with ground truth deformation is difficult to obtain, we create a synthetic dataset with approximately 100 thousand images by warping non-distorted document images. The network is trained on this dataset with various data augmentations to improve its generalization ability. We further create a comprehensive benchmark that covers various real-world conditions. We evaluate the proposed model quantitatively and qualitatively on the proposed benchmark, and compare it with previous non-learning-based methods.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Local Distortion", "MS-SSIM", "SSIM"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["DocUNet"], "metric": ["SSIM", "LD", "MS-SSIM"], "title": "DocUNet: Document Image Unwarping via a Stacked U-Net"} {"abstract": "Deep convolutional networks have achieved great success for image\nrecognition. However, for action recognition in videos, their advantage over\ntraditional methods is not so evident. We present a general and flexible\nvideo-level framework for learning action models in videos. This method, called\ntemporal segment network (TSN), aims to model long-range temporal structures\nwith a new segment-based sampling and aggregation module. This unique design\nenables our TSN to efficiently learn action models by using the whole action\nvideos. The learned models could be easily adapted for action recognition in\nboth trimmed and untrimmed videos with simple average pooling and multi-scale\ntemporal window integration, respectively. We also study a series of good\npractices for the instantiation of TSN framework given limited training\nsamples. Our approach obtains the state-the-of-art performance on four\nchallenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%),\nTHUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). 
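The segment-based sampling and average-pooling consensus of the temporal segment network record above can be sketched as below, assuming per-frame class scores are already available; tsn_consensus is an illustrative name.

import numpy as np

def tsn_consensus(frame_scores, k=3, rng=None):
    # split the video into k equal segments, sample one snippet per segment,
    # and average the per-snippet class scores (the average-pooling consensus)
    rng = rng or np.random.default_rng()
    t = len(frame_scores)
    bounds = np.linspace(0, t, k + 1).astype(int)
    picks = [rng.integers(bounds[i], bounds[i + 1]) for i in range(k)]
    return np.mean([frame_scores[p] for p in picks], axis=0)

rng = np.random.default_rng(0)
frame_scores = rng.standard_normal((30, 101))        # 30 frames, 101 action classes
print(tsn_consensus(frame_scores, k=3, rng=rng).shape)   # (101,)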
Using the proposed RGB\ndifference for motion models, our method can still achieve competitive accuracy\non UCF101 (91.0%) while running at 340 FPS. Furthermore, based on the temporal\nsegment networks, we won the video classification track at the ActivityNet\nchallenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and\nthe proposed good practices.", "field": ["Pooling Operations"], "task": ["Action Classification", "Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Temporal Action Localization", "Video Classification"], "method": ["Average Pooling"], "dataset": ["Moments in Time"], "metric": ["Top 5 Accuracy"], "title": "Temporal Segment Networks for Action Recognition in Videos"} {"abstract": "A sentence compression method using LSTM can generate fluent compressed sentences. However, the performance of this method is significantly degraded when compressing longer sentences since it does not explicitly handle syntactic features. To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states. Furthermore, to avoid the influence of incorrect parse results, we trained HiSAN by maximizing jointly the probability of a correct output with the attention distribution. Experimental results on Google sentence compression dataset showed that our method achieved the best performance on F1 as well as ROUGE-1,2 and L scores, 83.2, 82.9, 75.8 and 82.7, respectively. In human evaluation, our methods also outperformed baseline methods in both readability and informativeness.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Machine Translation", "Sentence Compression"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Google Dataset"], "metric": ["F1"], "title": "Higher-Order Syntactic Attention Network for Longer Sentence Compression"} {"abstract": "Change detection is an important task in remote sensing (RS) image analysis. It is widely used in natural disaster monitoring and assessment, land resource planning, and other fields. As a pixel-to-pixel prediction task, change detection is sensitive about the utilization of the original position information. Recent change detection methods always focus on the extraction of deep change semantic feature, but ignore the importance of shallow-layer information containing high-resolution and fine-grained features, this often leads to the uncertainty of the pixels at the edge of the changed target and the determination miss of small targets. In this letter, we propose a densely connected siamese network for change detection, namely SNUNet-CD (the combination of Siamese network and NestedUNet). SNUNet-CD alleviates the loss of localization information in the deep layers of neural network through compact information transmission between encoder and decoder, and between decoder and decoder. In addition, Ensemble Channel Attention Module (ECAM) is proposed for deep supervision. Through ECAM, the most representative features of different semantic levels can be refined and used for the final classification. 
Experimental results show that our method improves greatly on many evaluation criteria and has a better tradeoff between accuracy and calculation amount than other state-of-the-art (SOTA) change detection methods.", "field": ["Feedforward Networks", "Image Model Blocks", "Twin Networks", "Attention Mechanisms"], "task": ["Change detection for remote sensing images"], "method": ["Siamese Network", "Channel-wise Soft Attention", "Dense Connections", "Channel Attention Module"], "dataset": ["CDD Dataset (season-varying)"], "metric": ["F1-Score"], "title": "SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images"} {"abstract": "Change detection is an important task in remote sensing (RS) image analysis. With the development of deep learning and the increase of RS data, there are more and more change detection methods based on supervised learning. In this paper, we improve the semantic segmentation network UNet++ and propose a fully convolutional siamese network (Siam-NestedUNet) for change detection. We combine three types of siamese structures with UNet++ respectively to explore the impact of siamese structures on the change detection task under the condition of a backbone network with strong feature extraction capabilities. In addition, for the characteristics of multiple outputs in Siam-NestedUNet, we design a set of experiments to explore the importance level of the output at different semantic levels. According to the experimental results, our method improves greatly on a number of indicators, including precision, recall, F1-Score and overall accuracy, and has better performance than other SOTA change detection methods. Our implementation will be released at https://github.com/likyoo/Siam-NestedUNet.", "field": ["Twin Networks"], "task": ["Change detection for remote sensing images", "Semantic Segmentation"], "method": ["Siamese Network"], "dataset": ["CDD Dataset (season-varying)"], "metric": ["F1-Score"], "title": "Siamese NestedUNet Networks for Change Detection of High Resolution Satellite Image"} {"abstract": "In this work, we revisit atrous convolution, a powerful tool to explicitly\nadjust filter's field-of-view as well as control the resolution of feature\nresponses computed by Deep Convolutional Neural Networks, in the application of\nsemantic image segmentation. To handle the problem of segmenting objects at\nmultiple scales, we design modules which employ atrous convolution in cascade\nor in parallel to capture multi-scale context by adopting multiple atrous\nrates. Furthermore, we propose to augment our previously proposed Atrous\nSpatial Pyramid Pooling module, which probes convolutional features at multiple\nscales, with image-level features encoding global context and further boost\nperformance. We also elaborate on implementation details and share our\nexperience on training our system. 
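Atrous convolution, revisited in the DeepLabv3 record above, spaces the kernel taps rate positions apart to enlarge the field of view without extra parameters or loss of resolution; a 1D NumPy sketch (illustrative, without padding):

import numpy as np

def atrous_conv1d(x, w, rate=2):
    # dilated filtering: taps of w are applied to inputs spaced `rate` apart
    k = len(w)
    span = (k - 1) * rate
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)
print(atrous_conv1d(x, w=np.array([1.0, 0.0, -1.0]), rate=1))  # standard filter
print(atrous_conv1d(x, w=np.array([1.0, 0.0, -1.0]), rate=2))  # dilated, wider field of view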
The proposed `DeepLabv3' system\nsignificantly improves over our previous DeepLab versions without DenseCRF\npost-processing and attains comparable performance with other state-of-art\nmodels on the PASCAL VOC 2012 semantic image segmentation benchmark.", "field": ["Image Data Augmentation", "Semantic Segmentation Models", "Regularization", "Semantic Segmentation Modules", "Stochastic Optimization", "Learning Rate Schedules", "Initialization", "Activation Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["Weight Decay", "Average Pooling", "Polynomial Rate Decay", "Random Scaling", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Atrous Spatial Pyramid Pooling", "SGD with Momentum", "DeepLabv3", "Bottleneck Residual Block", "ASPP", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["Cityscapes val", "PASCAL VOC 2012 test", "PASCAL VOC 2012 val", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)", "mIoU"], "title": "Rethinking Atrous Convolution for Semantic Image Segmentation"} {"abstract": "Despite the blooming success of architecture search for vision tasks in resource-constrained environments, the design of on-device object detection architectures have mostly been manual. The few automated search efforts are either centered around non-mobile-friendly search spaces or not guided by on-device latency. We propose MnasFPN, a mobile-friendly search space for the detection head, and combine it with latency-aware architecture search to produce efficient object detection models. The learned MnasFPN head, when paired with MobileNetV2 body, outperforms MobileNetV3+SSDLite by 1.8 mAP at similar latency on Pixel. It is also both 1.0 mAP more accurate and 10% faster than NAS-FPNLite. Ablation studies show that the majority of the performance gain comes from innovations in the search space. Further explorations reveal an interesting coupling between the search space design and the search algorithm, and that the complexity of MnasFPN search space may be at a local optimum.", "field": ["Policy Gradient Methods", "Regularization", "Output Functions", "Feature Extractors", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Neural Architecture Search", "Image Models", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["Depthwise Convolution", "Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "Proximal Policy Optimization", "MobileNetV2", "Entropy Regularization", "Convolution", "NAS-FPN", "ReLU", "Batch Normalization", "PPO", "Pointwise Convolution", "Neural Architecture Search", "Sigmoid Activation", "Inverted Residual Block", "Softmax", "LSTM", "Depthwise Separable Convolution", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["COCO test-dev"], "metric": ["box AP"], "title": "MnasFPN: Learning Latency-aware Pyramid Architecture for Object Detection on Mobile Devices"} {"abstract": "We present simple BERT-based models for relation extraction and semantic role\nlabeling. 
In recent years, state-of-the-art performance has been achieved using\nneural models by incorporating lexical and syntactic features such as\npart-of-speech tags and dependency trees. In this paper, extensive experiments\non datasets for these two tasks show that without using any external features,\na simple BERT-based model can achieve state-of-the-art performance. To our\nknowledge, we are the first to successfully apply BERT in this manner. Our\nmodels provide strong baselines for future research.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Relation Extraction", "Semantic Role Labeling"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["TACRED"], "metric": ["F1"], "title": "Simple BERT Models for Relation Extraction and Semantic Role Labeling"} {"abstract": "In this paper, we describe our team's effort on the semantic text question similarity task of NSURL 2019. Our top performing system utilizes several innovative data augmentation techniques to enlarge the training data. Then, it takes ELMo pre-trained contextual embeddings of the data and feeds them into an ON-LSTM network with self-attention. This results in sequence representation vectors that are used to predict the relation between the question pairs. The model is ranked in the 1st place with 96.499 F1-score (same as the second place F1-score) and the 2nd place with 94.848 F1-score (differs by 1.076 F1-score from the first place) on the public and private leaderboards, respectively.", "field": ["Output Functions", "Recurrent Neural Networks", "Activation Functions", "Word Embeddings", "Bidirectional Recurrent Neural Networks"], "task": ["Data Augmentation", "Question Similarity"], "method": ["Softmax", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "ELMo", "Sigmoid Activation"], "dataset": ["Q2Q Arabic Benchmark"], "metric": ["F1 score"], "title": "Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic"} {"abstract": "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \\rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \\rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \\approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. 
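The cycle-consistency term of the CycleGAN record above, in its L1 form, is sketched below with toy stand-in mappings for the two generators; the real G and F are convolutional networks.

import numpy as np

def cycle_consistency_loss(x_batch, y_batch, G, F):
    # L_cyc = E[ ||F(G(x)) - x||_1 ] + E[ ||G(F(y)) - y||_1 ]
    fwd = np.mean(np.abs(F(G(x_batch)) - x_batch))
    bwd = np.mean(np.abs(G(F(y_batch)) - y_batch))
    return fwd + bwd

# toy stand-ins that happen to invert each other, so the loss is near zero
G = lambda x: x * 0.9 + 0.1
F = lambda y: (y - 0.1) / 0.9

rng = np.random.default_rng(0)
x, y = rng.random((2, 4, 8, 8))          # two tiny "image" batches
print(cycle_consistency_loss(x, y, G, F))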
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "field": ["Discriminators", "Stochastic Optimization", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation", "Style Transfer", "Unsupervised Image-To-Image Translation"], "method": ["Cycle Consistency Loss", "Instance Normalization", "PatchGAN", "Adam", "GAN Least Squares Loss", "Batch Normalization", "Tanh Activation", "Convolution", "ReLU", "CycleGAN", "Residual Connection", "Leaky ReLU", "Residual Block", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["Edge-to-Shoes", "vangogh2photo", "Cityscapes Photo-to-Labels", "Edge-to-Handbags", "Freiburg Forest Dataset", "photo2vangogh", "RaFD", "horse2zebra", "Cats-and-Dogs", "zebra2horse", "Cityscapes Labels-to-Photo", "EPFL NIR-VIS"], "metric": ["Number of params", "Quality", "PSNR", "Per-pixel Accuracy", "Class IOU", "Frechet Inception Distance", "Diversity", "CIS", "Per-class Accuracy", "Classification Error", "Number of Params", "IS"], "title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks"} {"abstract": "This paper addresses 2 challenging tasks: improving the quality of low\nresolution facial images and accurately locating the facial landmarks on such\npoor resolution images. To this end, we make the following 5 contributions: (a)\nwe propose Super-FAN: the very first end-to-end system that addresses both\ntasks simultaneously, i.e. both improves face resolution and detects the facial\nlandmarks. The novelty or Super-FAN lies in incorporating structural\ninformation in a GAN-based super-resolution algorithm via integrating a\nsub-network for face alignment through heatmap regression and optimizing a\nnovel heatmap loss. (b) We illustrate the benefit of training the two networks\njointly by reporting good results not only on frontal images (as in prior work)\nbut on the whole spectrum of facial poses, and not only on synthetic low\nresolution images (as in prior work) but also on real-world images. (c) We\nimprove upon the state-of-the-art in face super-resolution by proposing a new\nresidual-based architecture. (d) Quantitatively, we show large improvement over\nthe state-of-the-art for both face super-resolution and alignment. (e)\nQualitatively, we show for the first time good results on real-world low\nresolution images.", "field": ["Output Functions"], "task": ["Face Alignment", "Face Hallucination", "Image Super-Resolution", "Regression", "Super-Resolution"], "method": ["Heatmap"], "dataset": ["FFHQ 512 x 512 - 16x upscaling", "FFHQ 512 x 512 - 4x upscaling"], "metric": ["LLE", "PSNR", "FID", "FED", "MS-SSIM", "SSIM", "LPIPS", "NIQE"], "title": "Super-FAN: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with GANs"} {"abstract": "We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. 
This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\\% more accurate on ImageNet classification while reducing latency by 15\\% compared to MobileNetV2. MobileNetV3-Small is 4.6\\% more accurate while reducing latency by 5\\% compared to MobileNetV2. MobileNetV3-Large detection is 25\\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.", "field": ["Image Data Augmentation", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Feedforward Networks", "Pooling Operations", "Network Shrinking", "Image Model Blocks"], "task": ["Image Classification", "Neural Architecture Search", "Object Detection", "Semantic Segmentation"], "method": ["Weight Decay", "ReLU6", "RMSProp", "Random Horizontal Flip", "Random Resized Crop", "Step Decay", "Hard Swish", "Rectified Linear Units", "ReLU", "MobileNetV3", "NetAdapt", "Dropout", "Squeeze-and-Excitation Block", "Global Average Pooling", "Dense Connections", "Sigmoid Activation"], "dataset": ["ImageNet", "Cityscapes test"], "metric": ["Number of params", "Mean IoU (class)", "Top 1 Accuracy"], "title": "Searching for MobileNetV3"} {"abstract": "Teaching machines to read natural language documents remains an elusive\nchallenge. Machine reading systems can be tested on their ability to answer\nquestions posed on the contents of documents that they have seen, but until now\nlarge scale training and test datasets have been missing for this type of\nevaluation. In this work we define a new methodology that resolves this\nbottleneck and provides large scale supervised reading comprehension data. This\nallows us to develop a class of attention based deep neural networks that learn\nto read real documents and answer complex questions with minimal prior\nknowledge of language structure.", "field": ["Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Skip Connections"], "task": ["Reading Comprehension"], "method": ["RMSProp", "Long Short-Term Memory", "Concatenated Skip Connection", "Tanh Activation", "LSTM", "Dropout", "Deep LSTM Reader", "Sigmoid Activation"], "dataset": ["CNN / Daily Mail"], "metric": ["CNN", "Daily Mail"], "title": "Teaching Machines to Read and Comprehend"} {"abstract": "To discover powerful yet compact models is an important goal of neural architecture search. Previous two-stage one-shot approaches are limited by search space with a fixed depth. It seems handy to include an additional skip connection in the search space to make depths variable. However, it creates a large range of perturbation during supernet training and it has difficulty giving a confident ranking for subnetworks. 
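Hard-swish, listed among the MobileNetV3 record's methods above, is commonly defined as x * ReLU6(x + 3) / 6; a small sketch follows (how the activation is placed inside the blocks is not shown here).

import numpy as np

def relu6(x):
    return np.clip(x, 0.0, 6.0)

def hard_swish(x):
    # piecewise-linear approximation of swish that is cheap on mobile hardware
    return x * relu6(x + 3.0) / 6.0

x = np.linspace(-6, 6, 7)
print(np.round(hard_swish(x), 3))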
In this paper, we discover that skip connections bring about significant feature inconsistency compared with other operations, which potentially degrades the supernet performance. Based on this observation, we tackle the problem by imposing an equivariant learnable stabilizer to homogenize such disparities. Experiments show that our proposed stabilizer helps to improve the supernet's convergence as well as ranking performance. With an evolutionary search backend that incorporates the stabilized supernet as an evaluator, we derive a family of state-of-the-art architectures, the SCARLET series of several depths, especially SCARLET-A obtains 76.9% top-1 accuracy on ImageNet. The models and evaluation code are released online https://github.com/xiaomi-automl/ScarletNAS.", "field": ["Neural Architecture Search", "Image Data Augmentation", "Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Learning Rate Schedules", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Skip Connections", "Skip Connection Blocks"], "task": ["AutoML", "Image Classification", "Neural Architecture Search"], "method": ["Depthwise Convolution", "Weight Decay", "Cosine Annealing", "RMSProp", "Cutout", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "AutoAugment", "SCARLET-NAS", "Residual Connection", "Dense Connections", "Batch Normalization", "ColorJitter", "Pointwise Convolution", "Step Decay", "Sigmoid Activation", "Color Jitter", "Inverted Residual Block", "SCARLET", "LSTM", "Dropout", "Depthwise Separable Convolution"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "MACs", "Top-1 Error Rate", "Params", "Accuracy", "Top 5 Accuracy"], "title": "SCARLET-NAS: Bridging the gap between Stability and Scalability in Weight-sharing Neural Architecture Search"} {"abstract": "We present Poly-GAN, a novel conditional GAN architecture that is motivated by Fashion Synthesis, an application where garments are automatically placed on images of human models at an arbitrary pose. Poly-GAN allows conditioning on multiple inputs and is suitable for many tasks, including image alignment, image stitching, and inpainting. Existing methods have a similar pipeline where three different networks are used to first align garments with the human pose, then perform stitching of the aligned garment and finally refine the results. Poly-GAN is the first instance where a common architecture is used to perform all three tasks. Our novel architecture enforces the conditions at all layers of the encoder and utilizes skip connections from the coarse layers of the encoder to the respective layers of the decoder. Poly-GAN is able to perform a spatial transformation of the garment based on the RGB skeleton of the model at an arbitrary pose. Additionally, Poly-GAN can perform image stitching, regardless of the garment orientation, and inpainting on the garment mask when it contains irregular holes. 
Our system achieves state-of-the-art quantitative results on Structural Similarity Index metric and Inception Score metric using the DeepFashion dataset.", "field": ["Generative Models", "Convolutions"], "task": ["Fashion Synthesis", "Image Stitching", "Image-to-Image Translation", "Virtual Try-on"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Deep-Fashion"], "metric": ["SSIM", "IS"], "title": "Poly-GAN: Multi-Conditioned GAN for Fashion Synthesis"} {"abstract": "The standard approach to image instance segmentation is to perform the object\ndetection first, and then segment the object from the detection bounding-box.\nMore recently, deep learning methods like Mask R-CNN perform them jointly.\nHowever, little research takes into account the uniqueness of the \"human\"\ncategory, which can be well defined by the pose skeleton. Moreover, the human\npose skeleton can be used to better distinguish instances with heavy occlusion\nthan using bounding-boxes. In this paper, we present a brand new pose-based\ninstance segmentation framework for humans which separates instances based on\nhuman pose, rather than proposal region detection. We demonstrate that our\npose-based framework can achieve better accuracy than the state-of-art\ndetection-based approach on the human instance segmentation problem, and can\nmoreover better handle occlusion. Furthermore, there are few public datasets\ncontaining many heavily occluded humans along with comprehensive annotations,\nwhich makes this a challenging problem seldom noticed by researchers.\nTherefore, in this paper we introduce a new benchmark \"Occluded Human\n(OCHuman)\", which focuses on occluded humans with comprehensive annotations\nincluding bounding-box, human pose and instance masks. This dataset contains\n8110 detailed annotated human instances within 4731 images. With an average\n0.67 MaxIoU for each person, OCHuman is the most complex and challenging\ndataset related to human instance segmentation. Through this dataset, we want\nto emphasize occlusion as a challenging problem for researchers to study.", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions"], "task": ["Human Instance Segmentation", "Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Softmax", "RoIAlign", "Convolution"], "dataset": ["OCHuman"], "metric": ["AP"], "title": "Pose2Seg: Detection Free Human Instance Segmentation"} {"abstract": "The morbidity of brain stroke increased rapidly in the past few years. To help specialists in lesion measurements and treatment planning, automatic segmentation methods are critically required for clinical practices. Recently, approaches based on deep learning and methods for contextual information extraction have served in many image segmentation tasks. However, their performances are limited due to the insufficient training of a large number of parameters, which sometimes fail in capturing long-range dependencies. To address these issues, we propose a depthwise separable convolution based X-Net that designs a nonlocal operation namely Feature Similarity Module (FSM) to capture long-range dependencies. The adopted depthwise convolution allows to reduce the network size, while the developed FSM provides a more effective, dense contextual information extraction and thus facilitates better segmentation. 
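The parameter saving behind the depthwise separable convolutions used in the X-Net record above follows from simple arithmetic: a k x k standard convolution costs k*k*C_in*C_out weights, while a depthwise plus 1x1 pointwise pair costs k*k*C_in + C_in*C_out. A quick check:

def conv_params(k, c_in, c_out):
    standard = k * k * c_in * c_out               # dense k x k convolution
    separable = k * k * c_in + c_in * c_out       # depthwise + 1x1 pointwise
    return standard, separable

std, sep = conv_params(k=3, c_in=128, c_out=256)
print(std, sep, round(std / sep, 1))   # 294912 vs 33920, roughly 8.7x fewer weights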
The effectiveness of X-Net was evaluated on an open dataset Anatomical Tracings of Lesions After Stroke (ATLAS) with superior performance achieved compared to other six state-of-the-art approaches. We make our code and models available at https://github.com/Andrewsher/X-Net.", "field": ["Convolutions"], "task": ["Lesion Segmentation", "Semantic Segmentation"], "method": ["Depthwise Convolution", "Pointwise Convolution", "Convolution", "Depthwise Separable Convolution"], "dataset": ["Anatomical Tracings of Lesions After Stroke (ATLAS) "], "metric": ["Precision", "Recall", "IoU", "Dice"], "title": "X-Net: Brain Stroke Lesion Segmentation Based on Depthwise Separable Convolution and Long-range Dependencies"} {"abstract": "We introduce SPARTA, a novel neural retrieval method that shows great promise in performance, generalization, and interpretability for open-domain question answering. Unlike many neural ranking methods that use dense vector nearest neighbor search, SPARTA learns a sparse representation that can be efficiently implemented as an Inverted Index. The resulting representation enables scalable neural retrieval that does not require expensive approximate vector search and leads to better performance than its dense counterpart. We validated our approaches on 4 open-domain question answering (OpenQA) tasks and 11 retrieval question answering (ReQA) tasks. SPARTA achieves new state-of-the-art results across a variety of open-domain question answering tasks in both English and Chinese datasets, including open SQuAD, Natuarl Question, CMRC and etc. Analysis also confirms that the proposed method creates human interpretable representation and allows flexible control over the trade-off between performance and efficiency.", "field": ["Image Models"], "task": ["Open-Domain Question Answering", "Question Answering"], "method": ["Interpretability"], "dataset": ["SQuAD1.1 dev"], "metric": ["EM"], "title": "SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval"} {"abstract": "Modelling relations between multiple entities has attracted increasing attention recently, and a new dataset called DocRED has been collected in order to accelerate the research on the document-level relation extraction. Current baselines for this task uses BiLSTM to encode the whole document and are trained from scratch. We argue that such simple baselines are not strong enough to model to complex interaction between entities. In this paper, we further apply a pre-trained language model (BERT) to provide a stronger baseline for this task. We also find that solving this task in phases can further improve the performance. The first step is to predict whether or not two entities have a relation, the second step is to predict the specific relation.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Relation Extraction"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Fine-tune Bert for DocRED with Two-step Process"} {"abstract": "We present a simple and general framework for feature learning from point cloud. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). 
However, point cloud are irregular and unordered, thus a direct convolving of kernels against the features associated with the points will result in deserting the shape information while being variant to the orders. To address these problems, we propose to learn a X-transformation from the input points, which is used for simultaneously weighting the input features associated with the points and permuting them into latent potentially canonical order. Then element-wise product and sum operations of typical convolution operator are applied on the X-transformed features. The proposed method is a generalization of typical CNNs into learning features from point cloud, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.", "field": ["Convolutions"], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["S3DIS Area5", "S3DIS", "ShapeNet-Part", "ModelNet40", "ScanNet"], "metric": ["Overall Accuracy", "oAcc", "3DIoU", "Mean IoU", "mAcc", "Instance Average IoU", "mIoU"], "title": "PointCNN: Convolution On X-Transformed Points"} {"abstract": "We present Neural-Guided RANSAC (NG-RANSAC), an extension to the classic RANSAC algorithm from robust optimization. NG-RANSAC uses prior information to improve model hypothesis search, increasing the chance of finding outlier-free minimal sets. Previous works use heuristic side-information like hand-crafted descriptor distance to guide hypothesis search. In contrast, we learn hypothesis search in a principled fashion that lets us optimize an arbitrary task loss during training, leading to large improvements on classic computer vision tasks. We present two further extensions to NG-RANSAC. Firstly, using the inlier count itself as training signal allows us to train neural guidance in a self-supervised fashion. Secondly, we combine neural guidance with differentiable RANSAC to build neural networks which focus on certain parts of the input data and make the output predictions as good as possible. We evaluate NG-RANSAC on a wide array of computer vision tasks, namely estimation of epipolar geometry, horizon line estimation and camera re-localization. We achieve superior or competitive results compared to state-of-the-art robust estimators, including very recent, learned ones.", "field": ["Graph Embeddings"], "task": ["Camera Localization", "Horizon Line Estimation", "Visual Localization"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["Horizon Lines in the Wild"], "metric": ["AUC (horizon error)"], "title": "Neural-Guided RANSAC: Learning Where to Sample Model Hypotheses"} {"abstract": "One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. 
After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\\le$13 labeled images per class) using ResNet-50, a $10\\times$ improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Self-Supervised Image Classification", "Semi-Supervised Image Classification"], "method": ["ResNet", "Average Pooling", "Residual Block", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "ImageNet (finetuned)", "ImageNet - 1% labeled data", "ImageNet - 10% labeled data"], "metric": ["Top 5 Accuracy", "Number of Params", "Top 1 Accuracy"], "title": "Big Self-Supervised Models are Strong Semi-Supervised Learners"} {"abstract": "Machine reading comprehension helps machines learn to utilize most of the human knowledge written in the form of text. Existing approaches made a significant progress comparable to human-level performance, but they are still limited in understanding, up to a few paragraphs, failing to properly comprehend lengthy document. In this paper, we propose a novel deep neural network architecture to handle a long-range dependency in RC tasks. In detail, our method has two novel aspects: (1) an advanced memory-augmented architecture and (2) an expanded gated recurrent unit with dense connections that mitigate potential information distortion occurring in the memory. Our proposed architecture is widely applicable to other models. We have performed extensive experiments with well-known benchmark datasets such as TriviaQA, QUASAR-T, and SQuAD. The experimental results demonstrate that the proposed method outperforms existing methods, especially for lengthy documents.", "field": ["Feedforward Networks"], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": ["Dense Connections"], "dataset": ["TriviaQA"], "metric": ["EM", "F1"], "title": "MemoReader: Large-Scale Reading Comprehension through Neural Memory Controller"} {"abstract": "Most of the previous image-based 3D human pose and mesh estimation methods estimate parameters of the human mesh model from an input image. However, directly regressing the parameters from the input image is a highly non-linear mapping because it breaks the spatial relationship between pixels in the input image. In addition, it cannot model the prediction uncertainty, which can make training harder. To resolve the above issues, we propose I2L-MeshNet, an image-to-lixel (line+pixel) prediction network. 
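(Sketch for the SimCLRv2 entry above.) A toy version of the third step of the recipe, distillation with unlabeled examples: the fine-tuned teacher's predictive distribution becomes the target for a smaller student via cross-entropy. The temperature, logit shapes and random placeholder "models" are assumptions for illustration only.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=1.0):
    """Cross-entropy between teacher and student distributions on unlabeled images."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(16, 10))   # placeholder logits from the big fine-tuned network
student = rng.normal(size=(16, 10))   # placeholder logits from the smaller student
print(distillation_loss(teacher, student))
```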
The proposed I2L-MeshNet predicts the per-lixel likelihood on 1D heatmaps for each mesh vertex coordinate instead of directly regressing the parameters. Our lixel-based 1D heatmap preserves the spatial relationship in the input image and models the prediction uncertainty. We demonstrate the benefit of the image-to-lixel prediction and show that the proposed I2L-MeshNet outperforms previous methods. The code is publicly available https://github.com/mks0601/I2L-MeshNet_RELEASE.", "field": ["Output Functions"], "task": ["3D Hand Pose Estimation", "3D Human Pose Estimation", "3D Human Reconstruction"], "method": ["Heatmap"], "dataset": ["FreiHAND", "3DPW"], "metric": ["PA-MPJPE", "PA-MPVPE", "MPJPE", "MPVPE"], "title": "I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image"} {"abstract": "We motivate and present feature selective anchor-free (FSAF) module, a simple\nand effective building block for single-shot object detectors. It can be\nplugged into single-shot detectors with feature pyramid structure. The FSAF\nmodule addresses two limitations brought up by the conventional anchor-based\ndetection: 1) heuristic-guided feature selection; 2) overlap-based anchor\nsampling. The general concept of the FSAF module is online feature selection\napplied to the training of multi-level anchor-free branches. Specifically, an\nanchor-free branch is attached to each level of the feature pyramid, allowing\nbox encoding and decoding in the anchor-free manner at an arbitrary level.\nDuring training, we dynamically assign each instance to the most suitable\nfeature level. At the time of inference, the FSAF module can work jointly with\nanchor-based branches by outputting predictions in parallel. We instantiate\nthis concept with simple implementations of anchor-free branches and online\nfeature selection strategy. Experimental results on the COCO detection track\nshow that our FSAF module performs better than anchor-based counterparts while\nbeing faster. When working jointly with anchor-based branches, the FSAF module\nrobustly improves the baseline RetinaNet by a large margin under various\nsettings, while introducing nearly free inference overhead. 
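(Sketch for the I2L-MeshNet entry above.) One common way to read a continuous coordinate off a per-lixel 1D heatmap is a soft-argmax, i.e. the expected index under the softmax-normalized heatmap; the sketch below shows that generic operation, not the paper's exact prediction head.

```python
import numpy as np

def soft_argmax_1d(heatmap):
    """Expected coordinate under a softmax-normalized 1D heatmap of lixel likelihoods."""
    prob = np.exp(heatmap - heatmap.max())
    prob /= prob.sum()
    positions = np.arange(heatmap.shape[0])
    return float((prob * positions).sum())

# Toy heatmap over 64 lixels with a peak near position 40.
heatmap = -0.5 * ((np.arange(64) - 40.0) / 3.0) ** 2
print(soft_argmax_1d(heatmap))   # close to 40.0
```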
And the resulting\nbest model can achieve a state-of-the-art 44.6% mAP, outperforming all existing\nsingle-shot detectors on COCO.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Proposal Filtering", "Stochastic Optimization", "Feature Extractors", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Feature Selection", "Object Detection"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "FPN", "Grouped Convolution", "Focal Loss", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "Kaiming Initialization", "FSAF", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Feature Pyramid Network", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Feature Selective Anchor-Free Module for Single-Shot Object Detection"} {"abstract": "The Transformer architecture is superior to RNN-based models in computational efficiency. Recently, GPT and BERT demonstrate the efficacy of Transformer models on various NLP tasks using pre-trained language models on large-scale corpora. Surprisingly, these Transformer architectures are suboptimal for language model itself. Neither self-attention nor the positional encoding in the Transformer is able to efficiently incorporate the word-level sequential context crucial to language modeling. In this paper, we explore effective Transformer architectures for language model, including adding additional LSTM layers to better capture the sequential context while still keeping the computation efficient. We propose Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model. Experimental results on the PTB, WikiText-2, and WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all problems, i.e. on average an improvement of 12.0 perplexity units compared to state-of-the-art LSTMs. 
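(Sketch for the FSAF entry above.) The online feature selection amounts to assigning each ground-truth instance to the pyramid level whose anchor-free branch currently yields the lowest loss; the per-level losses below are placeholders (the paper combines classification and regression losses).

```python
import numpy as np

def select_feature_level(per_level_losses):
    """Assign the instance to the pyramid level with the smallest anchor-free loss."""
    return int(np.argmin(per_level_losses))

# Placeholder losses of one ground-truth instance on pyramid levels P3..P7.
losses = np.array([1.8, 0.9, 0.4, 0.7, 1.2])
level = select_feature_level(losses)
print(f"train this instance on level P{level + 3}")
```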
The source code is publicly available.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Recurrent Neural Networks", "Subword Segmentation", "Normalization", "Language Models", "Attention Mechanisms", "Feedforward Networks", "Transformers", "Fine-Tuning", "Skip Connections"], "task": ["Language Modelling", "Neural Architecture Search"], "method": ["Weight Decay", "Cosine Annealing", "Adam", "Long Short-Term Memory", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Discriminative Fine-Tuning", "GPT", "Label Smoothing", "GELU", "Sigmoid Activation", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2", "WikiText-103"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params"], "title": "Language Models with Transformers"} {"abstract": "Convolutional neural networks have been successfully applied to semantic\nsegmentation problems. However, there are many problems that are inherently not\npixel-wise classification problems but are nevertheless frequently formulated\nas semantic segmentation. This ill-posed formulation consequently necessitates\nhand-crafted scenario-specific and computationally expensive post-processing\nmethods to convert the per pixel probability maps to final desired outputs.\nGenerative adversarial networks (GANs) can be used to make the semantic\nsegmentation network output to be more realistic or better\nstructure-preserving, decreasing the dependency on potentially complex\npost-processing. In this work, we propose EL-GAN: a GAN framework to mitigate\nthe discussed problem using an embedding loss. With EL-GAN, we discriminate\nbased on learned embeddings of both the labels and the prediction at the same\ntime. This results in more stable training due to having better discriminative\ninformation, benefiting from seeing both `fake' and `real' predictions at the\nsame time. This substantially stabilizes the adversarial training process. We\nuse the TuSimple lane marking challenge to demonstrate that with our proposed\nframework it is viable to overcome the inherent anomalies of posing it as a\nsemantic segmentation problem. Not only is the output considerably more similar\nto the labels when compared to conventional methods, the subsequent\npost-processing is also simpler and crosses the competitive 96% accuracy\nthreshold.", "field": ["Generative Models", "Convolutions"], "task": ["Lane Detection", "Semantic Segmentation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["TuSimple"], "metric": ["F1 score", "Accuracy"], "title": "EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection"} {"abstract": "This paper proposes a novel object detection framework named Grid R-CNN,\nwhich adopts a grid guided localization mechanism for accurate object\ndetection. Different from the traditional regression based methods, the Grid\nR-CNN captures the spatial information explicitly and enjoys the position\nsensitive property of fully convolutional architecture. 
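(Sketch for the EL-GAN entry above.) A rough, hedged illustration of discriminating on learned embeddings: the discriminator embeds the (input, label) pair and the (input, prediction) pair, and the generator is penalized by the distance between the two embeddings. The linear `embed` stand-in and the L1 distance are assumptions; the actual EL-GAN discriminator is a deep network.

```python
import numpy as np

def embed(image, seg_map, W):
    """Stand-in discriminator embedding of an (image, segmentation) pair."""
    x = np.concatenate([image.ravel(), seg_map.ravel()])
    return np.tanh(W @ x)

def embedding_loss(image, label, prediction, W):
    """L1 distance between embeddings of the real and the predicted segmentation."""
    e_real = embed(image, label, W)
    e_fake = embed(image, prediction, W)
    return np.abs(e_real - e_fake).mean()

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
lbl = (rng.random(size=(8, 8)) > 0.5).astype(float)
pred = rng.random(size=(8, 8))
W = rng.normal(size=(16, 2 * 8 * 8)) * 0.05
print(embedding_loss(img, lbl, pred, W))
```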
Instead of using only\ntwo independent points, we design a multi-point supervision formulation to\nencode more clues in order to reduce the impact of inaccurate prediction of\nspecific points. To take the full advantage of the correlation of points in a\ngrid, we propose a two-stage information fusion strategy to fuse feature maps\nof neighbor grid points. The grid guided localization approach is easy to be\nextended to different state-of-the-art detection frameworks. Grid R-CNN leads\nto high quality object localization, and experiments demonstrate that it\nachieves a 4.1% AP gain at IoU=0.8 and a 10.0% AP gain at IoU=0.9 on COCO\nbenchmark compared to Faster R-CNN with Res50 backbone and FPN architecture.", "field": ["Image Data Augmentation", "Semantic Segmentation Models", "Regularization", "Proposal Filtering", "Stochastic Optimization", "Initialization", "Feature Extractors", "Activation Functions", "RoI Feature Extractors", "Convolutional Neural Networks", "Normalization", "Output Functions", "Convolutions", "Pooling Operations", "Region Proposal", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Object Detection", "Object Localization", "Regression"], "method": ["Weight Decay", "Dilated Convolution", "Average Pooling", "Faster R-CNN", "1x1 Convolution", "RoIAlign", "Region Proposal Network", "Grid R-CNN", "ResNet", "Random Horizontal Flip", "Convolution", "RoIPool", "ReLU", "Residual Connection", "FPN", "Fully Convolutional Network", "Synchronized Batch Normalization", "FCN", "RPN", "Grouped Convolution", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "SyncBN", "Kaiming Initialization", "Sigmoid Activation", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Grid R-CNN"} {"abstract": "Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. 
AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.", "field": ["Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Generalization", "Image Classification"], "method": ["ResNet", "AugMix", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet-C", "ImageNet-R"], "metric": ["mean Corruption Error (mCE)", "Top-1 Error Rate"], "title": "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty"} {"abstract": "Generative networks have made it possible to generate meaningful signals such\nas images and texts from simple noise. Recently, generative methods based on\nGAN and VAE were developed for graphs and graph signals. However, the\nmathematical properties of these methods are unclear, and training good\ngenerative models is difficult. This work proposes a graph generation model\nthat uses a recent adaptation of Mallat's scattering transform to graphs. The\nproposed model is naturally composed of an encoder and a decoder. The encoder\nis a Gaussianized graph scattering transform, which is robust to signal and\ngraph manipulation. The decoder is a simple fully connected network that is\nadapted to specific tasks, such as link prediction, signal generation on graphs\nand full graph and signal generation. The training of our proposed system is\nefficient since it is only applied to the decoder and the hardware requirements\nare moderate. Numerical results demonstrate state-of-the-art performance of the\nproposed system for both link prediction and graph and signal generation.", "field": ["Generative Models"], "task": ["Graph Generation", "Link Prediction"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["Pubmed (biased evaluation)", "Cora (biased evaluation)", "Citeseer (biased evaluation)"], "metric": ["AP", "AUC"], "title": "Encoding Robust Representation for Graph Generation"} {"abstract": "Consistency regularization describes a class of approaches that have yielded ground breaking results in semi-supervised classification problems. Prior work has established the cluster assumption - under which the data distribution consists of uniform class clusters of samples separated by low density regions - as important to its success. We analyze the problem of semantic segmentation and find that its' distribution does not exhibit low density regions separating classes and offer this as an explanation for why semi-supervised segmentation is a challenging problem, with only a few reports of success. We then identify choice of augmentation as key to obtaining reliable performance without such low-density regions. We find that adapted variants of the recently proposed CutOut and CutMix augmentation techniques yield state-of-the-art semi-supervised semantic segmentation results in standard datasets. Furthermore, given its challenging nature we propose that semantic segmentation acts as an effective acid test for evaluating semi-supervised regularizers. 
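(Sketch for the AugMix entry above.) The abstract does not spell out the mixing procedure, so the sketch below follows the commonly described recipe, taken here as an assumption: several randomly composed augmentation chains are blended with Dirichlet weights and then mixed back with the original image; the augmentation operations are trivial stand-ins.

```python
import numpy as np

def augmix(image, operations, rng, width=3, depth=2, alpha=1.0):
    """Blend randomly composed augmentation chains with the original image."""
    chain_weights = rng.dirichlet([alpha] * width)
    m = rng.beta(alpha, alpha)
    mixed = np.zeros_like(image)
    for w in chain_weights:
        augmented = image.copy()
        for _ in range(depth):
            op = operations[rng.integers(len(operations))]
            augmented = op(augmented)
        mixed += w * augmented
    return m * image + (1.0 - m) * mixed

rng = np.random.default_rng(0)
ops = [lambda x: np.roll(x, 1, axis=0),      # stand-in "translate"
       lambda x: np.clip(x * 1.2, 0, 1),     # stand-in "contrast"
       lambda x: np.flip(x, axis=1)]         # stand-in "mirror"
image = rng.random((8, 8))
print(augmix(image, ops, rng).shape)
```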
Implementation at: https://github.com/Britefury/cutmix-semisup-seg.", "field": ["Image Data Augmentation"], "task": ["Semantic Segmentation", "Semi-Supervised Semantic Segmentation"], "method": ["CutMix", "Cutout"], "dataset": ["Pascal VOC 2012 1% labeled", "Pascal VOC 2012 12.5% labeled", "Cityscapes 12.5% labeled", "Pascal VOC 2012 5% labeled", "Pascal VOC 2012 2% labeled", "Cityscapes 100 samples labeled", "Cityscapes 25% labeled"], "metric": ["Validation mIoU"], "title": "Semi-supervised semantic segmentation needs strong, varied perturbations"} {"abstract": "This paper studies learning node representations with GNNs for unsupervised scenarios. We make a theoretical understanding and empirical demonstration about the non-steady performance of GNNs over different graph datasets, when the supervision signals are not appropriately defined. The performance of GNNs depends on both the node feature smoothness and the graph locality. To smooth the discrepancy of node proximity measured by graph topology and node feature, we proposed KS2L - a novel graph \\underline{K}nowledge distillation regularized \\underline{S}elf-\\underline{S}upervised \\underline{L}earning framework, with two complementary regularization modules, for intra-and cross-model graph knowledge distillation. We demonstrate the competitive performance of KS2L on a variety of benchmarks. Even with a single GCN layer, KS2L has consistently competitive or even better performance on various benchmark datasets.", "field": ["Graph Models"], "task": ["Knowledge Distillation", "Node Classification"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Self-supervised Smoothing Graph Neural Networks"} {"abstract": "We introduce a class of convolutional neural networks (CNNs) that utilize\nrecurrent neural networks (RNNs) as convolution filters. A convolution filter\nis typically implemented as a linear affine transformation followed by a\nnon-linear function, which fails to account for language compositionality. As a\nresult, it limits the use of high-order filters that are often warranted for\nnatural language processing tasks. In this work, we model convolution filters\nwith RNNs that naturally capture compositionality and long-term dependencies in\nlanguage. We show that simple CNN architectures equipped with recurrent neural\nfilters (RNFs) achieve results that are on par with the best published ones on\nthe Stanford Sentiment Treebank and two answer sentence selection datasets.", "field": ["Convolutions"], "task": ["Sentiment Analysis"], "method": ["Convolution"], "dataset": ["SST-2 Binary classification", "SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "Convolutional Neural Networks with Recurrent Neural Filters"} {"abstract": "Modern convolutional networks are not shift-invariant, as small input shifts or translations can cause drastic changes in the output. Commonly used downsampling methods, such as max-pooling, strided-convolution, and average-pooling, ignore the sampling theorem. The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling. However, simply inserting this module into deep networks degrades performance; as a result, it is seldomly used today. We show that when integrated correctly, it is compatible with existing architectural components, such as max-pooling and strided-convolution. 
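(Sketch for the semi-supervised segmentation entry above.) One way to realize the adapted CutMix perturbation on unlabeled data is to paste a rectangle from one image into another and mix the teacher's soft predictions with the same mask, giving a consistency target for the student; the box sampling and shapes below are simplified assumptions.

```python
import numpy as np

def cutmix_pair(img_a, img_b, pred_a, pred_b, rng):
    """Mix two unlabeled images (and their teacher predictions) with one rectangular mask."""
    h, w = img_a.shape[:2]
    ch, cw = h // 2, w // 2                      # fixed-size box for simplicity
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    mask = np.zeros((h, w), dtype=float)
    mask[top:top + ch, left:left + cw] = 1.0
    mixed_img = mask[..., None] * img_b + (1 - mask[..., None]) * img_a
    mixed_pred = mask[..., None] * pred_b + (1 - mask[..., None]) * pred_a
    return mixed_img, mixed_pred                 # consistency target for the student

rng = np.random.default_rng(0)
a, b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
pa, pb = rng.random((32, 32, 5)), rng.random((32, 32, 5))   # 5-class soft predictions
img, target = cutmix_pair(a, b, pa, pb, rng)
print(img.shape, target.shape)
```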
We observe \\textit{increased accuracy} in ImageNet classification, across several commonly-used architectures, such as ResNet, DenseNet, and MobileNet, indicating effective regularization. Furthermore, we observe \\textit{better generalization}, in terms of stability and robustness to input corruptions. Our results demonstrate that this classical signal processing technique has been undeservingly overlooked in modern deep networks. Code and anti-aliased versions of popular networks are available at https://richzhang.github.io/antialiased-cnns/ .", "field": ["Downsampling", "Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Classification Consistency", "Conditional Image Generation", "Image Classification", "Image Generation"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Dense Block", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Concatenated Skip Connection", "Anti-Alias Downsampling", "Bottleneck Residual Block", "Dropout", "DenseNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Consistency"], "title": "Making Convolutional Networks Shift-Invariant Again"} {"abstract": "State-of-the-art results on neural machine translation often use attentional\nsequence-to-sequence models with some form of convolution or recursion. Vaswani\net al. (2017) propose a new architecture that avoids recurrence and convolution\ncompletely. Instead, it uses only self-attention and feed-forward layers. While\nthe proposed architecture achieves state-of-the-art results on several machine\ntranslation tasks, it requires a large number of parameters and training\niterations to converge. We propose Weighted Transformer, a Transformer with\nmodified attention layers, that not only outperforms the baseline network in\nBLEU score but also converges 15-40% faster. Specifically, we replace the\nmulti-head attention by multiple self-attention branches that the model learns\nto combine during the training process. Our model improves the state-of-the-art\nperformance by 0.5 BLEU points on the WMT 2014 English-to-German translation\ntask and by 0.4 on the English-to-French translation task.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Convolution", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score"], "title": "Weighted Transformer Network for Machine Translation"} {"abstract": "Automatic program correction is an active topic of research, which holds the potential of dramatically improving productivity of programmers during the software development process and correctness of software in general. 
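(Sketch for the anti-aliasing entry above.) The classical fix named in the abstract, low-pass filtering before subsampling, in a minimal numpy form: blur with a small binomial kernel, then take every second element. The kernel size and padding are illustrative choices.

```python
import numpy as np

def blur_pool(x, stride=2):
    """Anti-aliased downsampling: blur with a small binomial kernel, then subsample."""
    k1d = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1d, k1d)
    kernel /= kernel.sum()
    h, w = x.shape
    padded = np.pad(x, 1, mode="reflect")
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return blurred[::stride, ::stride]

x = np.arange(64, dtype=float).reshape(8, 8)
print(blur_pool(x).shape)   # (4, 4)
```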
Recent advances in machine learning, deep learning and NLP have rekindled the hope to eventually fully automate the process of repairing programs. A key challenge is ambiguity, as multiple codes -- or fixes -- can implement the same functionality. In addition, datasets by nature fail to capture the variance introduced by such ambiguities. Therefore, we propose a deep generative model to automatically correct programming errors by learning a distribution of potential fixes. Our model is formulated as a deep conditional variational autoencoder that samples diverse fixes for the given erroneous programs. In order to account for ambiguity and inherent lack of representative datasets, we propose a novel regularizer to encourage the model to generate diverse fixes. Our evaluations on common programming errors show for the first time the generation of diverse fixes and strong improvements over the state-of-the-art approaches by fixing up to 65% of the mistakes.", "field": ["Generative Models"], "task": ["Program Repair"], "method": ["AutoEncoder"], "dataset": ["DeepFix"], "metric": ["Average Success Rate"], "title": "SampleFix: Learning to Correct Programs by Sampling Diverse Fixes"} {"abstract": "Linking two data sources is a basic building block in numerous computer\nvision problems. Canonical Correlation Analysis (CCA) achieves this by\nutilizing a linear optimizer in order to maximize the correlation between the\ntwo views. Recent work makes use of non-linear models, including deep learning\ntechniques, that optimize the CCA loss in some feature space. In this paper, we\nintroduce a novel, bi-directional neural network architecture for the task of\nmatching vectors from two data sources. Our approach employs two tied neural\nnetwork channels that project the two views into a common, maximally correlated\nspace using the Euclidean loss. We show a direct link between the\ncorrelation-based loss and Euclidean loss, enabling the use of Euclidean loss\nfor correlation maximization. To overcome common Euclidean regression\noptimization problems, we modify well-known techniques to our problem,\nincluding batch normalization and dropout. We show state of the art results on\na number of computer vision matching tasks including MNIST image matching and\nsentence-image matching on the Flickr8k, Flickr30k and COCO datasets.", "field": ["Normalization"], "task": ["Regression"], "method": ["Batch Normalization"], "dataset": ["Flickr30K 1K test"], "metric": ["R@1"], "title": "Linking Image and Text with 2-Way Nets"} {"abstract": "Key for solving fine-grained image categorization is finding discriminate and local regions that correspond to subtle visual traits. Great strides have been made, with complex networks designed specifically to learn part-level discriminate feature representations. In this paper, we show it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms -- a single loss is all it takes. The main trick lies with how we delve into individual feature channels early on, as opposed to the convention of starting from a consolidated feature map. The proposed loss function, termed as mutual-channel loss (MC-Loss), consists of two channel-specific components: a discriminality component and a diversity component. The discriminality component forces all feature channels belonging to the same class to be discriminative, through a novel channel-wise attention mechanism. 
The diversity component additionally constraints channels so that they become mutually exclusive on spatial-wise. The end result is therefore a set of feature channels that each reflects different locally discriminative regions for a specific class. The MC-Loss can be trained end-to-end, without the need for any bounding-box/part annotations, and yields highly discriminative regions during inference. Experimental results show our MC-Loss when implemented on top of common base networks can achieve state-of-the-art performance on all four fine-grained categorization datasets (CUB-Birds, FGVC-Aircraft, Flowers-102, and Stanford-Cars). Ablative studies further demonstrate the superiority of MC-Loss when compared with other recently proposed general-purpose losses for visual classification, on two different base networks. Code available at https://github.com/dongliangchang/Mutual-Channel-Loss", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Fine-Grained Image Classification", "Image Categorization", "Image Classification"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification"} {"abstract": "We introduce adaptive input representations for neural language modeling\nwhich extend the adaptive softmax of Grave et al. (2017) to input\nrepresentations of variable capacity. There are several choices on how to\nfactorize the input and output layers, and whether to model words, characters\nor sub-word units. We perform a systematic comparison of popular choices for a\nself-attentional architecture. Our experiments show that models equipped with\nadaptive embeddings are more than twice as fast to train than the popular\ncharacter input CNN while having a lower number of parameters. On the\nWikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5\nperplexity compared to the previously best published result and on the Billion\nWord benchmark, we achieve 23.02 perplexity.", "field": ["Input Embedding Factorization", "Output Functions"], "task": ["Language Modelling"], "method": ["Adaptive Softmax", "Adaptive Input Representations", "Softmax"], "dataset": ["WikiText-103", "One Billion Word"], "metric": ["Number of params", "PPL", "Validation perplexity", "Test perplexity"], "title": "Adaptive Input Representations for Neural Language Modeling"} {"abstract": "We present DeepWalk, a novel approach for learning latent representations of\nvertices in a network. These latent representations encode social relations in\na continuous vector space, which is easily exploited by statistical models.\nDeepWalk generalizes recent advancements in language modeling and unsupervised\nfeature learning (or deep learning) from sequences of words to graphs. DeepWalk\nuses local information obtained from truncated random walks to learn latent\nrepresentations by treating walks as the equivalent of sentences. 
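(Sketch for the adaptive input representations entry above.) A toy version of variable-capacity input embeddings: the frequency-ordered vocabulary is split into bands, frequent bands get wider embeddings, and each band is projected to the shared model dimension. Band boundaries and dimensions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16
# Frequency-ordered vocabulary split into bands with shrinking embedding sizes.
bands = [(0, 1000, 16), (1000, 5000, 8), (5000, 20000, 4)]   # (start, end, dim)
tables = [rng.normal(size=(end - start, dim)) * 0.1 for start, end, dim in bands]
projections = [rng.normal(size=(dim, d_model)) * 0.1 for _, _, dim in bands]

def adaptive_embed(token_id):
    """Look up a token in its frequency band and project to the model dimension."""
    for (start, end, _), table, proj in zip(bands, tables, projections):
        if start <= token_id < end:
            return table[token_id - start] @ proj
    raise ValueError("token id outside vocabulary")

print(adaptive_embed(3).shape, adaptive_embed(12000).shape)   # both (16,)
```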
We\ndemonstrate DeepWalk's latent representations on several multi-label network\nclassification tasks for social networks such as BlogCatalog, Flickr, and\nYouTube. Our results show that DeepWalk outperforms challenging baselines which\nare allowed a global view of the network, especially in the presence of missing\ninformation. DeepWalk's representations can provide $F_1$ scores up to 10%\nhigher than competing methods when labeled data is sparse. In some experiments,\nDeepWalk's representations are able to outperform all baseline methods while\nusing 60% less training data. DeepWalk is also scalable. It is an online\nlearning algorithm which builds useful incremental results, and is trivially\nparallelizable. These qualities make it suitable for a broad class of real\nworld applications such as network classification, and anomaly detection.", "field": ["Graph Embeddings"], "task": ["Anomaly Detection", "Language Modelling", "Node Classification"], "method": ["DeepWalk"], "dataset": ["BlogCatalog", "Cora", "Wikipedia"], "metric": ["Macro-F1", "Accuracy"], "title": "DeepWalk: Online Learning of Social Representations"} {"abstract": "Very deep convolutional networks have been central to the largest advances in\nimage recognition performance in recent years. One example is the Inception\narchitecture that has been shown to achieve very good performance at relatively\nlow computational cost. Recently, the introduction of residual connections in\nconjunction with a more traditional architecture has yielded state-of-the-art\nperformance in the 2015 ILSVRC challenge; its performance was similar to the\nlatest generation Inception-v3 network. This raises the question of whether\nthere are any benefit in combining the Inception architecture with residual\nconnections. Here we give clear empirical evidence that training with residual\nconnections accelerates the training of Inception networks significantly. There\nis also some evidence of residual Inception networks outperforming similarly\nexpensive Inception networks without residual connections by a thin margin. We\nalso present several new streamlined architectures for both residual and\nnon-residual Inception networks. These variations improve the single-frame\nrecognition performance on the ILSVRC 2012 classification task significantly.\nWe further demonstrate how proper activation scaling stabilizes the training of\nvery wide residual Inception networks. 
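(Sketch for the DeepWalk entry above.) The first half of the method in a few lines: truncated random walks over an adjacency list, each walk treated as a "sentence" that would then be fed to a skip-gram model such as word2vec. The toy graph and walk parameters are assumptions.

```python
import random

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}  # toy adjacency list

def random_walks(graph, num_walks=4, walk_length=6, seed=0):
    """Generate truncated random walks; each walk plays the role of a sentence."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        nodes = list(graph)
        rng.shuffle(nodes)
        for start in nodes:
            walk, current = [start], start
            while len(walk) < walk_length and graph[current]:
                current = rng.choice(graph[current])
                walk.append(current)
            walks.append(walk)
    return walks

walks = random_walks(graph)
print(walks[0])   # one walk, to be consumed by a skip-gram model as if it were a sentence
```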
With an ensemble of three residual and\none Inception-v4, we achieve 3.08 percent top-5 error on the test set of the\nImageNet classification (CLS) challenge", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connection Blocks", "Skip Connections", "Image Model Blocks", "Miscellaneous Components"], "task": ["Image Classification"], "method": ["Inception-ResNet-v2-B", "Average Pooling", "RMSProp", "Global Average Pooling", "1x1 Convolution", "Inception-ResNet-v2-A", "Inception-v3", "Inception-A", "Inception-v4", "Inception-ResNet-v2 Reduction-B", "ResNet", "Inception-B", "Inception-ResNet-v2", "Convolution", "ReLU", "Inception-C", "Residual Connection", "Inception-ResNet-v2-C", "Dense Connections", "Reduction-A", "Inception-v3 Module", "Batch Normalization", "Residual Network", "Label Smoothing", "Kaiming Initialization", "Exponential Decay", "Softmax", "Auxiliary Classifier", "Bottleneck Residual Block", "Dropout", "Residual Block", "Reduction-B", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning"} {"abstract": "Pretrained contextualized embeddings are powerful word representations for structured prediction tasks. Recent work found that better word representations can be obtained by concatenating different types of embeddings. However, the selection of embeddings to form the best concatenated representation usually varies depending on the task and the collection of candidate embeddings, and the ever-increasing number of embedding types makes it a more difficult problem. In this paper, we propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks, based on a formulation inspired by recent progress on neural architecture search. Specifically, a controller alternately samples a concatenation of embeddings, according to its current belief of the effectiveness of individual embedding types in consideration for a task, and updates the belief based on a reward. We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model, which is fed with the sampled concatenation as input and trained on a task dataset. 
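(Sketch for the Inception-v4 / Inception-ResNet entry above.) The activation-scaling trick mentioned in the abstract, in minimal form: the residual branch is multiplied by a small constant before being added to the trunk; the 0.1 factor and the linear stand-in branch are illustrative assumptions.

```python
import numpy as np

def scaled_residual_block(x, branch_fn, scale=0.1):
    """Add the residual branch scaled by a small constant, then apply a ReLU."""
    return np.maximum(x + scale * branch_fn(x), 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32)) * 0.05
branch = lambda h: h @ W                       # stand-in for an Inception branch
h = rng.normal(size=(4, 32))
print(scaled_residual_block(h, branch).shape)  # (4, 32)
```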
Empirical results on 6 tasks and 21 datasets show that our approach outperforms strong baselines and achieves state-of-the-art performance with fine-tuned embeddings in the vast majority of evaluations.", "field": ["Structured Prediction", "Policy Gradient Methods", "Neural Architecture Search", "Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Recurrent Neural Networks", "Subword Segmentation", "Normalization", "Tokenizers", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect Extraction", "Chunking", "Dependency Parsing", "Named Entity Recognition", "Neural Architecture Search", "Part-Of-Speech Tagging", "Semantic Dependency Parsing", "Structured Prediction"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "Tanh Activation", "Scaled Dot-Product Attention", "SentencePiece", "Proximal Policy Optimization", "Gaussian Linear Error Units", "XLNet", "Entropy Regularization", "CRF", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "PPO", "Neural Architecture Search", "Sigmoid Activation", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Conditional Random Field", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["Penn Treebank", "CoNLL 2002 (Spanish)", "CoNLL 2002 (Dutch)", "SemEval-2016 Task 5 Subtask 1", " SemEval 2015 Task 12", "CoNLL 2003 (German) Revised", "CoNLL 2000", "CoNLL 2003 (English)", "PAS", "SemEval-2016 Task 5 Subtask 1 (Spanish)", "SemEval-2016 Task 5 Subtask 1 (Dutch)", "Ritter", "SemEval-2016 Task 5 Subtask 1 (Russian)", "CoNLL 2003 (German)", "DM", "Tweebank", "PSD", "SemEval-2016 Task 5 Subtask 1 (Turkish)", "ARK", "SemEval 2014 Task 4 Sub Task 2"], "metric": ["Acc", "Out-of-domain", "F1 score", "Laptop (F1)", "Exact Span F1", "UAS", "F1", "LAS", "Restaurant (F1)", "In-domain"], "title": "Automated Concatenation of Embeddings for Structured Prediction"} {"abstract": "Recent progress on saliency detection is substantial, benefiting mostly from\nthe explosive development of Convolutional Neural Networks (CNNs). Semantic\nsegmentation and saliency detection algorithms developed lately have been\nmostly based on Fully Convolutional Neural Networks (FCNs). There is still a\nlarge room for improvement over the generic FCN models that do not explicitly\ndeal with the scale-space problem. Holistically-Nested Edge Detector (HED)\nprovides a skip-layer structure with deep supervision for edge and boundary\ndetection, but the performance gain of HED on salience detection is not\nobvious. In this paper, we propose a new method for saliency detection by\nintroducing short connections to the skip-layer structures within the HED\narchitecture. Our framework provides rich multi-scale feature maps at each\nlayer, a property that is critically needed to perform segment detection. 
Our\nmethod produces state-of-the-art results on 5 widely tested salient object\ndetection benchmarks, with advantages in terms of efficiency (0.15 seconds per\nimage), effectiveness, and simplicity over the existing algorithms.", "field": ["Convolutions", "Pooling Operations", "Semantic Segmentation Models"], "task": ["Boundary Detection", "Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection", "Semantic Segmentation"], "method": ["Fully Convolutional Network", "FCN", "Max Pooling", "Convolution"], "dataset": ["UCF", "SBU", "ISTD", "DUTS-TE"], "metric": ["MAE", "F-measure", "Balanced Error Rate"], "title": "Deeply supervised salient object detection with short connections"} {"abstract": "The rise of machine learning (ML) has created an explosion in the potential strategies for using data to make scientific predictions. For physical scientists wishing to apply ML strategies to a particular domain, it can be difficult to assess in advance what strategy to adopt within a vast space of possibilities. Here we outline the results of an online community-powered effort to swarm search the space of ML strategies and develop algorithms for predicting atomic-pairwise nuclear magnetic resonance (NMR) properties in molecules. Using an open-source dataset, we worked with Kaggle to design and host a 3-month competition which received 47,800 ML model predictions from 2,700 teams in 84 countries. Within 3 weeks, the Kaggle community produced models with comparable accuracy to our best previously published \"in-house\" efforts. A meta-ensemble model constructed as a linear combination of the top predictions has a prediction accuracy which exceeds that of any individual model, 7-19x better than our previous state-of-the-art. The results highlight the potential of transformer architectures for predicting quantum mechanical (QM) molecular properties.", "field": ["Attention Modules", "Output Functions", "Stochastic Optimization", "Regularization", "Graph Models", "Subword Segmentation", "Normalization", "Feedforward Networks", "Graph Embeddings", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Molecular Property Prediction", "NMR J-coupling"], "method": ["Laplacian PE", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Graph Transformer", "Residual Connection", "Scaled Dot-Product Attention", "Label Smoothing", "Laplacian EigenMap", "Dropout", "LapEigen", "Dense Connections", "Laplacian Positional Encodings"], "dataset": ["QM9"], "metric": ["avg. log MAE"], "title": "A community-powered search of machine learning strategy space to find NMR property prediction models"} {"abstract": "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained\nopinion polarity towards a specific aspect, is a challenging subtask of\nsentiment analysis (SA). In this paper, we construct an auxiliary sentence from\nthe aspect and convert ABSA to a sentence-pair classification task, such as\nquestion answering (QA) and natural language inference (NLI). 
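(Sketch for the BERT-for-ABSA entry above.) Constructing an auxiliary sentence turns ABSA into sentence-pair classification; the template below is illustrative wording, not necessarily one of the paper's exact templates.

```python
def build_sentence_pair(review, aspect):
    """Turn ABSA into sentence-pair classification: (auxiliary question, review)."""
    auxiliary = f"what do you think of the {aspect} ?"
    return auxiliary, review

pair = build_sentence_pair("The staff were friendly but the food was cold.", "food")
print(pair)
# A BERT-style model would then consume "[CLS] {auxiliary} [SEP] {review} [SEP]"
# and predict the sentiment polarity for that aspect.
```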
We fine-tune the\npre-trained model from BERT and achieve new state-of-the-art results on\nSentiHood and SemEval-2014 Task 4 datasets.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect-Based Sentiment Analysis", "Natural Language Inference", "Question Answering", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2014 Task 4 Subtask 4", "Sentihood"], "metric": ["Accuracy (3-way)", "Aspect", "Binary Accuracy", "Sentiment", "Accuracy (4-way)"], "title": "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence"} {"abstract": "Mixture of Softmaxes (MoS) has been shown to be effective at addressing the expressiveness limitation of Softmax-based models. Despite the known advantage, MoS is practically sealed by its large consumption of memory and computational time due to the need of computing multiple Softmaxes. In this work, we set out to unleash the power of MoS in practical applications by investigating improved word coding schemes, which could effectively reduce the vocabulary size and hence relieve the memory and computation burden. We show both BPE and our proposed Hybrid-LightRNN lead to improved encoding mechanisms that can halve the time and memory consumption of MoS without performance losses. With MoS, we achieve an improvement of 1.5 BLEU scores on IWSLT 2014 German-to-English corpus and an improvement of 0.76 CIDEr score on image captioning. Moreover, on the larger WMT 2014 machine translation dataset, our MoS-boosted Transformer yields 29.5 BLEU score for English-to-German and 42.1 BLEU score for English-to-French, outperforming the single-Softmax Transformer by 0.8 and 0.4 BLEU scores respectively and achieving the state-of-the-art result on WMT 2014 English-to-German task.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Image Captioning", "Machine Translation", "Text Generation"], "method": ["Mixture of Softmaxes", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score"], "title": "Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation"} {"abstract": "The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work\nthat is capable of generating realistic textures during single image\nsuper-resolution. However, the hallucinated details are often accompanied with\nunpleasant artifacts. 
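(Sketch for the Mixture of Softmaxes entry above.) The MoS output layer that the paper builds on, in compact numpy form: K component softmaxes over the vocabulary are combined with context-dependent mixture weights. All dimensions and weight matrices are random placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_prior, W_comp, E):
    """p(w|h) = sum_k pi_k(h) * softmax(tanh(W_comp[k] h) E^T)."""
    priors = softmax(h @ W_prior)                        # (K,) mixture weights
    probs = 0.0
    for k, pi_k in enumerate(priors):
        context_k = np.tanh(W_comp[k] @ h)               # component-specific context
        probs = probs + pi_k * softmax(context_k @ E.T)  # (V,)
    return probs

rng = np.random.default_rng(0)
d, V, K = 8, 50, 3
h = rng.normal(size=d)
W_prior = rng.normal(size=(d, K)) * 0.1
W_comp = rng.normal(size=(K, d, d)) * 0.1
E = rng.normal(size=(V, d)) * 0.1
p = mixture_of_softmaxes(h, W_prior, W_comp, E)
print(p.shape, p.sum())   # (50,) and ~1.0
```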
To further enhance the visual quality, we thoroughly\nstudy three key components of SRGAN - network architecture, adversarial loss\nand perceptual loss, and improve each of them to derive an Enhanced SRGAN\n(ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block\n(RRDB) without batch normalization as the basic network building unit.\nMoreover, we borrow the idea from relativistic GAN to let the discriminator\npredict relative realness instead of the absolute value. Finally, we improve\nthe perceptual loss by using the features before activation, which could\nprovide stronger supervision for brightness consistency and texture recovery.\nBenefiting from these improvements, the proposed ESRGAN achieves consistently\nbetter visual quality with more realistic and natural textures than SRGAN and\nwon the first place in the PIRM2018-SR Challenge. The code is available at\nhttps://github.com/xinntao/ESRGAN .", "field": ["Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Miscellaneous Components", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Generative Models", "Skip Connections", "Generative Adversarial Networks", "Skip Connection Blocks"], "task": ["Face Hallucination", "Image Super-Resolution", "Super-Resolution"], "method": ["Generative Adversarial Network", "PixelShuffle", "VGG", "Convolution", "PReLU", "ReLU", "Residual Connection", "Leaky ReLU", "Dense Connections", "GAN", "Batch Normalization", "SRGAN Residual Block", "Parameterized ReLU", "SRGAN", "Sigmoid Activation", "Relativistic GAN", "Softmax", "VGG Loss", "Dropout", "Residual Block", "Rectified Linear Units", "Max Pooling"], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "Set14 - 4x upscaling", "Manga109 - 4x upscaling", "FFHQ 512 x 512 - 16x upscaling", "BSD100 - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling", "Set5 - 4x upscaling", "FFHQ 512 x 512 - 4x upscaling", "PIRM-test", "Urban100 - 4x upscaling"], "metric": ["LLE", "PSNR", "FID", "FED", "MS-SSIM", "LPIPS", "NIQE", "SSIM"], "title": "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks"} {"abstract": "Current methods for skeleton-based human action recognition usually work with completely observed skeletons. However, in real scenarios, it is prone to capture incomplete and noisy skeletons, which will deteriorate the performance of traditional models. To enhance the robustness of action recognition models to incomplete skeletons, we propose a multi-stream graph convolutional network (GCN) for exploring sufficient discriminative features distributed over all skeleton joints. Here, each stream of the network is only responsible for learning features from currently unactivated joints, which are distinguished by the class activation maps (CAM) obtained by preceding streams, so that the activated joints of the proposed method are obviously more than traditional methods. Thus, the proposed method is termed richly activated GCN (RA-GCN), where the richly discovered features will improve the robustness of the model. Compared to the state-of-the-art methods, the RA-GCN achieves comparable performance on the NTU RGB+D dataset. 
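(Sketch for the ESRGAN entry above.) The relativistic idea borrowed from relativistic GANs, shown as a simplified sketch: the discriminator is trained to score real images as relatively more realistic than the average generated image, and vice versa; `real` and `fake` below are placeholder critic scores, not outputs of the actual network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_average_loss(real_scores, fake_scores, eps=1e-12):
    """Discriminator loss: real should score above the average fake, fake below the average real."""
    d_real = sigmoid(real_scores - fake_scores.mean())
    d_fake = sigmoid(fake_scores - real_scores.mean())
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

rng = np.random.default_rng(0)
real = rng.normal(loc=1.0, size=8)    # placeholder critic scores for real HR images
fake = rng.normal(loc=-1.0, size=8)   # placeholder critic scores for generated images
print(relativistic_average_loss(real, fake))
```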
Moreover, on a synthetic occlusion dataset, the performance deterioration can be alleviated by the RA-GCN significantly.", "field": ["Graph Models"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Richly Activated Graph Convolutional Network for Action Recognition with Incomplete Skeletons"} {"abstract": "Analyzing scenes thoroughly is crucial for mobile robots acting in different environments. Semantic segmentation can enhance various subsequent tasks, such as (semantically assisted) person perception, (semantic) free space detection, (semantic) mapping, and (semantic) navigation. In this paper, we propose an efficient and robust RGB-D segmentation approach that can be optimized to a high degree using NVIDIA TensorRT and, thus, is well suited as a common initial processing step in a complex system for scene analysis on mobile robots. We show that RGB-D segmentation is superior to processing RGB images solely and that it can still be performed in real time if the network architecture is carefully designed. We evaluate our proposed Efficient Scene Analysis Network (ESANet) on the common indoor datasets NYUv2 and SUNRGB-D and show that it reaches state-of-the-art performance when considering both segmentation performance and runtime. Furthermore, our evaluation on the outdoor dataset Cityscapes shows that our approach is suitable for other areas of application as well. Finally, instead of presenting benchmark results only, we show qualitative results in one of our indoor application scenarios.", "field": ["Initialization", "Semantic Segmentation Modules", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Pyramid Pooling Module"], "dataset": ["SUN-RGBD", "NYU Depth v2", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)"], "title": "Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis"} {"abstract": "We address the problem of estimating a high quality dense depth map from a single RGB input image. We start out with a baseline encoder-decoder convolutional neural network architecture and pose the question of how the global processing of information can help improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center value is estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state-of-the-art on several popular depth datasets across all metrics. 
We also validate the effectiveness of the proposed block with an ablation study and provide the code and corresponding pre-trained weights of the new state-of-the-art model.", "field": ["Attention Modules", "Output Functions", "Image Models", "Attention Mechanisms"], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": ["EfficientNet", "Softmax", "Multi-Head Attention", "Vision Transformer", "Scaled Dot-Product Attention"], "dataset": ["NYU-Depth V2", "KITTI Eigen split"], "metric": ["RMSE", "absolute relative error"], "title": "AdaBins: Depth Estimation using Adaptive Bins"} {"abstract": "Prevailing deep convolutional neural networks (CNNs) for person re-IDentification (reID) are usually built upon ResNet or VGG backbones, which were originally designed for classification. Because reID is different from classification, the architecture should be modified accordingly. We propose to automatically search for a CNN architecture that is specifically suitable for the reID task. There are three aspects to be tackled. First, body structural information plays an important role in reID but it is not encoded in backbones. Second, Neural Architecture Search (NAS) automates the process of architecture design without human effort, but no existing NAS methods incorporate the structure information of input images. Third, reID is essentially a retrieval task but current NAS algorithms are merely designed for classification. To solve these problems, we propose a retrieval-based search algorithm over a specifically designed reID search space, named Auto-ReID. Our Auto-ReID enables the automated approach to find an efficient and effective CNN architecture for reID. Extensive experiments demonstrate that the searched architecture achieves state-of-the-art performance while reducing 50% parameters and 53% FLOPs compared to others.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Neural Architecture Search", "Person Re-Identification"], "method": ["Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "ResNet", "VGG", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Sigmoid Activation", "Softmax", "LSTM", "Dropout", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CUHK03 detected", "DukeMTMC-reID", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Auto-ReID: Searching for a Part-aware ConvNet for Person Re-Identification"} {"abstract": "We present a simple and general framework for feature learning from point\nclouds. The key to the success of CNNs is the convolution operator that is\ncapable of leveraging spatially-local correlation in data represented densely\nin grids (e.g. images). However, point clouds are irregular and unordered, thus\ndirectly convolving kernels against features associated with the points, will\nresult in desertion of shape information and variance to point ordering. To\naddress these problems, we propose to learn an $\\mathcal{X}$-transformation\nfrom the input points, to simultaneously promote two causes. 
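(Sketch for the AdaBins entry above.) The final step described in the abstract, depth as a linear combination of adaptively estimated bin centers weighted by per-pixel softmax scores; the bin placement, bin count and logits below are placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depth_from_bins(bin_logits, bin_centers):
    """Per-pixel depth = sum_k softmax(logits)_k * center_k."""
    probs = softmax(bin_logits, axis=-1)                # (H, W, K)
    return (probs * bin_centers).sum(axis=-1)           # (H, W)

rng = np.random.default_rng(0)
H, W, K = 4, 4, 8
bin_centers = np.linspace(0.5, 10.0, K)                 # predicted adaptively in the paper
bin_logits = rng.normal(size=(H, W, K))
print(depth_from_bins(bin_logits, bin_centers).shape)   # (4, 4)
```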
The first is the\nweighting of the input features associated with the points, and the second is\nthe permutation of the points into a latent and potentially canonical order.\nElement-wise product and sum operations of the typical convolution operator are\nsubsequently applied on the $\\mathcal{X}$-transformed features. The proposed\nmethod is a generalization of typical CNNs to feature learning from point\nclouds, thus we call it PointCNN. Experiments show that PointCNN achieves on\npar or better performance than state-of-the-art methods on multiple challenging\nbenchmark datasets and tasks.", "field": ["Convolutions"], "task": ["3D Instance Segmentation", "3D Part Segmentation"], "method": ["Convolution"], "dataset": ["ShapeNet-Part", "S3DIS"], "metric": ["Class Average IoU", "mAcc", "Instance Average IoU", "mIoU"], "title": "PointCNN: Convolution On $\\mathcal{X}$-Transformed Points"} {"abstract": "In recent years, single image super-resolution (SISR) methods using deep convolution neural network (CNN) have achieved impressive results. Thanks to the powerful representation capabilities of the deep networks, numerous previous ways can learn the complex non-linear mapping between low-resolution (LR) image patches and their high-resolution (HR) versions. However, excessive convolutions will limit the application of super-resolution technology in low computing power devices. Besides, super-resolution of any arbitrary scale factor is a critical issue in practical applications, which has not been well solved in the previous approaches. To address these issues, we propose a lightweight information multi-distillation network (IMDN) by constructing the cascaded information multi-distillation blocks (IMDB), which contains distillation and selective fusion parts. Specifically, the distillation module extracts hierarchical features step-by-step, and fusion module aggregates them according to the importance of candidate features, which is evaluated by the proposed contrast-aware channel attention mechanism. To process real images with any sizes, we develop an adaptive cropping strategy (ACS) to super-resolve block-wise image patches using the same well-trained model. Extensive experiments suggest that the proposed method performs favorably against the state-of-the-art SR algorithms in term of visual quality, memory footprint, and inference time. Code is available at \\url{https://github.com/Zheng222/IMDN}.", "field": ["Convolutions"], "task": ["Image Super-Resolution", "Super-Resolution"], "method": ["Convolution"], "dataset": ["Set5 - 3x upscaling", "Set14 - 2x upscaling", "Set14 - 4x upscaling", "Manga109 - 3x upscaling", "BSD100 - 2x upscaling", "Manga109 - 4x upscaling", "Set14 - 3x upscaling", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "Urban100 - 3x upscaling", "BSD100 - 4x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling"], "metric": ["PSNR"], "title": "Lightweight Image Super-Resolution with Information Multi-distillation Network"} {"abstract": "The emotion cause extraction (ECE) task aims at discovering the potential causes behind a certain emotion expression in a document. Techniques including rule-based methods, traditional machine learning methods and deep neural networks have been proposed to solve this task. However, most of the previous work considered ECE as a set of independent clause classification problems and ignored the relations between multiple clauses in a document. 
In this work, we propose a joint emotion cause extraction framework, named RNN-Transformer Hierarchical Network (RTHN), to encode and classify multiple clauses synchronously. RTHN is composed of a lower word-level encoder based on RNNs to encode multiple words in each clause, and an upper clause-level encoder based on Transformer to learn the correlation between multiple clauses in a document. We furthermore propose ways to encode the relative position and global predication information into Transformer that can capture the causality between clauses and make RTHN more efficient. We finally achieve the best performance among 12 compared systems and improve the F1 score of the state-of-the-art from 72.69\\% to 76.77\\%.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Cause Extraction"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["ECE"], "metric": ["F1"], "title": "RTHN: A RNN-Transformer Hierarchical Network for Emotion Cause Extraction"} {"abstract": "Distance metric learning (DML) is to learn the embeddings where examples from the same class are closer than examples from different classes. It can be cast as an optimization problem with triplet constraints. Due to the vast number of triplet constraints, a sampling strategy is essential for DML. With the tremendous success of deep learning in classifications, it has been applied for DML. When learning embeddings with deep neural networks (DNNs), only a mini-batch of data is available at each iteration. The set of triplet constraints has to be sampled within the mini-batch. Since a mini-batch cannot capture the neighbors in the original set well, it makes the learned embeddings sub-optimal. On the contrary, optimizing SoftMax loss, which is a classification loss, with DNN shows a superior performance in certain DML tasks. It inspires us to investigate the formulation of SoftMax. Our analysis shows that SoftMax loss is equivalent to a smoothed triplet loss where each class has a single center. In real-world data, one class can contain several local clusters rather than a single one, e.g., birds of different poses. Therefore, we propose the SoftTriple loss to extend the SoftMax loss with multiple centers for each class. Compared with conventional deep metric learning algorithms, optimizing SoftTriple loss can learn the embeddings without the sampling phase by mildly increasing the size of the last fully connected layer. Experiments on the benchmark fine-grained data sets demonstrate the effectiveness of the proposed loss function. Code is available at https://github.com/idstcv/SoftTriple", "field": ["Loss Functions", "Output Functions"], "task": ["Image Retrieval", "Metric Learning"], "method": ["Softmax", "Triplet Loss"], "dataset": [" CUB-200-2011", "CARS196"], "metric": ["R@1"], "title": "SoftTriple Loss: Deep Metric Learning Without Triplet Sampling"} {"abstract": "Techniques for automatically designing deep neural network architectures such\nas reinforcement learning based approaches have recently shown promising\nresults. 
However, their success is based on vast computational resources (e.g.\nhundreds of GPUs), making them difficult to be widely used. A noticeable\nlimitation is that they still design and train each network from scratch during\nthe exploration of the architecture space, which is highly inefficient. In this\npaper, we propose a new framework toward efficient architecture search by\nexploring the architecture space based on the current network and reusing its\nweights. We employ a reinforcement learning agent as the meta-controller, whose\naction is to grow the network depth or layer width with function-preserving\ntransformations. As such, the previously validated networks can be reused for\nfurther exploration, thus saves a large amount of computational cost. We apply\nour method to explore the architecture space of the plain convolutional neural\nnetworks (no skip-connections, branching etc.) on image benchmark datasets\n(CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method\ncan design highly competitive networks that outperform existing networks using\nthe same design scheme. On CIFAR-10, our model without skip-connections\nachieves 4.23\\% test error rate, exceeding a vast majority of modern\narchitectures and approaching DenseNet. Furthermore, by applying our method to\nexplore the DenseNet architecture space, we are able to achieve more accurate\nnetworks with fewer parameters.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Dense Block", "Average Pooling", "Softmax", "Concatenated Skip Connection", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Dropout", "DenseNet", "Kaiming Initialization", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "Efficient Architecture Search by Network Transformation"} {"abstract": "We introduce YOLO9000, a state-of-the-art, real-time object detection system\nthat can detect over 9000 object categories. First we propose various\nimprovements to the YOLO detection method, both novel and drawn from prior\nwork. The improved model, YOLOv2, is state-of-the-art on standard detection\ntasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At\n40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like\nFaster RCNN with ResNet and SSD while still running significantly faster.\nFinally we propose a method to jointly train on object detection and\nclassification. Using this method we train YOLO9000 simultaneously on the COCO\ndetection dataset and the ImageNet classification dataset. Our joint training\nallows YOLO9000 to predict detections for object classes that don't have\nlabelled detection data. We validate our approach on the ImageNet detection\ntask. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite\nonly having detection data for 44 of the 200 classes. On the 156 classes not in\nCOCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes;\nit predicts detections for more than 9000 different object categories. 
And it\nstill runs in real-time.", "field": ["Image Data Augmentation", "Output Functions", "Convolutional Neural Networks", "Proposal Filtering", "Learning Rate Schedules", "Stochastic Optimization", "Regularization", "Object Detection Models"], "task": ["Object Detection", "Real-Time Object Detection"], "method": ["Weight Decay", "Darknet-19", "Color Jitter", "Polynomial Rate Decay", "SGD with Momentum", "Softmax", "Non Maximum Suppression", "Random Resized Crop", "SSD", "YOLOv2", "ColorJitter", "Step Decay"], "dataset": ["PASCAL VOC 2007", "COCO test-dev", "CARPK", "SKU-110K"], "metric": ["RMSE", "MAP", "MAE", "box AP", "AP75", "AP"], "title": "YOLO9000: Better, Faster, Stronger"} {"abstract": "Learning discriminative features for Facial Expression Recognition (FER) in the wild using Convolutional Neural Networks (CNNs) is a non-trivial task due to the significant intra-class variations and inter-class similarities. Deep Metric Learning (DML) approaches such as center loss and its variants jointly optimized with softmax loss have been adopted in many FER methods to enhance the discriminative power of learned features in the embedding space. However, equally supervising all features with the metric learning method might include irrelevant features and ultimately degrade the generalization ability of the learning algorithm. We propose a Deep Attentive Center Loss (DACL) method to adaptively select a subset of significant feature elements for enhanced discrimination. The proposed DACL integrates an attention mechanism to estimate attention weights correlated with feature importance using the intermediate spatial feature maps in CNN as context. The estimated weights accommodate the sparse formulation of center loss to selectively achieve intra-class compactness and inter-class separation for the relevant information in the embedding space. An extensive study on two widely used wild FER datasets demonstrates the superiority of the proposed DACL method compared to state-of-the-art methods.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Facial Expression Recognition", "Feature Importance", "Metric Learning"], "method": ["ResNet", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["RAF-DB"], "metric": ["Overall Accuracy", "Avg. Accuracy"], "title": "Facial Expression Recognition in the Wild via Deep Attentive Center Loss"} {"abstract": "Reinforcement learning (RL) enables robots to learn skills from interactions with the real world. In practice, the unstructured step-based exploration used in Deep RL -- often very successful in simulation -- leads to jerky motion patterns on real robots. Consequences of the resulting shaky behavior are poor exploration, or even damage to the robot. We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms. To enable this adaptation, we propose three extensions to the original SDE, which leads to a new exploration method generalized state-dependent exploration (gSDE). We evaluate gSDE both in simulation, on PyBullet continuous control tasks, and directly on a tendon-driven elastic robot. 
gSDE yields competitive results in simulation but outperforms the unstructured exploration on the real robot. The code is available at https://github.com/DLR-RM/stable-baselines3/tree/sde.", "field": ["Policy Gradient Methods", "Regularization", "Off-Policy TD Control", "Stochastic Optimization", "Activation Functions", "Feedforward Networks", "Replay Memory"], "task": ["Continuous Control"], "method": ["Target Policy Smoothing", "A2C", "Adam", "Twin Delayed Deep Deterministic", "Clipped Double Q-learning", "TD3", "Soft Actor Critic", "Entropy Regularization", "Rectified Linear Units", "ReLU", "Experience Replay", "PPO", "Proximal Policy Optimization", "Dense Connections"], "dataset": ["PyBullet Ant", "PyBullet Walker2D", "PyBullet HalfCheetah", "PyBullet Hopper"], "metric": ["Return"], "title": "Generalized State-Dependent Exploration for Deep Reinforcement Learning in Robotics"} {"abstract": "We introduce a new local sparse attention layer that preserves two-dimensional geometry and locality. We show that by just replacing the dense attention layer of SAGAN with our construction, we obtain very significant FID, Inception score and pure visual improvements. FID score is improved from $18.65$ to $15.94$ on ImageNet, keeping all other parameters the same. The sparse attention patterns that we propose for our new layer are designed using a novel information theoretic criterion that uses information flow graphs. We also present a novel way to invert Generative Adversarial Networks with attention. Our method extracts from the attention layer of the discriminator a saliency map, which we use to construct a new loss function for the inversion. This allows us to visualize the newly introduced attention heads and show that they indeed capture interesting aspects of two-dimensional geometry of real images.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Loss Functions", "Normalization", "Convolutions", "Attention Mechanisms", "Generative Adversarial Networks"], "task": ["Conditional Image Generation", "Deep Attention", "Image Generation"], "method": ["Spectral Normalization", "Softmax", "Adam", "Self-Attention GAN", "SAGAN Self-Attention Module", "GAN Hinge Loss", "1x1 Convolution", "Dot-Product Attention", "Convolution", "Batch Normalization", "SAGAN"], "dataset": ["ImageNet 128x128"], "metric": ["Inception score", "FID"], "title": "Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models"} {"abstract": "Scene text in the wild is commonly presented with high variant characteristics. Using quadrilateral bounding box to localize the text instance is nearly indispensable for detection methods. However, recent researches reveal that introducing quadrilateral bounding box for scene text detection will bring a label confusion issue which is easily overlooked, and this issue may significantly undermine the detection performance. To address this issue, in this paper, we propose a novel method called Sequential-free Box Discretization (SBD) by discretizing the bounding box into key edges (KE) which can further derive more effective methods to improve detection performance. Experiments showed that the proposed method can outperform state-of-the-art methods in many popular scene text benchmarks, including ICDAR 2015, MLT, and MSRA-TD500. Ablation study also showed that simply integrating the SBD into Mask R-CNN framework, the detection performance can be substantially improved. 
Furthermore, an experiment on the general object dataset HRSC2016 (multi-oriented ships) showed that our method can outperform recent state-of-the-art methods by a large margin, demonstrating its powerful generalization ability. Source code: https://github.com/Yuliang-Liu/Box_Discretization_Network.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Backbone Architectures", "Instance Segmentation Models"], "task": ["Scene Text", "Scene Text Detection"], "method": ["Softmax", "Convolution", "Spatial Broadcast Decoder", "RoIAlign", "Mask R-CNN"], "dataset": ["IC19-ReCTs"], "metric": ["F-Measure"], "title": "Omnidirectional Scene Text Detection with Sequential-free Box Discretization"} {"abstract": "We propose a distributed architecture for deep reinforcement learning at\nscale, that enables agents to learn effectively from orders of magnitude more\ndata than previously possible. The algorithm decouples acting from learning:\nthe actors interact with their own instances of the environment by selecting\nactions according to a shared neural network, and accumulate the resulting\nexperience in a shared experience replay memory; the learner replays samples of\nexperience and updates the neural network. The architecture relies on\nprioritized experience replay to focus only on the most significant data\ngenerated by the actors. Our architecture substantially improves the state of\nthe art on the Arcade Learning Environment, achieving better final performance\nin a fraction of the wall-clock training time.", "field": ["Q-Learning Networks", "Policy Gradient Methods", "Regularization", "Off-Policy TD Control", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Replay Memory", "Distributed Reinforcement Learning", "Value Function Estimation"], "task": ["Atari Games"], "method": ["Weight Decay", "Deep Deterministic Policy Gradient", "Ape-X", "Double Q-learning", "RMSProp", "Adam", "Dueling Network", "Prioritized Experience Replay", "N-step Returns", "Batch Normalization", "Convolution", "DDPG", "ReLU", "Experience Replay", "Dense Connections", "Rectified Linear Units", "Ape-X DPG", "Ape-X DQN"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "Distributed Prioritized Experience Replay"} {"abstract": "Recent advancements in deep learning have significantly increased the capabilities of face recognition. However, face recognition in an unconstrained environment is still an active research challenge. Covariates such as pose and low resolution have received significant attention, but \u201cdisguise\u201d is considered an onerous covariate of face recognition. One primary reason for this is the unavailability of large and representative databases. To address the problem of recognizing disguised faces, we propose an active learning framework A-LINK, that intelligently selects training samples from the target domain data, such that the decision boundary does not overfit to a particular set of variations, and better generalizes to encode variability. The framework further applies domain adaptation with the actively selected training samples to fine-tune the network. We demonstrate the effectiveness of the proposed framework on DFW and Multi-PIE datasets with state-of-the-art models such as LCSSE and DenseNet.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Active Learning", "Domain Adaptation", "Face Recognition", "Heterogeneous Face Recognition"], "method": ["Dense Block", "Average Pooling", "Softmax", "Concatenated Skip Connection", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Dropout", "DenseNet", "Kaiming Initialization", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Disguised Faces in the Wild", "CMU-MPIE"], "metric": ["GAR @1% FAR Impersonation", "16x16 Accuracy", "GAR @0.1% FAR Obfuscation", "24x24 Accuracy", "48x48 Accuracy", "GAR @0.1% FAR Overall", "GAR @0.1% FAR Impersonation", "GAR @1% FAR Obfuscation", "32x32 Accuracy", "GAR @1% FAR Overall"], "title": "A-LINK: Recognizing Disguised Faces via Active Learning based Inter-Domain Knowledge"} {"abstract": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on theCoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10{\\%} relative error reduction over the previous state of the art. 
Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Predicate Detection", "Semantic Role Labeling"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["CoNLL 2005", "OntoNotes"], "metric": ["F1"], "title": "Deep Semantic Role Labeling: What Works and What's Next"} {"abstract": "Heatmap representations have formed the basis of human pose estimation systems for many years, and their extension to 3D has been a fruitful line of recent research. This includes 2.5D volumetric heatmaps, whose X and Y axes correspond to image space and Z to metric depth around the subject. To obtain metric-scale predictions, 2.5D methods need a separate post-processing step to resolve scale ambiguity. Further, they cannot localize body joints outside the image boundaries, leading to incomplete estimates for truncated images. To address these limitations, we propose metric-scale truncation-robust (MeTRo) volumetric heatmaps, whose dimensions are all defined in metric 3D space, instead of being aligned with image space. This reinterpretation of heatmap dimensions allows us to directly estimate complete, metric-scale poses without test-time knowledge of distance or relying on anthropometric heuristics, such as bone lengths. To further demonstrate the utility our representation, we present a differentiable combination of our 3D metric-scale heatmaps with 2D image-space ones to estimate absolute 3D pose (our MeTRAbs architecture). We find that supervision via absolute pose loss is crucial for accurate non-root-relative localization. Using a ResNet-50 backbone without further learned layers, we obtain state-of-the-art results on Human3.6M, MPI-INF-3DHP and MuPoTS-3D. Our code will be made publicly available to facilitate further research.", "field": ["Graph Embeddings", "Output Functions"], "task": ["3D Absolute Human Pose Estimation", "3D Human Pose Estimation", "Pose Estimation"], "method": ["LINE", "Heatmap", "Large-scale Information Network Embedding"], "dataset": ["3D Poses in the Wild Challenge"], "metric": ["MPJPE"], "title": "MeTRAbs: Metric-Scale Truncation-Robust Heatmaps for Absolute 3D Human Pose Estimation"} {"abstract": "While supervised learning has enabled great progress in many applications,\nunsupervised learning has not seen such widespread adoption, and remains an\nimportant and challenging endeavor for artificial intelligence. In this work,\nwe propose a universal unsupervised learning approach to extract useful\nrepresentations from high-dimensional data, which we call Contrastive\nPredictive Coding. The key insight of our model is to learn such\nrepresentations by predicting the future in latent space by using powerful\nautoregressive models. We use a probabilistic contrastive loss which induces\nthe latent space to capture information that is maximally useful to predict\nfuture samples. 
It also makes the model tractable by using negative sampling.\nWhile most prior work has focused on evaluating representations for a\nparticular modality, we demonstrate that our approach is able to learn useful\nrepresentations achieving strong performance on four distinct domains: speech,\nimages, text and reinforcement learning in 3D environments.", "field": ["Self-Supervised Learning", "Policy Gradient Methods", "Initialization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Loss Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Representation Learning", "Self-Supervised Image Classification", "Semi-Supervised Image Classification"], "method": ["Gated Recurrent Unit", "InfoNCE", "A2C", "Average Pooling", "RMSProp", "Long Short-Term Memory", "Adam", "Tanh Activation", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "PixelCNN", "Residual Network", "GRU", "Kaiming Initialization", "Step Decay", "Sigmoid Activation", "Contrastive Predictive Coding", "SGD with Momentum", "LSTM", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "ImageNet - 1% labeled data", "ImageNet - 10% labeled data"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Representation Learning with Contrastive Predictive Coding"} {"abstract": "There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. 
Source code is at https://github.com/AlexeyAB/darknet", "field": ["Proposal Filtering", "Convolutional Neural Networks", "Feature Extractors", "Normalization", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Region Proposal", "Image Models", "Generalized Linear Models", "Stochastic Optimization", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Clustering", "Skip Connections", "Image Model Blocks"], "task": ["Data Augmentation", "Object Detection", "Real-Time Object Detection"], "method": ["Depthwise Convolution", "Weight Decay", "Dilated Convolution", "Cosine Annealing", "Average Pooling", "Polynomial Rate Decay", "RMSProp", "EfficientNet", "RFB", "Mixup", "Tanh Activation", "Bottom-up Path Augmentation", "1x1 Convolution", "CSPResNeXt Block", "RoIAlign", "Softplus", "PAFPN", "BiFPN", "Region Proposal Network", "Mish", "Convolution", "CutMix", "ReLU", "Adaptive Feature Pooling", "Residual Connection", "Receptive Field Block", "FPN", "Leaky ReLU", "Dense Connections", "RPN", "YOLOv3", "Swish", "Grouped Convolution", "CSPResNeXt", "Spatial Attention Module", "Batch Normalization", "Label Smoothing", "Pointwise Convolution", "DIoU-NMS", "Sigmoid Activation", "Logistic Regression", "k-Means Clustering", "DropBlock", "ResNeXt Block", "SGD with Momentum", "CSPDarknet53", "Inverted Residual Block", "Softmax", "Feature Pyramid Network", "Concatenated Skip Connection", "YOLOv4", "Dropout", "Depthwise Separable Convolution", "Darknet-53", "Linear Warmup", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["COCO", "COCO test-dev", "CrowdHuman (full body)"], "metric": ["APM", "FPS", "MAP", "inference time (ms)", "box AP", "AP 0.5", "AP75", "APS", "APL", "AP50"], "title": "YOLOv4: Optimal Speed and Accuracy of Object Detection"} {"abstract": "Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration ii) elastic regularization inspired by continual learning. In addition, we conduct a thorough investigation of Imagenet-pretrained features for one-class anomaly detection. Our method, PANDA, outperforms the state-of-the-art in the one-class and outlier exposure settings (CIFAR10: 96.2% vs. 90.1% and 98.9% vs. 95.6%).", "field": ["Regularization"], "task": ["Anomaly Detection", "Continual Learning"], "method": ["Early Stopping"], "dataset": ["DIOR", "CIFAR-100", "CIFAR-10", "Cats-and-Dogs", "Fashion-MNIST"], "metric": ["ROC AUC"], "title": "PANDA -- Adapting Pretrained Features for Anomaly Detection"} {"abstract": "Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). 
This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82\\% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["NAS-Bench-201, ImageNet-16-120", "NAS-Bench-201, CIFAR-100", "NAS-Bench-201, CIFAR-10", "CIFAR-10"], "metric": ["Accuracy (Test)", "Search Time (GPU days)", "Accuracy (Val)", "Top-1 Error Rate", "Accuracy (val)", "Search time (s)"], "title": "Searching for A Robust Neural Architecture in Four GPU Hours"} {"abstract": "This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec\u2019s compact and efficiently trainable model outperforms stateof-the-art CF techniques (biased matrix factorization, RBMCF and LLORMA) on the Movielens and Netflix datasets.", "field": ["Generative Models"], "task": ["Recommendation Systems"], "method": ["AutoEncoder"], "dataset": ["MovieLens 1M", "MovieLens 10M"], "metric": ["RMSE"], "title": "AutoRec: Autoencoders Meet Collaborative Filtering"} {"abstract": "We introduce MGP-VAE (Multi-disentangled-features Gaussian Processes Variational AutoEncoder), a variational autoencoder which uses Gaussian processes (GP) to model the latent space for the unsupervised learning of disentangled representations in video sequences. We improve upon previous work by establishing a framework by which multiple features, static or dynamic, can be disentangled. Specifically we use fractional Brownian motions (fBM) and Brownian bridges (BB) to enforce an inter-frame correlation structure in each independent channel, and show that varying this structure enables one to capture different factors of variation in the data. We demonstrate the quality of our representations with experiments on three publicly available datasets, and also quantify the improvement using a video prediction task. Moreover, we introduce a novel geodesic loss function which takes into account the curvature of the data manifold to improve learning. Our experiments show that the combination of the improved representations with the novel loss function enable MGP-VAE to outperform the baselines in video prediction.", "field": ["Generative Models"], "task": ["Gaussian Processes", "Video Prediction"], "method": ["AutoEncoder"], "dataset": ["Colored dSprites", "Sprites"], "metric": ["MSE"], "title": "Disentangling Multiple Features in Video Sequences using Gaussian Processes in Variational Autoencoders"} {"abstract": "Feature pyramid has been an efficient method to extract features at different scales. Development over this method mainly focuses on aggregating contextual information at different levels while seldom touching the inter-level correlation in the feature pyramid. 
Early computer vision methods extracted scale-invariant features by locating the feature extrema in both spatial and scale dimension. Inspired by this, a convolution across the pyramid level is proposed in this study, which is termed pyramid convolution and is a modified 3-D convolution. Stacked pyramid convolutions directly extract 3-D (scale and spatial) features and outperforms other meticulously designed feature fusion modules. Based on the viewpoint of 3-D convolution, an integrated batch normalization that collects statistics from the whole feature pyramid is naturally inserted after the pyramid convolution. Furthermore, we also show that the naive pyramid convolution, together with the design of RetinaNet head, actually best applies for extracting features from a Gaussian pyramid, whose properties can hardly be satisfied by a feature pyramid. In order to alleviate this discrepancy, we build a scale-equalizing pyramid convolution (SEPC) that aligns the shared pyramid convolution kernel only at high-level feature maps. Being computationally efficient and compatible with the head design of most single-stage object detectors, the SEPC module brings significant performance improvement ($>4$AP increase on MS-COCO2017 dataset) in state-of-the-art one-stage object detectors, and a light version of SEPC also has $\\sim3.5$AP gain with only around 7% inference time increase. The pyramid convolution also functions well as a stand-alone module in two-stage object detectors and is able to improve the performance by $\\sim2$AP. The source code can be found at https://github.com/jshilong/SEPC.", "field": ["Feature Extractors", "Loss Functions", "Normalization", "Convolutions", "Object Detection Models"], "task": ["Object Detection"], "method": ["Focal Loss", "Feature Pyramid Network", "Convolution", "Batch Normalization", "1x1 Convolution", "FPN", "RetinaNet"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Scale-Equalizing Pyramid Convolution for Object Detection"} {"abstract": "Dense depth perception is critical for autonomous driving and other robotics applications. However, modern LiDAR sensors only provide sparse depth measurement. It is thus necessary to complete the sparse LiDAR data, where a synchronized guidance RGB image is often used to facilitate this completion. Many neural networks have been designed for this task. However, they often na\\\"{\\i}vely fuse the LiDAR data and RGB image information by performing feature concatenation or element-wise addition. Inspired by the guided image filtering, we design a novel guided network to predict kernel weights from the guidance image. These predicted kernels are then applied to extract the depth image features. In this way, our network generates content-dependent and spatially-variant kernels for multi-modal feature fusion. Dynamically generated spatially-variant kernels could lead to prohibitive GPU memory consumption and computation overhead. We further design a convolution factorization to reduce computation and memory consumption. The GPU memory reduction makes it possible for feature fusion to work in multi-stage scheme. We conduct comprehensive experiments to verify our method on real-world outdoor, indoor and synthetic datasets. Our method produces strong results. It outperforms state-of-the-art methods on the NYUv2 dataset and ranks 1st on the KITTI depth completion benchmark at the time of submission. 
It also presents strong generalization capability under different 3D point densities, various lighting and weather conditions as well as cross-dataset evaluations. The code will be released for reproduction.", "field": ["Convolutions"], "task": ["Autonomous Driving", "Depth Completion"], "method": ["Convolution"], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "Learning Guided Convolutional Network for Depth Completion"} {"abstract": "We present lambda layers -- an alternative framework to self-attention -- for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Similar to linear attention, lambda layers bypass expensive attention maps, but in contrast, they model both content and position-based interactions which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and COCO instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales, that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x faster than the popular EfficientNets on modern machine learning accelerators. When training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Long-Range Interaction Layers", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["ResNet", "Lambda Layer", "Sigmoid Activation", "Batch Normalization", "Convolution", "1x1 Convolution", "Residual Network", "ReLU", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy"], "title": "LambdaNetworks: Modeling Long-Range Interactions Without Attention"} {"abstract": "Can we automatically design a Convolutional Network (ConvNet) with the\nhighest image classification accuracy under the runtime constraint of a mobile\ndevice? Neural architecture search (NAS) has revolutionized the design of\nhardware-efficient ConvNets by automating this process. However, the NAS\nproblem remains challenging due to the combinatorially large design space,\ncausing a significant searching time (at least 200 GPU-hours). To alleviate\nthis complexity, we propose Single-Path NAS, a novel differentiable NAS method\nfor designing hardware-efficient ConvNets in less than 4 hours. Our\ncontributions are as follows: 1. 
Single-path search space: Compared to previous\ndifferentiable NAS methods, Single-Path NAS uses one single-path\nover-parameterized ConvNet to encode all architectural decisions with shared\nconvolutional kernel parameters, hence drastically decreasing the number of\ntrainable parameters and the search cost down to few epochs. 2.\nHardware-efficient ImageNet classification: Single-Path NAS achieves 74.96%\ntop-1 accuracy on ImageNet with 79ms latency on a Pixel 1 phone, which is\nstate-of-the-art accuracy compared to NAS methods with similar constraints\n(<80ms). 3. NAS efficiency: Single-Path NAS search cost is only 8 epochs (30\nTPU-hours), which is up to 5,000x faster compared to prior work. 4.\nReproducibility: Unlike all recent mobile-efficient NAS methods which only\nrelease pretrained models, we open-source our entire codebase at:\nhttps://github.com/dstamoulis/single-path-nas.", "field": ["Output Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connection Blocks"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Depthwise Convolution", "Average Pooling", "Inverted Residual Block", "Softmax", "Convolution", "Batch Normalization", "1x1 Convolution", "Depthwise Separable Convolution", "Pointwise Convolution", "Single-path NAS", "Global Average Pooling", "Dense Connections"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours"} {"abstract": "Graph Convolutional Networks (GCNs) have drawn significant attention and become promising methods for learning graph representations. The most GCNs suffer the performance loss when the depth of the model increases. Similarly to CNNs, without specially designed architectures, the performance of a network degrades quickly. Some researchers argue that the required neighbourhood size and neural network depth are two completely orthogonal aspects of graph representation. Thus, several methods extend the neighbourhood by aggregating k-hop neighbourhoods of nodes while using shallow neural networks. However, these methods still encounter oversmoothing, high computation and storage costs. In this paper, we use the Markov diffusion kernel to derive a variant of GCN called Simple Spectral Graph Convolution (S^2GC) which is closely related to spectral models and combines strengths of both spatial and spectral methods. Our spectral analysis shows that our simple spectral graph convolution used in S^2GC is a low-pass filter which partitions networks into a few large parts. Our experimental evaluation demonstrates that S^2GC with a linear learner is competitive in text and node classification tasks. Moreover, S^2GC is comparable to other state-of-the-art methods for node clustering and community prediction tasks. ", "field": ["Convolutions", "Graph Models"], "task": ["Node Classification", "Node Clustering", "Text Classification"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["R8", "20NEWS", "Reddit", "MR", "Cora: fixed 20 node per class", "R52", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class", "Ohsumed"], "metric": ["Accuracy"], "title": "Simple Spectral Graph Convolution"} {"abstract": "The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. 
These methods are limited in the sense that the patches are determined by network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained. We apply the proposed networks on biological image segmentation tasks and show superior performance is achieved in comparison to baseline methods.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Electron Microscopy", "Electron Microscopy Image Segmentation", "Semantic Segmentation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["SNEMI3D"], "metric": ["AUC"], "title": "Dense Transformer Networks for Brain Electron Microscopy Image Segmentation"} {"abstract": "We propose a new approach for 3D instance segmentation based on sparse\nconvolution and point affinity prediction, which indicates the likelihood of\ntwo points belonging to the same instance. The proposed network, built upon\nsubmanifold sparse convolution [3], processes a voxelized point cloud and\npredicts semantic scores for each occupied voxel as well as the affinity\nbetween neighboring voxels at different scales. A simple yet effective\nclustering algorithm segments points into instances based on the predicted\naffinity and the mesh topology. The semantic for each instance is determined by\nthe semantic prediction. Experiments show that our method outperforms the\nstate-of-the-art instance segmentation methods by a large margin on the widely\nused ScanNet benchmark [2]. We share our code publicly at\nhttps://github.com/art-programmer/MASC.", "field": ["Convolutions"], "task": ["3D Instance Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["ScanNet(v2)", "ScanNet"], "metric": ["mAP", "Mean AP @ 0.5"], "title": "MASC: Multi-scale Affinity with Sparse Convolution for 3D Instance Segmentation"} {"abstract": "We introduce the variational graph auto-encoder (VGAE), a framework for\nunsupervised learning on graph-structured data based on the variational\nauto-encoder (VAE). This model makes use of latent variables and is capable of\nlearning interpretable latent representations for undirected graphs. We\ndemonstrate this model using a graph convolutional network (GCN) encoder and a\nsimple inner product decoder. Our model achieves competitive results on a link\nprediction task in citation networks. 
In contrast to most existing models for\nunsupervised learning on graph-structured data and link prediction, our model\ncan naturally incorporate node features, which significantly improves\npredictive performance on a number of benchmark datasets.", "field": ["Graph Embeddings"], "task": ["Graph Clustering", "Link Prediction"], "method": ["Variational Graph Auto Encoder", "VGAE"], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Variational Graph Auto-Encoders"} {"abstract": "Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks, but these graphs are usually incomplete, urging auto-completion of them. Prevalent graph embedding approaches, e.g., TransE, learn structured knowledge via representing graph elements into dense embeddings and capturing their triple-level relationship with spatial distance. However, they are hardly generalizable to the elements never visited in training and are intrinsically vulnerable to graph incompleteness. In contrast, textual encoding approaches, e.g., KG-BERT, resort to graph triple's text and triple-level contextualized representations. They are generalizable enough and robust to the incompleteness, especially when coupled with pre-trained encoders. But two major drawbacks limit the performance: (1) high overheads due to the costly scoring of all possible triples in inference, and (2) a lack of structured knowledge in the textual encoder. In this paper, we follow the textual encoding paradigm and aim to alleviate its drawbacks by augmenting it with graph embedding techniques -- a complementary hybrid of both paradigms. Specifically, we partition each triple into two asymmetric parts as in translation-based graph embedding approach, and encode both parts into contextualized representations by a Siamese-style textual encoder. Built upon the representations, our model employs both deterministic classifier and spatial measurement for representation and structure learning respectively. Moreover, we develop a self-adaptive ensemble scheme to further improve the performance by incorporating triple scores from an existing graph embedding model. In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Graph Embeddings", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Graph Embedding", "Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction", "Representation Learning"], "method": ["TransE", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion"} {"abstract": "Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and affects the lives of people negatively like psychological agony, fighting behavior, and disrespect to others. 
The majority of such conversations contain code-mixed languages [28]. Additionally, the way used to express thought or communication style also changes from one social media platform to another (e.g., communication styles are different in Twitter and Facebook). These all have increased the complexity of the problem. To solve these problems, we have introduced a unified and robust multi-modal deep learning architecture which works for both the English code-mixed dataset and the uni-lingual English dataset. The devised system uses psycho-linguistic features and very basic linguistic features. Our multi-modal deep learning architecture contains Deep Pyramid CNN, Pooled BiLSTM, and Disconnected RNN (with both GloVe and FastText embeddings). Finally, the system makes its decision based on model averaging. We evaluated our system on the English code-mixed TRAC 2018 dataset and a uni-lingual English dataset obtained from Kaggle. Experimental results show that our proposed system outperforms all previous approaches on both the English code-mixed dataset and the uni-lingual English dataset.", "field": ["Recurrent Neural Networks", "Activation Functions", "Word Embeddings", "Bidirectional Recurrent Neural Networks"], "task": ["Aggression Identification", "Cross-Lingual Transfer", "Domain Adaptation", "Style Generalization", "Text Classification"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "GloVe Embeddings", "Bidirectional LSTM", "LSTM", "GloVe", "Sigmoid Activation", "fastText"], "dataset": ["Twitter-US", "Facebook Media"], "metric": ["F1 (Hidden Test Set)"], "title": "A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts"} {"abstract": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years, several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. 
We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Image Model Blocks"], "task": ["Optical Character Recognition", "Scene Text", "Scene Text Detection", "Scene Text Recognition"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "Spatial Transformer", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["FSNS - Test"], "metric": ["Sequence error"], "title": "SEE: Towards Semi-SupervisedEnd-to-End Scene Text Recognition"} {"abstract": "Natural language inference (NLI) is known as one of the central tasks in natural language processing (NLP) which encapsulates many fundamental aspects of language understanding. With the considerable achievements of data-hungry deep learning methods in NLP tasks, a great amount of effort has been devoted to develop more diverse datasets for different languages. In this paper, we present a new dataset for the NLI task in the Persian language, also known as Farsi, which is one of the dominant languages in the Middle East. This dataset, named FarsTail, includes 10,367 samples which are provided in both the Persian language as well as the indexed format to be useful for non-Persian researchers. The samples are generated from 3,539 multiple-choice questions with the least amount of annotator interventions in a way similar to the SciTail dataset. A carefully designed multi-step process is adopted to ensure the quality of the dataset. We also present the results of traditional and state-of-the-art methods on FarsTail including different embedding methods such as word2vec, fastText, ELMo, BERT, and LASER, as well as different modeling approaches such as DecompAtt, ESIM, HBMP, ULMFiT, and cross-lingual transfer approach to provide a solid baseline for the future research. The best obtained test accuracy is 78.13% which shows that there is a big room for improving the current methods to be useful for real-world NLP applications in different languages. The dataset is available at https://github.com/dml-qom/FarsTail.", "field": ["Recurrent Neural Networks", "Sequence To Sequence Models", "Word Embeddings", "Language Models", "Bidirectional Recurrent Neural Networks"], "task": ["Cross-Lingual Natural Language Inference", "Cross-Lingual Transfer", "Natural Language Inference"], "method": ["HBMP", "Universal Language Model Fine-tuning", "Hierarchical BiLSTM Max Pooling", "ULMFiT", "Long Short-Term Memory", "fastText", "BiGRU", "CBoW Word2Vec", "ESIM", "BERT", "LSTM", "Skip-gram Word2Vec", "Continuous Bag-of-Words Word2Vec", "Enhanced Sequential Inference Model", "ELMo", "Bidirectional GRU"], "dataset": ["FarsTail"], "metric": ["% Test Accuracy"], "title": "FarsTail: A Persian Natural Language Inference Dataset"} {"abstract": "We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. 
We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Visual Question Answering", "Visual Reasoning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["NLVR", "VCR (Q-A) dev", "VQA v2 test-std", "VCR (Q-A) test", "Flickr30k Entities Test", "VCR (QA-R) dev", "VCR (QA-R) test", "VQA v2 test-dev", "Flickr30k Entities Dev", "NLVR2 Dev", "VCR (Q-AR) test", "VCR (Q-AR) dev"], "metric": ["overall", "R@10", "Accuracy (Test-P)", "R@1", "R@5", "Accuracy (Dev)", "Accuracy", "Accuracy (Test-U)"], "title": "VisualBERT: A Simple and Performant Baseline for Vision and Language"} {"abstract": "This paper addresses deep face recognition (FR) problem under open-set\nprotocol, where ideal face features are expected to have smaller maximal\nintra-class distance than minimal inter-class distance under a suitably chosen\nmetric space. However, few existing algorithms can effectively achieve this\ncriterion. To this end, we propose the angular softmax (A-Softmax) loss that\nenables convolutional neural networks (CNNs) to learn angularly discriminative\nfeatures. Geometrically, A-Softmax loss can be viewed as imposing\ndiscriminative constraints on a hypersphere manifold, which intrinsically\nmatches the prior that faces also lie on a manifold. Moreover, the size of\nangular margin can be quantitatively adjusted by a parameter $m$. We further\nderive specific $m$ to approximate the ideal feature criterion. Extensive\nanalysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF)\nand MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The\ncode has also been made publicly available.", "field": ["Output Functions"], "task": ["Face Identification", "Face Recognition", "Face Verification"], "method": ["Softmax"], "dataset": ["MegaFace", "CK+", "YouTube Faces DB", "Trillion Pairs Dataset", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "SphereFace: Deep Hypersphere Embedding for Face Recognition"} {"abstract": "This paper presents a study showing the benefits of the EfficientNet models compared with heavier Convolutional Neural Networks (CNNs) in the Document Classification task, essential problem in the digitalization process of institutions. We show in the RVL-CDIP dataset that we can improve previous results with a much lighter model and present its transfer learning capabilities on a smaller in-domain dataset such as Tobacco3482. 
Moreover, we present an ensemble pipeline which is able to boost solely image input by combining image model predictions with the ones generated by BERT model on extracted text by OCR. We also show that the batch size can be effectively increased without hindering its accuracy so that the training process can be sped up by parallelizing throughout multiple GPUs, decreasing the computational time needed. Lastly, we expose the training performance differences between PyTorch and Tensorflow Deep Learning frameworks.", "field": ["Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Convolutions", "Feedforward Networks", "Pooling Operations", "Attention Mechanisms", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Document Classification", "Document Image Classification", "Image Classification", "Multi-Modal Document Classification", "Optical Character Recognition", "Transfer Learning"], "method": ["Depthwise Convolution", "Weight Decay", "Average Pooling", "EfficientNet", "RMSProp", "Adam", "1x1 Convolution", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Swish", "Batch Normalization", "GELU", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Sigmoid Activation", "WordPiece", "Inverted Residual Block", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "Depthwise Separable Convolution", "BERT", "Rectified Linear Units"], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "Improving accuracy and speeding up Document Image Classification through parallel systems"} {"abstract": "Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs. Existing approaches to generating text from AMR have focused on training sequence-to-sequence or graph-to-sequence models on AMR annotated data only. In this paper, we propose an alternative approach that combines a strong pre-trained language model with cycle consistency-based re-scoring. Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10dataset, including the recent use of transformer architectures. In addition to the standard evaluation metrics, we provide human evaluation experiments that further substantiate the strength of our approach.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["AMR-to-Text Generation", "Data-to-Text Generation", "Graph-to-Sequence", "Language Modelling", "Text Generation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["LDC2017T10"], "metric": ["BLEU"], "title": "GPT-too: A language-model-first approach for AMR-to-text generation"} {"abstract": "A key challenge in coreference resolution is to capture properties of entity clusters, and use those in the resolution process. 
Here we provide a simple and effective approach for achieving this, via an {``}Entity Equalization{''} mechanism. The Equalization approach represents each mention in a cluster via an approximation of the sum of all mentions in the cluster. We show how this can be done in a fully differentiable end-to-end manner, thus enabling high-order inferences in the resolution process. Our approach, which also employs BERT embeddings, results in new state-of-the-art results on the CoNLL-2012 coreference resolution task, improving average F1 by 3.6{\\%}.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Coreference Resolution"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["OntoNotes", "CoNLL 2012"], "metric": ["Avg F1", "F1"], "title": "Coreference Resolution with Entity Equalization"} {"abstract": "Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, it severely suffers from two issues: impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. 
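A toy illustration of the Entity Equalization mechanism described in the coreference abstract above: each mention is re-represented by a (soft) approximation of the sum of the mentions in its cluster, keeping cluster-level information fully differentiable. The soft cluster-assignment matrix below is a hypothetical stand-in for whatever the model actually predicts.

```python
import torch

def equalize_mentions(mention_reprs, cluster_probs):
    """mention_reprs: (n, d) embeddings of n mentions.
    cluster_probs:  (n, k) soft assignment of each mention to k clusters
                    (rows sum to 1), e.g. derived from antecedent scores.
    Returns (n, d): every mention replaced by an approximation of the sum
    of all mentions in its cluster -- end-to-end differentiable."""
    cluster_sums = cluster_probs.t() @ mention_reprs   # (k, d) soft per-cluster sums
    return cluster_probs @ cluster_sums                # (n, d) mixed back per mention

mentions = torch.randn(6, 32, requires_grad=True)
probs = torch.softmax(torch.randn(6, 3), dim=-1)
equalized = equalize_mentions(mentions, probs)
equalized.sum().backward()                             # gradients reach the mentions
print(equalized.shape)  # torch.Size([6, 32])
```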
We have made the code publicly available at \\url{https://github.com/DSE-MSU/R-transformer}.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Music Modeling", "Sequential Image Classification"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Convolution", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["Penn Treebank (Word Level)", "Penn Treebank (Character Level)", "Nottingham", "Sequential MNIST"], "metric": ["NLL", "Bit per Character (BPC)", "Unpermuted Accuracy", "Test perplexity"], "title": "R-Transformer: Recurrent Neural Network Enhanced Transformer"} {"abstract": "Images or videos always contain multiple objects or actions. Multi-label recognition has been witnessed to achieve pretty performance attribute to the rapid development of deep learning technologies. Recently, graph convolution network (GCN) is leveraged to boost the performance of multi-label recognition. However, what is the best way for label correlation modeling and how feature learning can be improved with label system awareness are still unclear. In this paper, we propose a label graph superimposing framework to improve the conventional GCN+CNN framework developed for multi-label recognition in the following two aspects. Firstly, we model the label correlations by superimposing label graph built from statistical co-occurrence information into the graph constructed from knowledge priors of labels, and then multi-layer graph convolutions are applied on the final superimposed graph for label embedding abstraction. Secondly, we propose to leverage embedding of the whole label system for better representation learning. In detail, lateral connections between GCN and CNN are added at shallow, middle and deep layers to inject information of label system into backbone CNN for label-awareness in the feature learning process. Extensive experiments are carried out on MS-COCO and Charades datasets, showing that our proposed solution can greatly improve the recognition performance and achieves new state-of-the-art recognition performance.", "field": ["Convolutions", "Graph Models"], "task": ["Multi-Label Classification", "Representation Learning"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["MS-COCO"], "metric": ["mAP"], "title": "Multi-Label Classification with Label Graph Superimposing"} {"abstract": "Cascade is a classic yet powerful architecture that has boosted performance\non various tasks. However, how to introduce cascade to instance segmentation\nremains an open question. A simple combination of Cascade R-CNN and Mask R-CNN\nonly brings limited gain. In exploring a more effective approach, we find that\nthe key to a successful instance segmentation cascade is to fully leverage the\nreciprocal relationship between detection and segmentation. 
In this work, we\npropose a new framework, Hybrid Task Cascade (HTC), which differs in two\nimportant aspects: (1) instead of performing cascaded refinement on these two\ntasks separately, it interweaves them for a joint multi-stage processing; (2)\nit adopts a fully convolutional branch to provide spatial context, which can\nhelp distinguishing hard foreground from cluttered background. Overall, this\nframework can learn more discriminative features progressively while\nintegrating complementary features together in each stage. Without bells and\nwhistles, a single HTC obtains 38.4 and 1.5 improvement over a strong Cascade\nMask R-CNN baseline on MSCOCO dataset. Moreover, our overall system achieves\n48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018\nChallenge Object Detection Task. Code is available at:\nhttps://github.com/open-mmlab/mmdetection.", "field": ["Image Model Blocks", "Object Detection Models", "Initialization", "Output Functions", "Convolutional Neural Networks", "Semantic Segmentation Modules", "Feature Extractors", "Proposal Filtering", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Cascade Mask R-CNN", "Average Pooling", "Hybrid Task Cascade", "HTC", "Bottom-up Path Augmentation", "1x1 Convolution", "RoIAlign", "PAFPN", "Region Proposal Network", "ResNet", "Precise RoI Pooling", "Convolution", "ReLU", "Residual Connection", "FPN", "Dense Connections", "Deformable Convolution", "RPN", "Grouped Convolution", "Soft-NMS", "Batch Normalization", "Residual Network", "Squeeze-and-Excitation Block", "Cascade R-CNN", "Kaiming Initialization", "Global Convolutional Network", "Sigmoid Activation", "ResNeXt Block", "ResNeXt", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "SENet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "Hybrid Task Cascade for Instance Segmentation"} {"abstract": "SLAM 2018 focuses on predicting a student{'}s mistake while using the Duolingo application. In this paper, we describe the system we developed for this shared task. Our system uses a logistic regression model to predict the likelihood of a student making a mistake while answering an exercise on Duolingo in all three language tracks - English/Spanish (en/es), Spanish/English (es/en) and French/English (fr/en). We conduct an ablation study with several features during the development of this system and discover that context based features plays a major role in language acquisition modeling. Our model beats Duolingo{'}s baseline scores in all three language tracks (AUROC scores for en/es = 0.821, es/en = 0.790 and fr/en = 0.812). Our work makes a case for providing favourable textual context for students while learning second language.", "field": ["Generalized Linear Models"], "task": ["Language Acquisition", "Regression"], "method": ["Logistic Regression"], "dataset": ["SLAM 2018"], "metric": ["AUC"], "title": "Context Based Approach for Second Language Acquisition"} {"abstract": "Video frame interpolation achieves temporal super-resolution by generating smooth transitions between frames. 
Although great success has been achieved by deep neural networks, the synthesized images stills suffer from poor visual appearance and unsatisfactory artifacts. In this paper, we propose a novel network structure that leverages residue refinement and adaptive weight to synthesize in-between frames. The residue refinement technique is used for optical flow and image generation for higher accuracy and better visual appearance, while the adaptive weight map combines the forward and backward warped frames to reduce the artifacts. Moreover, all sub-modules in our method are implemented by U-Net with less depths, so the efficiency is guaranteed. Experiments on public datasets demonstrate the effectiveness and superiority of our method over the state-of-the-art approaches.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Optical Flow Estimation", "Video Frame Interpolation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["Vimeo90k", "UCF101"], "metric": ["SSIM", "PSNR"], "title": "Video Frame Interpolation via Residue Refinement"} {"abstract": "Optical remote sensing imagery is at the core of many Earth observation activities. The regular, consistent and global-scale nature of the satellite data is exploited in many applications, such as cropland monitoring, climate change assessment, land-cover and land-use classification, and disaster assessment. However, one main problem severely affects the temporal and spatial availability of surface observations, namely cloud cover. The task of removing clouds from optical images has been subject of studies since decades. The advent of the Big Data era in satellite remote sensing opens new possibilities for tackling the problem using powerful data-driven deep learning methods.\r\n\r\nIn this paper, a deep residual neural network architecture is designed to remove clouds from multispectral Sentinel-2 imagery. SAR-optical data fusion is used to exploit the synergistic properties of the two imaging systems to guide the image reconstruction. Additionally, a novel cloud-adaptive loss is proposed to maximize the retainment of original information. The network is trained and tested on a globally sampled dataset comprising real cloudy and cloud-free images. The proposed setup allows to remove even optically thick clouds by reconstructing an optical representation of the underlying land surface structure.", "field": ["Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Skip Connections", "Skip Connection Blocks"], "task": ["Cloud Removal", "Image Reconstruction"], "method": ["ResNet", "Batch Normalization", "Convolution", "ReLU", "Residual Network", "Residual Connection", "Residual Block", "Rectified Linear Units"], "dataset": ["SEN12MS-CR"], "metric": ["PSNR", "RMSE", "SAM", "MAE", "SSIM"], "title": "Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion"} {"abstract": "Scene text recognition (STR) is the task of recognizing character sequences in natural scenes. While there have been great advances in STR methods, current methods still fail to recognize texts in arbitrary shapes, such as heavily curved or rotated texts, which are abundant in daily life (e.g. restaurant signs, product labels, company logos, etc). 
This paper introduces a novel architecture to recognizing texts of arbitrary shapes, named Self-Attention Text Recognition Network (SATRN), which is inspired by the Transformer. SATRN utilizes the self-attention mechanism to describe two-dimensional (2D) spatial dependencies of characters in a scene text image. Exploiting the full-graph propagation of self-attention, SATRN can recognize texts with arbitrary arrangements and large inter-character spacing. As a result, SATRN outperforms existing STR models by a large margin of 5.7 pp on average in \"irregular text\" benchmarks. We provide empirical analyses that illustrate the inner mechanisms and the extent to which the model is applicable (e.g. rotated and multi-line text). We will open-source the code.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Scene Text", "Scene Text Recognition"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["ICDAR2013", "ICDAR2015", "ICDAR 2003", "SVT"], "metric": ["Accuracy"], "title": "On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention"} {"abstract": "In this paper, we introduce an embedding model, named CapsE, exploring a\ncapsule network to model relationship triples (subject, relation, object). Our\nCapsE represents each triple as a 3-column matrix where each column vector\nrepresents the embedding of an element in the triple. This 3-column matrix is\nthen fed to a convolution layer where multiple filters are operated to generate\ndifferent feature maps. These feature maps are reconstructed into corresponding\ncapsules which are then routed to another capsule to produce a continuous\nvector. The length of this vector is used to measure the plausibility score of\nthe triple. Our proposed CapsE obtains better performance than previous\nstate-of-the-art embedding models for knowledge graph completion on two\nbenchmark datasets WN18RR and FB15k-237, and outperforms strong search\npersonalization baselines on SEARCH17.", "field": ["Convolutions"], "task": ["Knowledge Graph Completion", "Link Prediction"], "method": ["Convolution"], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@10", "MR", "MRR", "Appropriate Evaluation Protocols"], "title": "A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization"} {"abstract": "The highest accuracy object detectors to date are based on a two-stage\napproach popularized by R-CNN, where a classifier is applied to a sparse set of\ncandidate object locations. In contrast, one-stage detectors that are applied\nover a regular, dense sampling of possible object locations have the potential\nto be faster and simpler, but have trailed the accuracy of two-stage detectors\nthus far. In this paper, we investigate why this is the case. We discover that\nthe extreme foreground-background class imbalance encountered during training\nof dense detectors is the central cause. We propose to address this class\nimbalance by reshaping the standard cross entropy loss such that it\ndown-weights the loss assigned to well-classified examples. 
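The class-imbalance-aware reshaping of cross entropy described just above (named Focal Loss in the continuation below) can be sketched in a minimal binary form; the focusing parameter gamma and class weight alpha are generic defaults, not results from this document.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights well-classified examples by (1 - p_t)^gamma.
    logits and targets share the same shape; targets are in {0, 1}."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 100)                    # e.g. anchor scores from a dense detector
targets = (torch.rand(8, 100) < 0.05).float()   # mostly easy negatives
print(focal_loss(logits, targets))
```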
Our novel Focal\nLoss focuses training on a sparse set of hard examples and prevents the vast\nnumber of easy negatives from overwhelming the detector during training. To\nevaluate the effectiveness of our loss, we design and train a simple dense\ndetector we call RetinaNet. Our results show that when trained with the focal\nloss, RetinaNet is able to match the speed of previous one-stage detectors\nwhile surpassing the accuracy of all existing state-of-the-art two-stage\ndetectors. Code is at: https://github.com/facebookresearch/Detectron.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Feature Extractors", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Dense Object Detection", "Object Detection", "Real-Time Object Detection", "Region Proposal"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "FPN", "Grouped Convolution", "Focal Loss", "Batch Normalization", "Residual Network", "Kaiming Initialization", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Feature Pyramid Network", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SKU-110K", "COCO", "CARPK", "Trillion Pairs Dataset", "COCO test-dev"], "metric": ["APM", "RMSE", "MAP", "MAE", "box AP", "AP75", "APS", "APL", "Accuracy", "AP50", "AP"], "title": "Focal Loss for Dense Object Detection"} {"abstract": "Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. 
Implementation of the model will be available at http://fburl.com/TaBERT .", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Optimization", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Semantic Parsing", "Text-To-Sql"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "Gradient Clipping", "GELU", "BERT", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["spider", "WikiTableQuestions"], "metric": ["Accuracy (Test)", "Accuracy (Dev)"], "title": "TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data"} {"abstract": "Existing methods for visual reasoning attempt to directly map inputs to\noutputs using black-box architectures without explicitly modeling the\nunderlying reasoning processes. As a result, these black-box models often learn\nto exploit biases in the data rather than learning to perform visual reasoning.\nInspired by module networks, this paper proposes a model for visual reasoning\nthat consists of a program generator that constructs an explicit representation\nof the reasoning process to be performed, and an execution engine that executes\nthe resulting program to produce an answer. Both the program generator and the\nexecution engine are implemented by neural networks, and are trained using a\ncombination of backpropagation and REINFORCE. Using the CLEVR benchmark for\nvisual reasoning, we show that our model significantly outperforms strong\nbaselines and generalizes better in a variety of settings.", "field": ["Policy Gradient Methods"], "task": ["Visual Question Answering", "Visual Reasoning"], "method": ["REINFORCE"], "dataset": ["CLEVR-Humans", "CLEVR"], "metric": ["Accuracy"], "title": "Inferring and Executing Programs for Visual Reasoning"} {"abstract": "We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPconv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. 
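A toy, NumPy-only sketch of the kernel-point convolution idea from the abstract just above: weight matrices live at a handful of kernel-point locations, and each neighbor of a center point contributes through a linear correlation that decays with its distance to every kernel point. The kernel-point placement, influence radius, and all shapes here are illustrative assumptions.

```python
import numpy as np

def kpconv_point(center, neighbors, feats, kernel_pts, weights, sigma=0.3):
    """center: (3,), neighbors: (n, 3), feats: (n, c_in),
    kernel_pts: (k, 3) offsets around the center, weights: (k, c_in, c_out).
    Returns the (c_out,) output feature for this center point."""
    rel = neighbors - center                                   # (n, 3) relative coords
    # linear correlation between each neighbor and each kernel point
    dist = np.linalg.norm(rel[:, None, :] - kernel_pts[None, :, :], axis=-1)  # (n, k)
    corr = np.maximum(0.0, 1.0 - dist / sigma)                 # closer => more influence
    out = np.zeros(weights.shape[-1])
    for k in range(kernel_pts.shape[0]):
        out += (corr[:, k:k + 1] * feats).sum(axis=0) @ weights[k]
    return out

rng = np.random.default_rng(0)
center = np.zeros(3)
neighbors = rng.normal(scale=0.2, size=(16, 3))
feats = rng.normal(size=(16, 4))
kernel_pts = rng.normal(scale=0.2, size=(5, 3))
weights = rng.normal(size=(5, 4, 8))
print(kpconv_point(center, neighbors, feats, kernel_pts, weights).shape)  # (8,)
```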
We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.", "field": ["Convolutions"], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "3D Semantic Segmentation", "Scene Segmentation", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["Semantic3D", "S3DIS Area5", "S3DIS", "SemanticKITTI", "ShapeNet-Part", "ParisLille3D", "ModelNet40", "ScanNet"], "metric": ["Overall Accuracy", "Mean IoU", "3DIoU", "mAcc", "Class Average IoU", "Instance Average IoU", "mIoU"], "title": "KPConv: Flexible and Deformable Convolution for Point Clouds"} {"abstract": "Most modern multiple object tracking (MOT) systems follow the tracking-by-detection paradigm, consisting of a detector followed by a method for associating detections into tracks. There is a long history in tracking of combining motion and appearance features to provide robustness to occlusions and other challenges, but typically this comes with the trade-off of a more complex and slower implementation. Recent successes on popular 2D tracking benchmarks indicate that top-scores can be achieved using a state-of-the-art detector and relatively simple associations relying on single-frame spatial offsets -- notably outperforming contemporary methods that leverage learned appearance features to help re-identify lost tracks. In this paper, we propose an efficient joint detection and tracking model named DEFT, or \"Detection Embeddings for Tracking.\" Our approach relies on an appearance-based object matching network jointly-learned with an underlying object detection network. An LSTM is also added to capture motion constraints. DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards while having significant advantages in robustness when applied to more challenging tracking data. DEFT raises the bar on the nuScenes monocular 3D tracking challenge, more than doubling the performance of the previous top method. Code is publicly available.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["3D Multi-Object Tracking", "Multi-Object Tracking", "Multiple Object Tracking", "Object Detection", "Object Tracking"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["nuScenes", "MOT16", "MOT17"], "metric": ["MOTA", "amota"], "title": "DEFT: Detection Embeddings for Tracking"} {"abstract": "To truly understand the visual world our models should be able not only to\nrecognize images but also generate them. To this end, there has been exciting\nrecent progress on generating images from natural language descriptions. These\nmethods give stunning results on limited domains such as descriptions of birds\nor flowers, but struggle to faithfully reproduce complex sentences with many\nobjects and relationships. To overcome this limitation we propose a method for\ngenerating images from scene graphs, enabling explicitly reasoning about\nobjects and their relationships. Our model uses graph convolution to process\ninput graphs, computes a scene layout by predicting bounding boxes and\nsegmentation masks for objects, and converts the layout to an image with a\ncascaded refinement network. The network is trained adversarially against a\npair of discriminators to ensure realistic outputs. 
We validate our approach on\nVisual Genome and COCO-Stuff, where qualitative results, ablations, and user\nstudies demonstrate our method's ability to generate complex images with\nmultiple objects.", "field": ["Convolutions"], "task": ["Image Generation", "Layout-to-Image Generation"], "method": ["Convolution"], "dataset": ["COCO-Stuff 64x64", "Visual Genome 64x64"], "metric": ["Inception Score", "FID"], "title": "Image Generation from Scene Graphs"} {"abstract": "This paper reports a novel deep architecture referred to as Maxout network In\nNetwork (MIN), which can enhance model discriminability and facilitate the\nprocess of information abstraction within the receptive field. The proposed\nnetwork adopts the framework of the recently developed Network In Network\nstructure, which slides a universal approximator, multilayer perceptron (MLP)\nwith rectifier units, to exact features. Instead of MLP, we employ maxout MLP\nto learn a variety of piecewise linear activation functions and to mediate the\nproblem of vanishing gradients that can occur when using rectifier units.\nMoreover, batch normalization is applied to reduce the saturation of maxout\nunits by pre-conditioning the model and dropout is applied to prevent\noverfitting. Finally, average pooling is used in all pooling layers to\nregularize maxout MLP in order to facilitate information abstraction in every\nreceptive field while tolerating the change of object position. Because average\npooling preserves all features in the local patch, the proposed MIN model can\nenforce the suppression of irrelevant information during training. Our\nexperiments demonstrated the state-of-the-art classification performance when\nthe MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and\ncomparable performance for SVHN dataset.", "field": ["Activation Functions", "Pooling Operations", "Regularization", "Normalization"], "task": ["Image Classification"], "method": ["Dropout", "Maxout", "Average Pooling", "Batch Normalization"], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Batch-normalized Maxout Network in Network"} {"abstract": "Blood pressure (BP) is a direct indicator of hypertension, a dangerous and potentially deadly condition. Regular monitoring of BP is thus important, but many people have aversion towards cuff-based devices, and their limitation is that they can only be used at rest. Using just a photoplethysmogram (PPG) to estimate BP is a potential solution investigated in our study. We analyzed the MIMIC III database for high-quality PPG and arterial BP waveforms, resulting in over 700 h of signals after preprocessing, belonging to 510 subjects. We then used the PPG alongside its first and second derivative as inputs into a novel spectro-temporal deep neural network with residual connections. We have shown in a leave-one-subject-out experiment that the network is able to model the dependency between PPG and BP, achieving mean absolute errors of 9.43 for systolic and 6.88 for diastolic BP. Additionally we have shown that personalization of models is important and substantially improves the results, while deriving a good general predictive model is difficult. 
We have made crucial parts of our study, especially the list of used subjects and our neural network code, publicly available, in an effort to provide a solid baseline and simplify potential comparison between future studies on an explicit MIMIC III subset.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Blood pressure estimation", "Photoplethysmography (PPG)"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["MIMIC-III"], "metric": ["MAE for SBP [mmHg]", "MAE for DBP [mmHg]"], "title": "Blood Pressure Estimation from Photoplethysmogram Using a Spectro-Temporal Deep Neural Network"} {"abstract": "Semantic segmentation and object detection research have recently achieved\nrapid progress. However, the former task has no notion of different instances\nof the same object, and the latter operates at a coarse, bounding-box level. We\npropose an Instance Segmentation system that produces a segmentation map where\neach pixel is assigned an object class and instance identity label. Most\napproaches adapt object detectors to produce segments instead of boxes. In\ncontrast, our method is based on an initial semantic segmentation module, which\nfeeds into an instance subnetwork. This subnetwork uses the initial\ncategory-level segmentation, along with cues from the output of an object\ndetector, within an end-to-end CRF to predict instances. This part of our model\nis dynamically instantiated to produce a variable number of instances per\nimage. Our end-to-end approach requires no post-processing and considers the\nimage holistically, instead of processing independent proposals. Therefore,\nunlike some related work, a pixel cannot belong to multiple instances.\nFurthermore, far more precise segmentations are achieved, as shown by our\nstate-of-the-art results (particularly at high IoU thresholds) on the Pascal\nVOC and Cityscapes datasets.", "field": ["Structured Prediction"], "task": ["Instance Segmentation", "Object Detection", "Panoptic Segmentation", "Semantic Segmentation"], "method": ["Conditional Random Field", "CRF"], "dataset": ["Cityscapes test"], "metric": ["PQ", "Average Precision"], "title": "Pixelwise Instance Segmentation with a Dynamically Instantiated Network"} {"abstract": "We extend neural Turing machine (NTM) model into a dynamic neural Turing\nmachine (D-NTM) by introducing a trainable memory addressing scheme. This\naddressing scheme maintains for each memory cell two separate vectors, content\nand address vectors. This allows the D-NTM to learn a wide variety of\nlocation-based addressing strategies including both linear and nonlinear ones.\nWe implement the D-NTM with both continuous, differentiable and discrete,\nnon-differentiable read/write mechanisms. We investigate the mechanisms and\neffects of learning to read and write into a memory through experiments on\nFacebook bAbI tasks using both a feedforward and GRUcontroller. The D-NTM is\nevaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM\nbaselines. We have done extensive analysis of our model and different\nvariations of NTM on bAbI task. 
We also provide further experimental results on\nsequential pMNIST, Stanford Natural Language Inference, associative recall and\ncopy tasks.", "field": ["Output Functions", "Recurrent Neural Networks", "Activation Functions", "Working Memory Models", "Attention Mechanisms"], "task": ["Natural Language Inference", "Question Answering"], "method": ["Softmax", "Long Short-Term Memory", "Neural Turing Machine", "Tanh Activation", "Content-based Attention", "LSTM", "Location-based Attention", "Sigmoid Activation"], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)", "Accuracy (trained on 10k)"], "title": "Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes"} {"abstract": "Relying entirely on an attention mechanism, the Transformer introduced by\nVaswani et al. (2017) achieves state-of-the-art results for machine\ntranslation. In contrast to recurrent and convolutional neural networks, it\ndoes not explicitly model relative or absolute position information in its\nstructure. Instead, it requires adding representations of absolute positions to\nits inputs. In this work we present an alternative approach, extending the\nself-attention mechanism to efficiently consider representations of the\nrelative positions, or distances between sequence elements. On the WMT 2014\nEnglish-to-German and English-to-French translation tasks, this approach yields\nimprovements of 1.3 BLEU and 0.3 BLEU over absolute position representations,\nrespectively. Notably, we observe that combining relative and absolute position\nrepresentations yields no further improvement in translation quality. We\ndescribe an efficient implementation of our method and cast it as an instance\nof relation-aware self-attention mechanisms that can generalize to arbitrary\ngraph-labeled inputs.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score"], "title": "Self-Attention with Relative Position Representations"} {"abstract": "We propose two neural network architectures for nested named entity recognition (NER), a setting in which named entities may overlap and also be labeled with more than one label. We encode the nested labels using a linearized scheme. In our first proposed approach, the nested labels are modeled as multilabels corresponding to the Cartesian product of the nested labels in a standard LSTM-CRF architecture. In the second one, the nested NER is viewed as a sequence-to-sequence problem, in which the input sequence consists of the tokens and output sequence of the labels, using hard attention on the word whose label is being predicted. The proposed methods outperform the nested NER state of the art on four corpora: ACE-2004, ACE-2005, GENIA and Czech CNEC. We also enrich our architectures with the recently published contextual embeddings: ELMo, BERT and Flair, reaching further improvements for the four nested entity corpora. 
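The first architecture in the nested-NER abstract above linearizes overlapping annotations by treating each token's stacked labels as one multilabel from the Cartesian product of the nested labels, so a standard LSTM-CRF tagger can be used. A hypothetical encoding/decoding helper in that spirit (the delimiter and label scheme are assumptions, not the authors' exact scheme):

```python
def encode_nested_labels(nested_tags, sep="+"):
    """nested_tags: per-token list of BIO tags, one entry per nesting level,
    e.g. [["B-ORG", "B-LOC"], ["I-ORG", "O"], ["O", "O"]].
    Returns one flat tag per token for a conventional sequence tagger."""
    return [sep.join(levels) for levels in nested_tags]

def decode_nested_labels(flat_tags, sep="+"):
    """Inverse of encode_nested_labels."""
    return [tag.split(sep) for tag in flat_tags]

nested = [["B-ORG", "B-LOC"], ["I-ORG", "O"], ["O", "O"]]
flat = encode_nested_labels(nested)       # ['B-ORG+B-LOC', 'I-ORG+O', 'O+O']
assert decode_nested_labels(flat) == nested
print(flat)
```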
In addition, we report flat NER state-of-the-art results for CoNLL-2002 Dutch and Spanish and for CoNLL-2003 English.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Attention Modules", "Subword Segmentation", "Word Embeddings", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": ["Weight Decay", "Long Short-Term Memory", "Adam", "BiLSTM", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Bidirectional LSTM", "Residual Connection", "Dense Connections", "ELMo", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["GENIA", "ACE 2004", "CoNLL 2002 (Spanish)", "CoNLL 2002 (Dutch)", "ACE 2005", "CoNLL 2003 (English)", "CoNLL 2003 (German)"], "metric": ["F1"], "title": "Neural Architectures for Nested NER through Linearization"} {"abstract": "We propose ENCASE to combine expert features and DNNs (Deep Neural Networks) together for ECG classification. We first explore and implement expert features from statistical area, signal processing area and medical area. Then, we build DNNs to automatically extract deep features. Besides, we propose a new algorithm to find the most representative wave (called centerwave) among long ECG record, and extract features from centerwave. Finally, we combine these features together and put them into ensemble classifiers. Experiment on 4-class ECG data classification reports 0.84 F1 score, which is much better than any of the single model.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Arrhythmia Detection", "ECG Classification", "Time Series Classification"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["The PhysioNet Computing in Cardiology Challenge 2017", "Physionet 2017 Atrial Fibrillation"], "metric": ["F1 (Hidden Test Set)"], "title": "ENCASE: An ENsemble ClASsifiEr for ECG classification using expert features and deep neural networks"} {"abstract": "Graph-structured data appears frequently in domains including chemistry,\nnatural language semantics, social networks, and knowledge bases. In this work,\nwe study feature learning techniques for graph-structured inputs. Our starting\npoint is previous work on Graph Neural Networks (Scarselli et al., 2009), which\nwe modify to use gated recurrent units and modern optimization techniques and\nthen extend to output sequences. The result is a flexible and broadly useful\nclass of neural network models that has favorable inductive biases relative to\npurely sequence-based models (e.g., LSTMs) when the problem is\ngraph-structured. We demonstrate the capabilities on some simple AI (bAbI) and\ngraph algorithm learning tasks. 
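One propagation round of the gated graph model described at the end of the line above can be sketched with a GRU cell: neighbor messages are aggregated along the adjacency matrix and the gated unit updates every node state. A single edge type and the shapes used here are simplifying assumptions.

```python
import torch
import torch.nn as nn

class GatedGraphStep(nn.Module):
    """One round of message passing with a GRU-style node update."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.msg = nn.Linear(hidden_dim, hidden_dim)  # per-edge-type transform (one type here)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, node_states, adj):
        # node_states: (n, h); adj: (n, n) float adjacency
        incoming = adj @ self.msg(node_states)        # aggregate neighbor messages
        return self.gru(incoming, node_states)        # gated update of every node

n, h = 5, 16
adj = (torch.rand(n, n) < 0.4).float()
states = torch.randn(n, h)
step = GatedGraphStep(h)
for _ in range(4):                                    # a few propagation rounds
    states = step(states, adj)
print(states.shape)  # torch.Size([5, 16])
```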
We then show it achieves state-of-the-art\nperformance on a problem from program verification, in which subgraphs need to\nbe matched to abstract data structures.", "field": ["Graph Models"], "task": ["Drug Discovery", "Graph Classification", "Node Classification", "SQL-to-Text"], "method": ["GGS-NNs", "Gated Graph Sequence Neural Networks"], "dataset": ["PubMed (0.1%)", "PubMed (0.03%)", "QM9", "Cora (1%)", "IPC-grounded", "PubMed (0.05%)", "Cora (3%)", "IPC-lifted", "WikiSQL", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["BLEU-4", "Error ratio", "Accuracy"], "title": "Gated Graph Sequence Neural Networks"} {"abstract": "Semantic parsing aims to map natural language utterances onto machine interpretable meaning representations, aka programs whose execution against a real-world environment produces a denotation. Weakly-supervised semantic parsers are trained on utterance-denotation pairs treating programs as latent. The task is challenging due to the large search space and spuriousness of programs which may execute to the correct answer but do not generalize to unseen examples. Our goal is to instill an inductive bias in the parser to help it distinguish between spurious and correct programs. We capitalize on the intuition that correct programs would likely respect certain structural constraints were they to be aligned to the question (e.g., program fragments are unlikely to align to overlapping text spans) and propose to model alignments as structured latent variables. In order to make the latent-alignment framework tractable, we decompose the parsing task into (1) predicting a partial \"abstract program\" and (2) refining it while modeling structured alignments with differential dynamic programming. We obtain state-of-the-art performance on the WIKITABLEQUESTIONS and WIKISQL datasets. When compared to a standard attention baseline, we observe that the proposed structured-alignment mechanism is highly beneficial.", "field": ["Output Functions", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Word Embeddings", "Feedforward Networks"], "task": ["Semantic Parsing"], "method": ["Feedforward Network", "Softmax", "Long Short-Term Memory", "Adam", "Tanh Activation", "GloVe Embeddings", "ReLU", "LSTM", "GloVe", "Dense Connections", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["WikiTableQuestions"], "metric": ["Accuracy (Test)", "Accuracy (Dev)"], "title": "Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs"} {"abstract": "Spatio-temporal prediction plays an important role in many application areas\nespecially in traffic domain. However, due to complicated spatio-temporal\ndependency and high non-linear dynamics in road networks, traffic prediction\ntask is still challenging. Existing works either exhibit heavy training cost or\nfail to accurately capture the spatio-temporal patterns, also ignore the\ncorrelation between distant roads that share the similar patterns. In this\npaper, we propose a novel deep learning framework to overcome these issues: 3D\nTemporal Graph Convolutional Networks (3D-TGCN). Two novel components of our\nmodel are introduced. 
(1) Instead of constructing the road graph based on\nspatial information, we learn it by comparing the similarity between time\nseries for each road, thus providing a spatial information free framework. (2)\nWe propose an original 3D graph convolution model to model the spatio-temporal\ndata more accurately. Empirical results show that 3D-TGCN could outperform\nstate-of-the-art baselines.", "field": ["Convolutions"], "task": ["Time Series", "Traffic Prediction"], "method": ["Convolution"], "dataset": ["PeMS-M"], "metric": ["MAE (60 min)"], "title": "3D Graph Convolutional Networks with Temporal Graphs: A Spatial Information Free Framework For Traffic Forecasting"} {"abstract": "Sequence-to-sequence automatic speech recognition (ASR) models require large quantities of data to attain high performance. For this reason, there has been a recent surge in interest for unsupervised and semi-supervised training in such models. This work builds upon recent results showing notable improvements in semi-supervised training using cycle-consistency and related techniques. Such techniques derive training procedures and losses able to leverage unpaired speech and/or text data by combining ASR with Text-to-Speech (TTS) models. In particular, this work proposes a new semi-supervised loss combining an end-to-end differentiable ASR$\\rightarrow$TTS loss with TTS$\\rightarrow$ASR loss. The method is able to leverage both unpaired speech and text data to outperform recently proposed related techniques in terms of \\%WER. We provide extensive results analyzing the impact of data quantity and speech and text modalities and show consistent gains across WSJ and Librispeech corpora. Our code is provided in ESPnet to reproduce the experiments.", "field": ["Initialization", "Semantic Segmentation Models", "Degridding", "Activation Functions", "Convolutions", "Image Model Blocks"], "task": ["Semi-Supervised Image Classification", "Speech Recognition"], "method": ["ESPNet", "Dilated Convolution", "Efficient Spatial Pyramid", "ESP", "Convolution", "1x1 Convolution", "PReLU", "Parameterized ReLU", "Hierarchical Feature Fusion", "Kaiming Initialization", "Pointwise Convolution"], "dataset": ["ImageNet - 10% labeled data"], "metric": ["Top 5 Accuracy"], "title": "Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text"} {"abstract": "Feature extraction becomes increasingly important as data grows high\ndimensional. Autoencoder as a neural network based feature extraction method\nachieves great success in generating abstract features of high dimensional\ndata. However, it fails to consider the relationships of data samples which may\naffect experimental results of using original and new features. In this paper,\nwe propose a Relation Autoencoder model considering both data features and\ntheir relationships. We also extend it to work with other major autoencoder\nmodels including Sparse Autoencoder, Denoising Autoencoder and Variational\nAutoencoder. 
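A rough sketch of the relation-aware reconstruction objective suggested by the feature-extraction abstract just above: besides reconstructing the samples themselves, the autoencoder is penalized for distorting pairwise sample relationships. The inner-product similarity and the mixing weight are assumptions made for illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=20, code_dim=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.dec = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def relational_loss(x, x_hat, alpha=0.5):
    """Blend plain reconstruction with reconstruction of pairwise relationships."""
    data_term = ((x - x_hat) ** 2).mean()
    rel, rel_hat = x @ x.t(), x_hat @ x_hat.t()        # sample-to-sample similarities
    relation_term = ((rel - rel_hat) ** 2).mean()
    return (1 - alpha) * data_term + alpha * relation_term

model = TinyAutoencoder()
x = torch.randn(32, 20)
loss = relational_loss(x, model(x))
loss.backward()
print(float(loss))
```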
The proposed relational autoencoder models are evaluated on a set\nof benchmark datasets and the experimental results show that considering data\nrelationships can generate more robust features which achieve lower\nconstruction loss and then lower error rate in further classification compared\nto the other variants of autoencoders.", "field": ["Generative Models"], "task": ["Denoising", "Skeleton Based Action Recognition"], "method": ["AutoEncoder", "Sparse Autoencoder", "Denoising Autoencoder"], "dataset": ["J-HMBD Early Action"], "metric": ["10%"], "title": "Relational Autoencoder for Feature Extraction"} {"abstract": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, the suggested architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models bring significant performance gain over the standard LSTMs on benchmark datasets for natural language inference, paraphrase detection, sentiment classification, and machine translation. We also conduct extensive qualitative analysis to understand the internal behavior of the suggested approach.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Machine Translation", "Natural Language Inference", "Paraphrase Identification", "Sentiment Analysis"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SST-2 Binary classification", "SST-5 Fine-grained classification", "SNLI", "Quora Question Pairs"], "metric": ["% Test Accuracy", "Accuracy"], "title": "Cell-aware Stacked LSTMs for Modeling Sentences"} {"abstract": "Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient flow (DGflow), a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences between the real and the generated data distributions. The gradient flow takes the form of a non-linear Fokker-Plank equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids wasteful sample rejection used by previous methods (DRS & MH-GAN). Compared to existing works that focus on specific GAN variants, we show our refinement approach can be applied to GANs with vector-valued critics and even other deep generative models such as VAEs and Normalizing Flows. 
Empirical results on multiple synthetic, image, and text datasets demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.", "field": ["Generative Models", "Distribution Approximation"], "task": ["Image Generation", "Text Generation"], "method": ["Generative Adversarial Network", "GAN", "Normalizing Flows"], "dataset": ["One Billion Word", "CIFAR-10"], "metric": ["Inception score", "JS-4", "Frechet Inception Distance"], "title": "Refining Deep Generative Models via Discriminator Gradient Flow"} {"abstract": "In this paper, we study bidirectional LSTM network for the task of text classification using both supervised and semi-supervised approaches. Several prior works have suggested that either complex pretraining schemes using unsupervised methods such as language modeling (Dai and Le 2015; Miyato, Dai, and Goodfellow 2016) or complicated models (Johnson and Zhang 2017) are necessary to achieve a high classification accuracy. However, we develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results compared with more complex approaches. Furthermore, in addition to cross-entropy loss, by using a combination of entropy minimization, adversarial, and virtual adversarial losses for both labeled and unlabeled data, we report state-of-the-art results for text classification task on several benchmark datasets. In particular, on the ACL-IMDB sentiment analysis and AG-News topic classification datasets, our method outperforms current approaches by a substantial margin. We also show the generality of the mixed objective function by improving the performance on relation extraction task.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Language Modelling", "Relation Extraction", "Semi Supervised Text Classification", "Semi-Supervised Text Classification", "Sentiment Analysis", "Text Classification"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["AG News", "DBpedia", "IMDb"], "metric": ["Error", "Accuracy"], "title": "Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function"} {"abstract": "Convolutional Neural Networks (CNNs) have been widely used in computer vision\ntasks, such as face recognition and verification, and have achieved\nstate-of-the-art results due to their ability to capture discriminative deep\nfeatures. Conventionally, CNNs have been trained with softmax as supervision\nsignal to penalize the classification loss. In order to further enhance the\ndiscriminative capability of deep features, we introduce a joint supervision\nsignal, Git loss, which leverages on softmax and center loss functions. The aim\nof our loss function is to minimize the intra-class variations as well as\nmaximize the inter-class distances. Such minimization and maximization of deep\nfeatures are considered ideal for face recognition task. 
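The joint supervision described in the face-recognition abstract above combines a softmax classification term with a term that pulls features toward learnable class centers and, in Git loss, additionally pushes them away from other classes' centers. The sketch below shows the pull part plus a simple push term as an illustrative stand-in; it is not the exact Git loss formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSupervision(nn.Module):
    """Cross entropy + intra-class pull toward class centers + a hypothetical
    inter-class push term (illustrative only, not the paper's exact loss)."""
    def __init__(self, num_classes, feat_dim, lambda_pull=0.01, lambda_push=0.01):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_pull, self.lambda_push = lambda_pull, lambda_push

    def forward(self, feats, logits, labels):
        ce = F.cross_entropy(logits, labels)
        own = self.centers[labels]                              # (B, d) own-class centers
        pull = ((feats - own) ** 2).sum(dim=1).mean()           # shrink intra-class variation
        d2 = torch.cdist(feats, self.centers) ** 2              # (B, C) distances to all centers
        mask = F.one_hot(labels, self.centers.size(0)).bool()
        push = (1.0 / (1.0 + d2[~mask])).mean()                 # keep other centers far away
        return ce + self.lambda_pull * pull + self.lambda_push * push

crit = JointSupervision(num_classes=10, feat_dim=64)
feats, logits, labels = torch.randn(8, 64), torch.randn(8, 10), torch.randint(0, 10, (8,))
print(crit(feats, logits, labels))
```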
We perform experiments\non two popular face recognition benchmarks datasets and show that our proposed\nloss function achieves maximum separability between deep face features of\ndifferent identities and achieves state-of-the-art accuracy on two major face\nrecognition benchmark datasets: Labeled Faces in the Wild (LFW) and YouTube\nFaces (YTF). However, it should be noted that the major objective of Git loss\nis to achieve maximum separability between deep features of divergent\nidentities.", "field": ["Output Functions"], "task": ["Face Identification", "Face Recognition", "Face Verification"], "method": ["Softmax"], "dataset": ["YouTube Faces DB", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "Git Loss for Deep Face Recognition"} {"abstract": "Spatial downsampling layers are favored in convolutional neural networks (CNNs) to downscale feature maps for larger receptive fields and less memory consumption. However, for discriminative tasks, there is a possibility that these layers lose the discriminative details due to improper pooling strategies, which could hinder the learning process and eventually result in suboptimal models. In this paper, we present a unified framework over the existing downsampling layers (e.g., average pooling, max pooling, and strided convolution) from a local importance view. In this framework, we analyze the issues of these widely-used pooling layers and figure out the criteria for designing an effective downsampling layer. According to this analysis, we propose a conceptually simple, general, and effective pooling layer based on local importance modeling, termed as {\\em Local Importance-based Pooling} (LIP). LIP can automatically enhance discriminative features during the downsampling procedure by learning adaptive importance weights based on inputs. Experiment results show that LIP consistently yields notable gains with different depths and different architectures on ImageNet classification. 
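A compact sketch of the local-importance pooling idea in the abstract just above: a tiny subnetwork predicts an importance logit per location, and downsampling becomes a locally re-weighted average so that informative activations dominate the pooled output. The 1x1-convolution logit head and the window size are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImportancePool2d(nn.Module):
    """Downsample with a weighted average, weights exp(G(x)) learned from the input."""
    def __init__(self, channels, kernel_size=2, stride=2):
        super().__init__()
        self.logit = nn.Conv2d(channels, channels, kernel_size=1)  # tiny importance head
        self.kernel_size, self.stride = kernel_size, stride

    def forward(self, x):
        w = torch.exp(self.logit(x))                               # positive importance map
        num = F.avg_pool2d(x * w, self.kernel_size, self.stride)
        den = F.avg_pool2d(w, self.kernel_size, self.stride)
        return num / (den + 1e-6)

pool = LocalImportancePool2d(channels=8)
y = pool(torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 8, 16, 16])
```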
In the challenging MS COCO dataset, detectors with our LIP-ResNets as backbones obtain a consistent improvement ($\\ge 1.4\\%$) over the vanilla ResNets, and especially achieve the current state-of-the-art performance in detecting small objects under the single-scale testing scheme.", "field": ["Object Detection Models", "Semantic Segmentation Models", "Regularization", "Convolutional Neural Networks", "Output Functions", "Feature Extractors", "Stochastic Optimization", "RoI Feature Extractors", "Activation Functions", "Initialization", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Region Proposal", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Image Classification", "Object Detection"], "method": ["Weight Decay", "Faster R-CNN", "Average Pooling", "1x1 Convolution", "Region Proposal Network", "ResNet", "Instance Normalization", "Local Importance-based Pooling", "RoIPool", "Convolution", "ReLU", "Residual Connection", "FPN", "Fully Convolutional Network", "Dense Connections", "FCN", "RPN", "Dense Block", "Batch Normalization", "Residual Network", "Kaiming Initialization", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Concatenated Skip Connection", "Bottleneck Residual Block", "DenseNet", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "COCO minival", "COCO test-dev"], "metric": ["Number of params", "APM", "Top 1 Accuracy", "box AP", "AP75", "APS", "APL", "AP50", "Top 5 Accuracy"], "title": "LIP: Local Importance-based Pooling"} {"abstract": "We propose a framework for recognizing human actions from skeleton data by modeling the underlying dynamic process that generates the motion pattern. We capture three major factors that contribute to the complexity of the motion pattern including spatial dependencies among body joints, temporal dependencies of body poses, and variation among subjects in action execution. We utilize graph convolution to extract structure-aware feature representation from pose data by exploiting the skeleton anatomy. Long short-term memory (LSTM) network is then used to capture the temporal dynamics of the data. Finally, the whole model is extended under the Bayesian framework to a probabilistic model in order to better capture the stochasticity and variation in the data. An adversarial prior is developed to regularize the model parameters to improve the generalization of the model. A Bayesian inference problem is formulated to solve the classification task. We demonstrate the benefit of this framework in several benchmark datasets with recognition under various generalization conditions.\r", "field": ["Convolutions"], "task": ["Action Recognition", "Bayesian Inference", "Skeleton Based Action Recognition"], "method": ["Convolution"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Bayesian Graph Convolution LSTM for Skeleton Based Action Recognition"} {"abstract": "We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree. Our approach predicts each word in an input sentence conditioned on the rest of the sentence. During training we use dynamic programming to consider all possible binary trees over the sentence, and for inference we use the CKY algorithm to extract the highest scoring parse. 
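As a reference point for the graph convolution mentioned in the Bayesian Graph Convolution LSTM abstract above, here is a generic single propagation step over a skeleton adjacency matrix. The symmetric normalization shown is the standard GCN rule and is an assumption about the exact variant used.

```python
import torch

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W).
    A: (V, V) adjacency of the skeleton, H: (V, F_in) node features, W: (F_in, F_out)."""
    A_hat = A + torch.eye(A.size(0))
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```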
DIORA outperforms previously reported results for unsupervised binary constituency parsing on the benchmark WSJ dataset.", "field": ["Generative Models"], "task": ["Constituency Grammar Induction", "Constituency Parsing"], "method": ["AutoEncoder"], "dataset": ["PTB"], "metric": ["Max F1 (WSJ10)", "Mean F1 (WSJ10)", "Max F1 (WSJ)", "Mean F1 (WSJ)"], "title": "Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders"} {"abstract": "Determining the intended sense of words in text - word sense disambiguation\n(WSD) - is a long standing problem in natural language processing. Recently,\nresearchers have shown promising results using word vectors extracted from a\nneural network language model as features in WSD algorithms. However, a simple\naverage or concatenation of word vectors for each word in a text loses the\nsequential and syntactic information of the text. In this paper, we study WSD\nwith a sequence learning neural net, LSTM, to better capture the sequential and\nsyntactic patterns of the text. To alleviate the lack of training data in\nall-words WSD, we employ the same LSTM in a semi-supervised label propagation\nclassifier. We demonstrate state-of-the-art results, especially on verbs.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Language Modelling", "Word Sense Disambiguation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SensEval 3 Task 1", "SemEval 2013 Task 12", "SemEval 2007 Task 17", "SemEval 2007 Task 7", "SensEval 2"], "metric": ["F1"], "title": "Semi-supervised Word Sense Disambiguation with Neural Models"} {"abstract": "Network pruning reduces the computation costs of an over-parameterized network without performance damage. Prevailing pruning algorithms pre-define the width and depth of the pruned networks, and then transfer parameters from the unpruned network to pruned networks. To break the structure limitation of the pruned networks, we propose to apply neural architecture search to search directly for a network with flexible channel and layer sizes. The number of the channels/layers is learned by minimizing the loss of the pruned networks. The feature map of the pruned network is an aggregation of K feature map fragments (generated by K networks of different sizes), which are sampled based on the probability distribution.The loss can be back-propagated not only to the network weights, but also to the parameterized distribution to explicitly tune the size of the channels/layers. Specifically, we apply channel-wise interpolation to keep the feature map with different channel sizes aligned in the aggregation procedure. The maximum probability for the size in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks. Experiments on CIFAR-10, CIFAR-100 and ImageNet demonstrate the effectiveness of our new perspective of network pruning compared to traditional network pruning algorithms. Various searching and knowledge transfer approaches are conducted to show the effectiveness of the two components. 
Code is at: https://github.com/D-X-Y/NAS-Projects.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Knowledge Distillation", "Network Pruning", "Neural Architecture Search", "Transfer Learning"], "method": ["Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Sigmoid Activation", "Softmax", "LSTM", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "CIFAR-100", "CIFAR-10"], "metric": ["GFLOPs", "Accuracy"], "title": "Network Pruning via Transformable Architecture Search"} {"abstract": "Semantic segmentation of point clouds is a key component of scene understanding for robotics and autonomous driving. In this paper, we introduce TORNADO-Net - a neural network for 3D LiDAR point cloud semantic segmentation. We incorporate a multi-view (bird-eye and range) projection feature extraction with an encoder-decoder ResNet architecture with a novel diamond context block. Current projection-based methods do not take into account that neighboring points usually belong to the same class. To better utilize this local neighbourhood information and reduce noisy predictions, we introduce a combination of Total Variation, Lovasz-Softmax, and Weighted Cross-Entropy losses. We also take advantage of the fact that the LiDAR data encompasses 360 degrees field of view and uses circular padding. We demonstrate state-of-the-art results on the SemanticKITTI dataset and also provide thorough quantitative evaluations and ablation results.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["3D Semantic Segmentation", "Autonomous Driving", "Scene Understanding", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Residual Block", "Lovasz-Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "TORNADO-Net: mulTiview tOtal vaRiatioN semAntic segmentation with Diamond inceptiOn module"} {"abstract": "Over the past few years, deep neural networks (DNNs) have garnered remarkable success in a diverse range of real-world applications. However, DNNs consider a large number of inputs and consist of a large number of parameters, resulting in high computational demand. We study the human somatosensory system and propose the SpinalNet to achieve higher accuracy with less computational resources. In a typical neural network (NN) architecture, the hidden layers receive inputs in the first layer and then transfer the intermediate outcomes to the next layer. In the proposed SpinalNet, the structure of hidden layers allocates to three sectors: 1) Input row, 2) Intermediate row, and 3) output row. The intermediate row of the SpinalNet contains a few neurons. 
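The circular padding mentioned in the TORNADO-Net abstract above reflects that a LiDAR range image wraps around horizontally over the 360-degree field of view. A small PyTorch sketch follows; the kernel size and the zero padding along the height are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

class CircularConv(nn.Module):
    """Conv layer whose width padding is circular (the range image wraps
    horizontally) while the height uses ordinary zero padding."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=0)
        self.pad = k // 2

    def forward(self, x):                                        # x: (N, C, H, W)
        x = F.pad(x, (self.pad, self.pad, 0, 0), mode="circular")  # wrap the width
        x = F.pad(x, (0, 0, self.pad, self.pad), mode="constant")  # zero-pad the height
        return self.conv(x)
```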
Input segmentation enables each hidden layer to receive a part of the inputs together with the outputs of the previous layer. Therefore, the number of incoming weights in a hidden layer is significantly lower than in traditional DNNs. As all layers of the SpinalNet contribute directly to the output row, the vanishing gradient problem does not arise. We also attach the SpinalNet fully-connected layer to several well-known DNN models and perform traditional learning and transfer learning. We observe significant error reductions with lower computational costs in most of the DNNs. We have also obtained state-of-the-art (SOTA) performance for the QMNIST, Kuzushiji-MNIST, EMNIST (Letters, Digits, and Balanced), STL-10, Bird225, Fruits 360, and Caltech-101 datasets. The scripts of the proposed SpinalNet are available at the following link: https://github.com/dipuk0506/SpinalNet", "field": ["Image Data Augmentation", "Stochastic Optimization"], "task": ["Fine-Grained Image Classification", "Image Classification", "Transfer Learning"], "method": ["Image Scale Augmentation", "Adam", "SGD with Momentum", "RandAugment"], "dataset": ["Oxford 102 Flowers", "EMNIST-Balanced", "Bird-225", "Fruits-360", "Flowers-102", "STL-10", "Caltech-101", "MNIST", "EMNIST-Letters"], "metric": ["Percentage correct", "Top-1 Error Rate", "Trainable Parameters", "Accuracy (%)", "Accuracy"], "title": "SpinalNet: Deep Neural Network with Gradual Input"} {"abstract": "Self-supervised (SS) learning is a powerful approach for representation learning using unlabeled data. Recently, it has been applied to Generative Adversarial Networks (GAN) training. Specifically, SS tasks were proposed to address the catastrophic forgetting issue in the GAN discriminator. In this work, we perform an in-depth analysis to understand how SS tasks interact with the learning of the generator. From the analysis, we identify issues of SS tasks which allow a severely mode-collapsed generator to excel at the SS tasks. To address the issues, we propose new SS tasks based on a multi-class minimax game. The competition between our proposed SS tasks in the game encourages the generator to learn the data distribution and generate diverse samples. We provide both theoretical and empirical analysis to support that our proposed SS tasks have better convergence properties. We conduct experiments to incorporate our proposed SS tasks into two different GAN baseline models. Our approach establishes state-of-the-art FID scores on CIFAR-10, CIFAR-100, STL-10, CelebA, Imagenet $32\\times32$ and Stacked-MNIST datasets, outperforming existing works by considerable margins in some cases. Our unconditional GAN model approaches the performance of a conditional GAN without using labeled data. Our code: https://github.com/tntrung/msgan", "field": ["Generative Models", "Convolutions"], "task": ["Image Generation", "Representation Learning"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["ImageNet 32x32", "CIFAR-100", "CIFAR-10"], "metric": ["FID"], "title": "Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game"} {"abstract": "Recent advances in Generative Adversarial Networks (GANs) have led to their widespread adoption for the purposes of generating high-quality synthetic imagery. While capable of generating photo-realistic images, these models often produce unrealistic samples which fall outside of the data manifold. 
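A hypothetical PyTorch sketch of the SpinalNet-style head described above: the input is split into two halves, each small hidden layer sees one half plus the previous layer's output, and all hidden outputs feed the classifier (the "output row"). The layer width and the alternating half assignment are assumptions, not the released configuration.

```python
import torch
import torch.nn as nn

class SpinalFC(nn.Module):
    """Gradual-input fully-connected head: each hidden layer receives half of
    the input features plus the preceding layer's output; the classifier sees
    the concatenation of all hidden outputs."""
    def __init__(self, in_features, layer_width, num_layers, num_classes):
        super().__init__()
        self.half = in_features // 2
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            extra = 0 if i == 0 else layer_width
            self.layers.append(nn.Sequential(nn.Linear(self.half + extra, layer_width), nn.ReLU()))
        self.out = nn.Linear(layer_width * num_layers, num_classes)

    def forward(self, x):
        halves = [x[:, : self.half], x[:, self.half : 2 * self.half]]
        outputs, prev = [], None
        for i, layer in enumerate(self.layers):
            inp = halves[i % 2] if prev is None else torch.cat([halves[i % 2], prev], dim=1)
            prev = layer(inp)
            outputs.append(prev)
        return self.out(torch.cat(outputs, dim=1))
```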
Several recently proposed techniques attempt to avoid spurious samples, either by rejecting them after generation, or by truncating the model's latent space. While effective, these methods are inefficient, as a large fraction of training time and model capacity are dedicated towards samples that will ultimately go unused. In this work we propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place. By refining the empirical data distribution before training, we redirect model capacity towards high-density regions, which ultimately improves sample fidelity, lowers model capacity requirements, and significantly reduces training time. Code is available at https://github.com/uoguelph-mlrg/instance_selection_for_gans.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Attention Modules", "Regularization", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Non-Local Block", "Softmax", "BigGAN", "Residual Block", "Rectified Linear Units"], "dataset": ["ImageNet64x64", "ImageNet 128x128"], "metric": ["Inception Score", "Inception score", "FID"], "title": "Instance Selection for GANs"} {"abstract": "With the recent success of the pre-training technique for NLP and image-linguistic tasks, some video-linguistic pre-training works are gradually developed to improve video-text related downstream tasks. However, most of the existing multimodal models are pre-trained for understanding tasks, leading to a pretrain-finetune discrepancy for generation tasks. This paper proposes UniVL: a Unified Video and Language pre-training model for both multimodal understanding and generation. It comprises four components, including two single-modal encoders, a cross encoder, and a decoder with the Transformer backbone. Five objectives, including video-text joint, conditioned masked language model (CMLM), conditioned masked frame model (CMFM), video-text alignment, and language reconstruction, are designed to train each of the components. We further develop two pre-training strategies, stage by stage pre-training (StagedP) and enhanced video representation (EnhancedV), to make the training process of the UniVL more effective. The pre-train is carried out on a sizeable instructional video dataset HowTo100M. 
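One way to realize the instance-selection step described above is to score every training image with a density model fitted in an embedding space (for example, features from a pretrained classifier) and keep only the densest fraction before GAN training begins. The Gaussian density and the 50% keep ratio below are assumptions for illustration; the paper studies specific embedding and scoring choices.

```python
import numpy as np

def select_instances(embeddings, keep_ratio=0.5):
    """Keep the highest-density instances under a single Gaussian fit to the
    embeddings; returns the indices of the retained training images."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    inv = np.linalg.inv(cov)
    diff = embeddings - mu
    scores = -np.einsum("nd,dk,nk->n", diff, inv, diff)   # log-density up to a constant
    keep = np.argsort(scores)[::-1][: int(len(scores) * keep_ratio)]
    return keep
```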
Experimental results demonstrate that the UniVL can learn strong video-text representation and achieves state-of-the-art results on five downstream tasks.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Video Captioning", "Video Retrieval"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["YouCook2", "MSR-VTT"], "metric": ["text-to-video Median Rank", "METEOR", "text-to-video R@5", "CIDEr", "BLEU-3", "text-to-video R@1", "ROUGE-L", "BLEU-4", "text-to-video R@10"], "title": "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"} {"abstract": "Skeleton-based human action recognition has attracted a lot of research\nattention during the past few years. Recent works attempted to utilize\nrecurrent neural networks to model the temporal dependencies between the 3D\npositional configurations of human body joints for better analysis of human\nactivities in the skeletal data. The proposed work extends this idea to spatial\ndomain as well as temporal domain to better analyze the hidden sources of\naction-related information within the human skeleton sequences in both of these\ndomains simultaneously. Based on the pictorial structure of Kinect's skeletal\ndata, an effective tree-structure based traversal framework is also proposed.\nIn order to deal with the noise in the skeletal data, a new gating mechanism\nwithin LSTM module is introduced, with which the network can learn the\nreliability of the sequential data and accordingly adjust the effect of the\ninput data on the updating procedure of the long-term context representation\nstored in the unit's memory cell. Moreover, we introduce a novel multi-modal\nfeature fusion strategy within the LSTM unit in this paper. The comprehensive\nexperimental results on seven challenging benchmark datasets for human action\nrecognition demonstrate the effectiveness of the proposed method.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Recognition", "One-Shot 3D Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NTU RGB+D 120", "SYSU 3D"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy"], "title": "Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network with Trust Gates"} {"abstract": "Gait recognition, applied to identify individual walking patterns in a long-distance, is one of the most promising video-based biometric technologies. At present, most gait recognition methods take the whole human body as a unit to establish the spatio-temporal representations. However, we have observed that different parts of human body possess evidently various visual appearances and movement patterns during walking. 
In the latest literature, employing partial features for human body description has been shown to be beneficial to individual recognition. Taking the above insights together, we assume that each part of the human body needs its own spatio-temporal expression. We therefore propose a novel part-based model, GaitPart, which boosts performance in two respects: On the one hand, the Focal Convolution Layer, a new application of convolution, is presented to enhance the fine-grained learning of part-level spatial features. On the other hand, the Micro-motion Capture Module (MCM) is proposed, and several parallel MCMs in GaitPart correspond to the pre-defined parts of the human body. It is worth mentioning that the MCM is a novel way of temporal modeling for the gait task, which focuses on short-range temporal features rather than redundant long-range features of the gait cycle. Experiments on two of the most popular public datasets, CASIA-B and OU-MVLP, demonstrate that our method sets a new state-of-the-art on multiple standard benchmarks. The source code will be available at https://github.com/ChaoFan96/GaitPart.", "field": ["Convolutions"], "task": ["Gait Recognition", "Motion Capture", "Multiview Gait Recognition"], "method": ["Convolution"], "dataset": ["CASIA-B", "OU-MVLP"], "metric": ["Accuracy (Cross-View)", "BG#1-2", "NM#5-6 ", "Accuracy (Cross-View, Avg)", "CL#1-2"], "title": "GaitPart: Temporal Part-Based Model for Gait Recognition"} {"abstract": "Recently, neural network based approaches have achieved significant improvement for solving large, complex, graph-structured problems. However, their bottlenecks still need to be addressed, and the advantages of multi-scale information and deep architectures have not been sufficiently exploited. In this paper, we theoretically analyze how existing Graph Convolutional Networks (GCNs) have limited expressive power due to the constraint of the activation functions and their architectures. We generalize spectral graph convolution and deep GCN in block Krylov subspace forms and devise two architectures, both with the potential to be scaled deeper but each making use of the multi-scale information in different ways. We further show that the equivalence of these two architectures can be established under certain conditions. On several node classification tasks, with or without the help of validation, the two new architectures achieve better performance compared to many state-of-the-art methods.", "field": ["Convolutions", "Graph Models"], "task": ["Node Classification"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["PubMed (0.1%)", "PubMed (0.03%)", "Cora (1%)", "PubMed (0.05%)", "Cora (3%)", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer (0.5%)", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks"} {"abstract": "In contrast to the literature where local patterns in 3D point clouds are captured by customized convolutional operators, in this paper we study the problem of how to effectively and efficiently project such point clouds into a 2D image space so that traditional 2D convolutional neural networks (CNNs) such as U-Net can be applied for segmentation. 
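A minimal sketch of the part-wise ("focal") convolution idea from the GaitPart abstract above: the feature map is split into horizontal strips and each strip is convolved independently, so the receptive field stays within a body part. The number of parts and the shared kernel across parts are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FocalConv2d(nn.Module):
    """Split the feature map into horizontal parts and convolve each part
    separately, keeping spatial learning local to each body part."""
    def __init__(self, in_ch, out_ch, num_parts=4, k=3):
        super().__init__()
        self.num_parts = num_parts
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):                                  # x: (N, C, H, W)
        parts = torch.chunk(x, self.num_parts, dim=2)      # split along height
        return torch.cat([self.conv(p) for p in parts], dim=2)
```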
To this end, we are motivated by graph drawing and reformulate it as an integer programming problem to learn the topology-preserving graph-to-grid mapping for each individual point cloud. To accelerate the computation in practice, we further propose a novel hierarchical approximate algorithm. With the help of the Delaunay triangulation for graph construction from point clouds and a multi-scale U-Net for segmentation, we manage to demonstrate the state-of-the-art performance on ShapeNet and PartNet, respectively, with significant improvement over the literature. Code is available at https://github.com/Zhang-VISLab.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["3D Part Segmentation", "graph construction"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "Learning to Segment 3D Point Clouds in 2D Image Space"} {"abstract": "In natural language processing, relation extraction seeks to rationally understand unstructured text. Here, we propose a novel SpanBERT-based graph convolutional network (DG-SpanBERT) that extracts semantic features from a raw sentence using the pre-trained language model SpanBERT and a graph convolutional network to pool latent features. Our DG-SpanBERT model inherits the advantage of SpanBERT on learning rich lexical features from large-scale corpus. It also has the ability to capture long-range relations between entities due to the usage of GCN on dependency tree. The experimental results show that our model outperforms other existing dependency-based and sequence-based models and achieves a state-of-the-art performance on the TACRED dataset.", "field": ["Graph Models"], "task": ["Language Modelling", "Relation Extraction"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["TACRED"], "metric": ["F1"], "title": "Efficient long-distance relation extraction with DG-SpanBERT"} {"abstract": "Following the global COVID-19 pandemic, the number of scientific papers studying the virus has grown massively, leading to increased interest in automated literate review. We present a clinical text mining system that improves on previous efforts in three ways. First, it can recognize over 100 different entity types including social determinants of health, anatomy, risk factors, and adverse events in addition to other commonly used clinical and biomedical entities. Second, the text processing pipeline includes assertion status detection, to distinguish between clinical facts that are present, absent, conditional, or about someone other than the patient. Third, the deep learning models used are more accurate than previously available, leveraging an integrated pipeline of state-of-the-art pretrained named entity recognition models, and improving on the previous best performing benchmarks for assertion status detection. We illustrate extracting trends and insights, e.g. most frequent disorders and symptoms, and most common vital signs and EKG findings, from the COVID-19 Open Research Dataset (CORD-19). 
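The Delaunay-based graph construction mentioned above can be sketched with SciPy as follows; only the extraction of undirected edges from the triangulation's simplices is shown, and the remainder of the pipeline (graph drawing as integer programming, the multi-scale U-Net) is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Build an undirected edge list from the Delaunay triangulation of a
    point set (works for 2D and 3D point clouds)."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = int(simplex[i]), int(simplex[j])
                edges.add((min(a, b), max(a, b)))
    return np.array(sorted(edges))

# edges = delaunay_edges(np.random.rand(100, 3))  # toy 3D point cloud
```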
The system is built using the Spark NLP library, which natively supports scaling to distributed clusters, leveraging GPUs, configurable and reusable NLP pipelines, healthcare-specific embeddings, and the ability to train models to support new entity types or human languages with no code changes.", "field": ["Bidirectional Recurrent Neural Networks", "Output Functions", "Word Embeddings"], "task": ["Clinical Assertion Status Detection", "Clinical Concept Extraction", "Named Entity Recognition"], "method": ["Softmax", "BiLSTM", "CNN BiLSTM", "CNN Bidirectional LSTM", "Bidirectional LSTM", "Skip-gram Word2Vec"], "dataset": ["2010 i2b2/VA"], "metric": ["Micro F1"], "title": "Improving Clinical Document Understanding on COVID-19 Research with Spark NLP"} {"abstract": "To provide more accurate, diverse, and explainable recommendation, it is essential to go beyond modeling user-item interactions and take side information into account. Traditional methods like factorization machine (FM) cast it as a supervised learning problem, which treats each interaction as an independent instance with side information encoded. Because they overlook the relations among instances or items (e.g., the director of a movie is also an actor of another movie), these methods are insufficient to distill the collaborative signal from the collective behaviors of users. In this work, we investigate the utility of knowledge graph (KG), which breaks down the independent interaction assumption by linking items with their attributes. We argue that in such a hybrid structure of KG and user-item graph, high-order relations --- which connect two items with one or multiple linked attributes --- are an essential factor for successful recommendation. We propose a new method named Knowledge Graph Attention Network (KGAT) which explicitly models the high-order connectivities in KG in an end-to-end fashion. It recursively propagates the embeddings from a node's neighbors (which can be users, items, or attributes) to refine the node's embedding, and employs an attention mechanism to discriminate the importance of the neighbors. Our KGAT is conceptually advantageous to existing KG-based recommendation methods, which either exploit high-order relations by extracting paths or model them implicitly with regularization. Empirical results on three public benchmarks show that KGAT significantly outperforms state-of-the-art methods like Neural FM and RippleNet. Further studies verify the efficacy of embedding propagation for high-order relation modeling and the interpretability benefits brought by the attention mechanism.", "field": ["Image Models"], "task": ["Knowledge Graphs", "Link Prediction", "Recommendation Systems"], "method": ["Interpretability"], "dataset": ["MovieLens 25M", "Yelp"], "metric": ["Hits@10", "nDCG@10", "HR@10"], "title": "KGAT: Knowledge Graph Attention Network for Recommendation"} {"abstract": "To improve the discriminative and generalization ability of lightweight networks for face recognition, we propose an efficient variable group convolutional network called VarGFaceNet. Variable group convolution is introduced by VarGNet to solve the conflict between small computational cost and the imbalance of computational intensity inside a block. We employ variable group convolution to design our network, which supports large-scale face identification while reducing computational cost and parameter count. 
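Variable group convolution, as described in the VarGFaceNet abstract above, fixes the number of channels per group so that the group count varies with layer width. A one-line PyTorch sketch follows; the value of 8 channels per group is an assumption, and the output channel count must be divisible by the resulting number of groups.

```python
import torch.nn as nn

def var_group_conv(in_ch, out_ch, channels_per_group=8, k=3, stride=1):
    """Grouped conv with a constant number of channels per group; note that
    out_ch must be divisible by the computed group count."""
    groups = max(1, in_ch // channels_per_group)
    return nn.Conv2d(in_ch, out_ch, k, stride=stride, padding=k // 2, groups=groups)
```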
Specifically, we use a head setting to reserve essential information at the start of the network and propose a particular embedding setting to reduce parameters of fully-connected layer for embedding. To enhance interpretation ability, we employ an equivalence of angular distillation loss to guide our lightweight network and we apply recursive knowledge distillation to relieve the discrepancy between the teacher model and the student model. The champion of deepglint-light track of LFR (2019) challenge demonstrates the effectiveness of our model and approach. Implementation of VarGFaceNet will be released at https://github.com/zma-c-137/VarGFaceNet soon.", "field": ["Convolutions"], "task": ["Face Detection", "Face Identification", "Face Recognition", "Knowledge Distillation"], "method": ["Convolution"], "dataset": ["CFP-FP", "AgeDB-30", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition"} {"abstract": "We propose PanopticFusion, a novel online volumetric semantic mapping system at the level of stuff and things. In contrast to previous semantic mapping systems, PanopticFusion is able to densely predict class labels of a background region (stuff) and individually segment arbitrary foreground objects (things). In addition, our system has the capability to reconstruct a large-scale scene and extract a labeled mesh thanks to its use of a spatially hashed volumetric map representation. Our system first predicts pixel-wise panoptic labels (class labels for stuff regions and instance IDs for thing regions) for incoming RGB frames by fusing 2D semantic and instance segmentation outputs. The predicted panoptic labels are integrated into the volumetric map together with depth measurements while keeping the consistency of the instance IDs, which could vary frame to frame, by referring to the 3D map at that moment. In addition, we construct a fully connected conditional random field (CRF) model with respect to panoptic labels for map regularization. For online CRF inference, we propose a novel unary potential approximation and a map division strategy. We evaluated the performance of our system on the ScanNet (v2) dataset. PanopticFusion outperformed or compared with state-of-the-art offline 3D DNN methods in both semantic and instance segmentation benchmarks. Also, we demonstrate a promising augmented reality application using a 3D panoptic map generated by the proposed system.", "field": ["Structured Prediction"], "task": ["3D Instance Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": ["Conditional Random Field", "CRF"], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things"} {"abstract": "Recent research has shown the advantages of using autoencoders based on deep neural networks for collaborative filtering. In particular, the recently proposed Mult-VAE model, which used the multinomial likelihood variational autoencoders, has shown excellent results for top-N recommendations. In this work, we propose the Recommender VAE (RecVAE) model that originates from our research on regularization techniques for variational autoencoders. 
RecVAE introduces several novel ideas to improve Mult-VAE, including a novel composite prior distribution for the latent codes, a new approach to setting the $\\beta$ hyperparameter for the $\\beta$-VAE framework, and a new approach to training based on alternating updates. In experimental evaluation, we show that RecVAE significantly outperforms previously proposed autoencoder-based models, including Mult-VAE and RaCT, across classical collaborative filtering datasets, and present a detailed ablation study to assess our new developments. Code and models are available at https://github.com/ilya-shenbin/RecVAE.", "field": ["Generative Models"], "task": ["Recommendation Systems"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["Netflix", "MovieLens 20M", "Million Song Dataset"], "metric": ["Recall@50", "Recall@20", "nDCG@100"], "title": "RecVAE: a New Variational Autoencoder for Top-N Recommendations with Implicit Feedback"} {"abstract": "In this paper, we propose a conceptually simple and geometrically\ninterpretable objective function, i.e. additive margin Softmax (AM-Softmax),\nfor deep face verification. In general, the face verification task can be\nviewed as a metric learning problem, so learning large-margin face features\nwhose intra-class variation is small and inter-class difference is large is of\ngreat importance in order to achieve good performance. Recently, Large-margin\nSoftmax and Angular Softmax have been proposed to incorporate the angular\nmargin in a multiplicative manner. In this work, we introduce a novel additive\nangular margin for the Softmax loss, which is intuitively appealing and more\ninterpretable than the existing works. We also emphasize and discuss the\nimportance of feature normalization in the paper. Most importantly, our\nexperiments on LFW BLUFR and MegaFace show that our additive margin softmax\nloss consistently performs better than the current state-of-the-art methods\nusing the same network architecture and training dataset. Our code has also\nbeen made available at https://github.com/happynear/AMSoftmax", "field": ["Output Functions"], "task": ["Face Verification", "Metric Learning"], "method": ["Softmax"], "dataset": ["Trillion Pairs Dataset"], "metric": ["Accuracy"], "title": "Additive Margin Softmax for Face Verification"} {"abstract": "Gradient backpropagation (BP) requires symmetric feedforward and feedback\nconnections -- the same weights must be used for forward and backward passes.\nThis \"weight transport problem\" (Grossberg 1987) is thought to be one of the\nmain reasons to doubt BP's biologically plausibility. Using 15 different\nclassification datasets, we systematically investigate to what extent BP really\ndepends on weight symmetry. In a study that turned out to be surprisingly\nsimilar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014)\nbut orthogonal in its results, our experiments indicate that: (1) the\nmagnitudes of feedback weights do not matter to performance (2) the signs of\nfeedback weights do matter -- the more concordant signs between feedforward and\ntheir corresponding feedback connections, the better (3) with feedback weights\nhaving random magnitudes and 100% concordant signs, we were able to achieve the\nsame or even better performance than SGD. 
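A compact PyTorch sketch of the additive margin softmax objective summarized above: features and class weights are L2-normalized, a fixed margin m is subtracted from the target-class cosine, and the scaled logits go through cross-entropy. The default s and m values below are common choices and not necessarily the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive margin softmax: cosine logits with margin m subtracted from the
    target class, scaled by s, followed by cross-entropy."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        cos = F.linear(F.normalize(features), F.normalize(self.weight))       # cosine similarities
        margin = torch.zeros_like(cos).scatter_(1, labels.unsqueeze(1), self.m)
        return F.cross_entropy(self.s * (cos - margin), labels)
```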
(4) some\nnormalizations/stabilizations are indispensable for such asymmetric BP to work,\nnamely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a \"Batch\nManhattan\" (BM) update rule.", "field": ["Normalization", "Stochastic Optimization"], "task": ["Handwritten Digit Recognition", "Image Classification"], "method": ["Stochastic Gradient Descent", "SGD", "Batch Normalization"], "dataset": ["CIFAR-100", "CIFAR-10", "MNIST", "STL-10", "SVHN"], "metric": ["Percentage error", "PERCENTAGE ERROR", "Percentage correct"], "title": "How Important is Weight Symmetry in Backpropagation?"} {"abstract": "We introduce a general framework for several information extraction tasks\nthat share span representations using dynamically constructed span graphs. The\ngraphs are constructed by selecting the most confident entity spans and linking\nthese nodes with confidence-weighted relation types and coreferences. The\ndynamic span graph allows coreference and relation type confidences to\npropagate through the graph to iteratively refine the span representations.\nThis is unlike previous multi-task frameworks for information extraction in\nwhich the only interaction between tasks is in the shared first-layer LSTM. Our\nframework significantly outperforms the state-of-the-art on multiple\ninformation extraction tasks across multiple datasets reflecting different\ndomains. We further observe that the span enumeration approach is good at\ndetecting nested span entities, with significant F1 score improvement on the\nACE dataset.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Joint Entity and Relation Extraction", "Named Entity Recognition", "Relation Extraction"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SciERC", "WLPC", "ACE 2005", "ACE 2004"], "metric": ["Entity F1", "Relation F1", "Sentence Encoder", "F1", "RE Micro F1", "NER Micro F1"], "title": "A General Framework for Information Extraction using Dynamic Span Graphs"} {"abstract": "This paper proposes a Deep Learning based edge detector, which is inspired on both HED (Holistically-Nested Edge Detection) and Xception networks. The proposed approach generates thin edge-maps that are plausible for human eyes; it can be used in any edge detection task without previous training or fine tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well the state-of-the-art algorithms for comparisons. 
Quantitative and qualitative evaluations have been performed on different benchmarks showing improvements with the proposed method when F-measure of ODS and OIS are considered.", "field": ["Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections"], "task": ["Boundary Detection", "Edge Detection"], "method": ["Depthwise Convolution", "Average Pooling", "Softmax", "Convolution", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Residual Connection", "Xception", "Depthwise Separable Convolution", "Pointwise Convolution", "Global Average Pooling", "Dense Connections", "Max Pooling"], "dataset": ["BIPED", "CID"], "metric": ["ODS"], "title": "Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection"} {"abstract": "Despite the great success of two-stage detectors, single-stage detector is\nstill a more elegant and efficient way, yet suffers from the two well-known\ndisharmonies during training, i.e. the huge difference in quantity between\npositive and negative examples as well as between easy and hard examples. In\nthis work, we first point out that the essential effect of the two disharmonies\ncan be summarized in term of the gradient. Further, we propose a novel gradient\nharmonizing mechanism (GHM) to be a hedging for the disharmonies. The\nphilosophy behind GHM can be easily embedded into both classification loss\nfunction like cross-entropy (CE) and regression loss function like smooth-$L_1$\n($SL_1$) loss. To this end, two novel loss functions called GHM-C and GHM-R are\ndesigned to balancing the gradient flow for anchor classification and bounding\nbox refinement, respectively. Ablation study on MS COCO demonstrates that\nwithout laborious hyper-parameter tuning, both GHM-C and GHM-R can bring\nsubstantial improvement for single-stage detector. Without any whistles and\nbells, our model achieves 41.6 mAP on COCO test-dev set which surpasses the\nstate-of-the-art method, Focal Loss (FL) + $SL_1$, by 0.8.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Object Detection", "Regression"], "method": ["Average Pooling", "GHM-C", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Grouped Convolution", "Focal Loss", "Batch Normalization", "GHM-R", "Residual Network", "Gradient Harmonizing Mechanism C", "Kaiming Initialization", "ResNeXt Block", "Gradient Harmonizing Mechanism R", "ResNeXt", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Gradient Harmonized Single-stage Detector"} {"abstract": "Background and Objective: Code assignment is of paramount importance in many levels in modern hospitals, from ensuring accurate billing process to creating a valid record of patient care history. However, the coding process is tedious and subjective, and it requires medical coders with extensive training. This study aims to evaluate the performance of deep-learning-based systems to automatically map clinical notes to ICD-9 medical codes. Methods: The evaluations of this research are focused on end-to-end learning methods without manually defined rules. 
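A simplified sketch of the gradient-harmonizing idea (GHM-C) described above for binary classification: examples are re-weighted inversely to how densely populated their gradient-norm bin is, so easy negatives and extreme outliers are both down-weighted. The binning and normalization below are a rough approximation of the paper's scheme, not a faithful reimplementation.

```python
import torch

def ghm_weights(pred_probs, targets, bins=10):
    """Re-weight examples by the inverse density of their gradient norm |p - t|."""
    g = (pred_probs.detach() - targets).abs()
    edges = torch.linspace(0, 1, bins + 1)
    weights = torch.zeros_like(g)
    n = g.numel()
    for i in range(bins):
        upper = edges[i + 1] + (1e-6 if i == bins - 1 else 0)  # include g == 1 in the last bin
        mask = (g >= edges[i]) & (g < upper)
        count = mask.sum().item()
        if count > 0:
            weights[mask] = n / (count * bins)                 # sparse bins get larger weights
    return weights
```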
Traditional machine learning algorithms, as well as state-of-the-art deep learning methods such as Recurrent Neural Networks and Convolutional Neural Networks, were applied to the Medical Information Mart for Intensive Care (MIMIC-III) dataset. An extensive set of experiments was run with different settings of the tested algorithms. Results: Findings showed that the deep learning-based methods outperformed conventional machine learning methods. From our assessment, the best models could predict the top 10 ICD-9 codes with 0.6957 F1 and 0.8967 accuracy and could estimate the top 10 ICD-9 categories with 0.7233 F1 and 0.8588 accuracy. Our implementation also outperformed existing work under certain evaluation metrics. Conclusion: A set of standard metrics was utilized in assessing the performance of ICD-9 code assignment on the MIMIC-III dataset. All the developed evaluation tools and resources are available online and can be used as a baseline for further research.", "field": ["Convolutions"], "task": ["Multi-Label Classification", "Multi-Label Classification Of Biomedical Texts", "Multi-Label Text Classification"], "method": ["Convolution"], "dataset": ["MIMIC-III"], "metric": ["Precision", "Recall"], "title": "An Empirical Evaluation of Deep Learning for ICD-9 Code Assignment using MIMIC-III Clinical Notes"} {"abstract": "The version identification (VI) task deals with the automatic detection of recordings that correspond to the same underlying musical piece. Despite many efforts, VI is still an open problem, with much room for improvement, especially with regard to combining accuracy and scalability. In this paper, we present MOVE, a musically-motivated method for accurate and scalable version identification. MOVE achieves state-of-the-art performance on two publicly-available benchmark sets by learning scalable embeddings in a Euclidean distance space, using a triplet loss and a hard triplet mining strategy. It improves over previous work by employing an alternative input representation, and introducing a novel technique for temporal content summarization, a standardized latent space, and a data augmentation strategy specifically designed for VI. In addition to the main results, we perform an ablation study to highlight the importance of our design choices, and study the relation between embedding dimensionality and model performance.", "field": ["Loss Functions"], "task": ["Cover song identification"], "method": ["Triplet Loss"], "dataset": ["Covers80", "YouTube350"], "metric": ["MAP"], "title": "Accurate and Scalable Version Identification Using Musically-Motivated Embeddings"} {"abstract": "The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. 
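The triplet loss with hard mining mentioned in the MOVE abstract above can be sketched as a batch-hard variant in PyTorch; the margin value and the in-batch mining strategy are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard mining: for each anchor take the farthest positive and the
    closest negative in the batch, then apply the triplet margin."""
    dist = torch.cdist(embeddings, embeddings)               # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (dist * same.float()).max(dim=1).values            # hardest positive per anchor
    neg = (dist + same.float() * 1e9).min(dim=1).values      # hardest negative per anchor
    return F.relu(pos - neg + margin).mean()
```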
With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128x128 and 2-4x reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.", "field": ["Adversarial Training", "Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Attention Modules", "Regularization", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Cutout", "Adam", "Self-Attention GAN", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "Path Length Regularization", "StyleGAN2", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Leaky ReLU", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Weight Demodulation", "ColorJitter", "Non-Local Block", "Color Jitter", "DiffAugment", "Softmax", "BigGAN", "R1 Regularization", "Residual Block", "Rectified Linear Units"], "dataset": ["CIFAR-10 (20% data)", "CIFAR-10 (10% data)", "ImageNet 128x128", "CIFAR-10"], "metric": ["Inception score", "FID-10k-test", "FID", "IS"], "title": "Differentiable Augmentation for Data-Efficient GAN Training"} {"abstract": "Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetics, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 1.37%/1.18% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by 25.5% absolute increment on accuracy when trained on all the annotated programs. 
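A toy sketch of the DiffAugment idea summarized above: the same differentiable transforms are applied to both the real and the generated batch inside the training graph, so gradients still reach the generator. Only a brightness shift and a wrap-around translation are shown; the paper's actual policies (color, translation with padding, cutout) are richer.

```python
import torch

def diff_augment(x):
    """Apply simple differentiable augmentations to a batch of images (N, C, H, W)."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)          # random brightness
    shift = torch.randint(-2, 3, (2,))
    x = torch.roll(x, shifts=(int(shift[0]), int(shift[1])), dims=(2, 3))    # random translation (wrap-around)
    return x

# d_real = discriminator(diff_augment(real_images))
# d_fake = discriminator(diff_augment(generator(z)))
```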
More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Output Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Data Augmentation", "Question Answering", "Reading Comprehension"], "method": ["Weight Decay", "Long Short-Term Memory", "Adam", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["DROP Test"], "metric": ["F1"], "title": "Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension"} {"abstract": "We show that, in images of man-made environments, the horizon line can usually be hypothesized based on an a contrario detection of second-order grouping events. This allows constraining the extraction of the horizontal vanishing points on that line, thus reducing false detections. Experiments made on three datasets show that our method, not only achieves state-of-the-art performance w.r.t. horizon line detection on two datasets, but also yields much less spurious vanishing points than the previous top-ranked methods.", "field": ["Graph Embeddings"], "task": ["Horizon Line Estimation"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["York Urban Dataset", "Horizon Lines in the Wild", "Eurasian Cities Dataset"], "metric": ["AUC (horizon error)"], "title": "A-Contrario Horizon-First Vanishing Point Detection Using Second-Order Grouping Laws"} {"abstract": "The non-local block is a popular module for strengthening the context modeling ability of a regular convolutional neural network. This paper first studies the non-local block in depth, where we find that its attention computation can be split into two terms, a whitened pairwise term accounting for the relationship between two pixels and a unary term representing the saliency of every pixel. We also observe that the two terms trained alone tend to model different visual clues, e.g. the whitened pairwise term learns within-region relationships while the unary term learns salient boundaries. However, the two terms are tightly coupled in the non-local block, which hinders the learning of each. Based on these findings, we present the disentangled non-local block, where the two terms are decoupled to facilitate learning for both terms. We demonstrate the effectiveness of the decoupled design on various tasks, such as semantic segmentation on Cityscapes, ADE20K and PASCAL Context, object detection on COCO, and action recognition on Kinetics.", "field": ["Image Model Blocks", "Convolutions", "Skip Connections", "Image Feature Extractors"], "task": ["Action Recognition", "Object Detection", "Semantic Segmentation"], "method": ["1x1 Convolution", "Non-Local Block", "Residual Connection", "Non-Local Operation"], "dataset": ["PASCAL Context", "Cityscapes test", "ADE20K val"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Disentangled Non-Local Neural Networks"} {"abstract": "We study the pre-train + fine-tune strategy for data-to-text tasks. 
Fine-tuning T5 achieves state-of-the-art results on the WebNLG, MultiWoz and ToTTo benchmarks. Such transfer learning enables training of fully end-to-end models that do not rely on any intermediate planning steps, delexicalization or copy mechanisms. T5 pre-training also enables stronger generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as pre-training becomes ever more prevalent for data-to-text tasks.", "field": ["Output Functions", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Regularization", "Activation Functions", "Subword Segmentation", "Normalization", "Tokenizers", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Data-to-Text Generation", "Transfer Learning"], "method": ["Inverse Square Root Schedule", "Layer Normalization", "Byte Pair Encoding", "GLU", "Gated Linear Unit", "BPE", "Softmax", "SentencePiece", "Adafactor", "Multi-Head Attention", "Attention Dropout", "T5", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["WebNLG Full", "MULTIWOZ 2.1", "ToTTo", "WebNLG"], "metric": ["BLEU", "PARENT"], "title": "Text-to-Text Pre-Training for Data-to-Text Tasks"} {"abstract": "We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Density Estimation", "Image Generation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["MNIST", "CIFAR-10"], "metric": ["bits/dimension"], "title": "Invertible Residual Networks"} {"abstract": "Graph convolutional networks (GCNs), which generalize CNNs to more generic non-Euclidean structures, have achieved remarkable performance for skeleton-based action recognition. However, there still exist several issues in the previous GCN-based models. First, the topology of the graph is set heuristically and fixed over all the model layers and input data. This may not be suitable for the hierarchy of the GCN model and the diversity of the data in action recognition tasks. 
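For the invertible residual blocks described above, the inverse of y = x + g(x) has no closed form, but it can be recovered by fixed-point iteration whenever the residual branch g is constrained to be contractive (for example, via spectral normalization). A minimal sketch, with the iteration count chosen arbitrarily:

```python
import torch

def invert_residual(y, g, n_iters=20):
    """Invert y = x + g(x) by the fixed-point iteration x <- y - g(x); this
    converges when g has Lipschitz constant strictly below one."""
    with torch.no_grad():
        x = y.clone()
        for _ in range(n_iters):
            x = y - g(x)
    return x
```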
Second, the second-order information of the skeleton data, i.e., the length and orientation of the bones, is rarely investigated, which is naturally more informative and discriminative for the human action recognition. In this work, we propose a novel multi-stream attention-enhanced adaptive graph convolutional neural network (MS-AAGCN) for skeleton-based action recognition. The graph topology in our model can be either uniformly or individually learned based on the input data in an end-to-end manner. This data-driven approach increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Besides, the proposed adaptive graph convolutional layer is further enhanced by a spatial-temporal-channel attention module, which helps the model pay more attention to important joints, frames and features. Moreover, the information of both the joints and bones, together with their motion information, are simultaneously modeled in a multi-stream framework, which shows notable improvement for the recognition accuracy. Extensive experiments on the two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art with a significant margin.", "field": ["Graph Models"], "task": ["Action Recognition", "graph construction", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks"} {"abstract": "Binary neural networks have attracted numerous attention in recent years. However, mainly due to the information loss stemming from the biased binarization, how to preserve the accuracy of networks still remains a critical issue. In this paper, we attempt to maintain the information propagated in the forward process and propose a Balanced Binary Neural Networks with Gated Residual (BBG for short). First, a weight balanced binarization is introduced to maximize information entropy of binary weights, and thus the informative binary weights can capture more information contained in the activations. Second, for binary activations, a gated residual is further appended to compensate their information loss during the forward process, with a slight overhead. Both techniques can be wrapped as a generic network module that supports various network architectures for different tasks including classification and detection. We evaluate our BBG on image classification tasks over CIFAR-10/100 and ImageNet and on detection task over Pascal VOC. 
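The second-order (bone) information discussed above is typically computed as the vector from each joint to its parent in the skeleton tree, capturing both bone length and orientation. A small NumPy sketch with a toy parent list follows; the real parent indices are dataset-specific (the list here is only for illustration).

```python
import numpy as np

PARENTS = [0, 0, 1, 2, 3]          # toy skeleton: joint i hangs off PARENTS[i]; joint 0 is the root

def bone_stream(joints):
    """Bone features: difference between each joint and its parent.
    joints: (frames, num_joints, 3) -> bones of the same shape (root bone is zero)."""
    return joints - joints[..., PARENTS, :]
```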
The experimental results show that BBG-Net performs remarkably well across various network architectures such as VGG, ResNet and SSD with the superior performance over state-of-the-art methods in terms of memory consumption, inference speed and accuracy.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Output Functions", "Proposal Filtering", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Binarization", "Image Classification"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "VGG", "SSD", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Balanced Binary Neural Networks with Gated Residual"} {"abstract": "Sequence-to-sequence attention-based models on subword units allow simple\nopen-vocabulary end-to-end speech recognition. In this work, we show that such\nmodels can achieve competitive results on the Switchboard 300h and LibriSpeech\n1000h tasks. In particular, we report the state-of-the-art word error rates\n(WER) of 3.54% on the dev-clean and 3.82% on the test-clean evaluation subsets\nof LibriSpeech. We introduce a new pretraining scheme by starting with a high\ntime reduction factor and lowering it during training, which is crucial both\nfor convergence and final performance. In some experiments, we also use an\nauxiliary CTC loss function to help the convergence. In addition, we train long\nshort-term memory (LSTM) language models on subword units. By shallow fusion,\nwe report up to 27% relative improvements in WER over the attention baseline\nwithout a language model.", "field": ["Loss Functions"], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": ["Connectionist Temporal Classification Loss", "CTC Loss"], "dataset": ["LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Improved training of end-to-end attention models for speech recognition"} {"abstract": "Region anchors are the cornerstone of modern object detection techniques.\nState-of-the-art detectors mostly rely on a dense anchoring scheme, where\nanchors are sampled uniformly over the spatial domain with a predefined set of\nscales and aspect ratios. In this paper, we revisit this foundational stage.\nOur study shows that it can be done much more effectively and efficiently.\nSpecifically, we present an alternative scheme, named Guided Anchoring, which\nleverages semantic features to guide the anchoring. The proposed method jointly\npredicts the locations where the center of objects of interest are likely to\nexist as well as the scales and aspect ratios at different locations. On top of\npredicted anchor shapes, we mitigate the feature inconsistency with a feature\nadaption module. We also study the use of high-quality proposals to improve\ndetection performance. The anchoring scheme can be seamlessly integrated into\nproposal methods and detectors. With Guided Anchoring, we achieve 9.1% higher\nrecall on MS COCO with 90% fewer anchors than the RPN baseline. 
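The end-to-end speech recognition record above mentions an auxiliary CTC loss used to help the attention model converge. A minimal PyTorch sketch of mixing such a CTC term with the attention (cross-entropy) objective follows; the tensor shapes, the 0.3 mixing weight and the random inputs are placeholders, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

T, N, C, S = 50, 4, 30, 12                 # time steps, batch, vocab size (incl. blank), target length
encoder_logits = torch.randn(T, N, C)       # stands in for per-frame encoder projections
targets = torch.randint(1, C, (N, S))       # index 0 is reserved for the CTC blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)
ctc_term = ctc(F.log_softmax(encoder_logits, dim=-1), targets, input_lengths, target_lengths)

attention_term = torch.tensor(2.0)          # placeholder for the decoder cross-entropy loss
total_loss = 0.7 * attention_term + 0.3 * ctc_term   # illustrative mixing weight
print(total_loss.item())
```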
We also adopt\nGuided Anchoring in Fast R-CNN, Faster R-CNN and RetinaNet, respectively\nimproving the detection mAP by 2.2%, 2.7% and 1.2%. Code will be available at\nhttps://github.com/open-mmlab/mmdetection.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Feature Extractors", "Activation Functions", "RoI Feature Extractors", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Anchor Generation Modules", "Skip Connections", "Object Detection Models", "Region Proposal", "Skip Connection Blocks"], "task": ["Object Detection", "Region Proposal"], "method": ["Fast R-CNN", "Average Pooling", "Faster R-CNN", "1x1 Convolution", "Region Proposal Network", "ResNet", "Guided Anchoring", "Convolution", "RoIPool", "ReLU", "Residual Connection", "FPN", "RPN", "Focal Loss", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Region Proposal by Guided Anchoring"} {"abstract": "In this paper, we introduce an anchor-box free and single shot instance segmentation method, which is conceptually simple, fully convolutional and can be used as a mask prediction module for instance segmentation, by easily embedding it into most off-the-shelf detection methods. Our method, termed PolarMask, formulates the instance segmentation problem as instance center classification and dense distance regression in a polar coordinate. Moreover, we propose two effective approaches to deal with sampling high-quality center examples and optimization for dense distance regression, respectively, which can significantly improve the performance and simplify the training process. Without any bells and whistles, PolarMask achieves 32.9% in mask mAP with single-model and single-scale training/testing on challenging COCO dataset. For the first time, we demonstrate a much simpler and flexible instance segmentation framework achieving competitive accuracy. We hope that the proposed PolarMask framework can serve as a fundamental and strong baseline for single shot instance segmentation tasks. Code is available at: github.com/xieenze/PolarMask.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Regression", "Semantic Segmentation"], "method": ["ResNet", "ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "PolarMask: Single Shot Instance Segmentation with Polar Representation"} {"abstract": "Training a supernet matters for one-shot neural architecture search (NAS) methods since it serves as a basic performance estimator for different architectures (paths). Current methods mainly hold the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally, and spare much effort to train paths. 
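As a small illustration of the polar parameterization described in the PolarMask record above, the snippet below decodes a center point plus per-angle ray lengths back into contour vertices; 36 rays is the commonly reported setting, and the center and distances here are synthetic.

```python
import numpy as np

n_rays = 36                                           # number of equally spaced rays (illustrative)
angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
center = np.array([64.0, 48.0])                       # (x, y) instance center, synthetic
dists = 20.0 + 5.0 * np.cos(3.0 * angles)             # predicted ray lengths, synthetic

# Each ray length, together with its fixed angle, gives one contour vertex.
contour = np.stack([center[0] + dists * np.cos(angles),
                    center[1] + dists * np.sin(angles)], axis=1)
print(contour.shape)                                  # (36, 2) polygon approximating the instance mask
```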
However, it is harsh for a single supernet to evaluate accurately on such a huge-scale search space (e.g., $7^{21}$). In this paper, instead of covering all paths, we ease the burden of supernet by encouraging it to focus more on evaluation of those potentially-good ones, which are identified using a surrogate portion of validation data. Concretely, during training, we propose a multi-path sampling strategy with rejection, and greedily filter the weak paths. The training efficiency is thus boosted since the training space has been greedily shrunk from all paths to those potentially-good ones. Moreover, we further adopt an exploration and exploitation policy by introducing an empirical candidate path pool. Our proposed method GreedyNAS is easy-to-follow, and experimental results on ImageNet dataset indicate that it can achieve better Top-1 accuracy under same search space and FLOPs or latency level, but with only $\\sim$60\\% of supernet training cost. By searching on a larger space, our GreedyNAS can also obtain new state-of-the-art architectures.", "field": ["Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Neural Architecture Search", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Depthwise Convolution", "Cosine Annealing", "Average Pooling", "RMSProp", "1x1 Convolution", "Nesterov Accelerated Gradient", "Convolution", "GreedyNAS-A", "ReLU", "GreedyNAS-B", "Dense Connections", "GreedyNAS-C", "Batch Normalization", "GreedyNAS", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Sigmoid Activation", "Inverted Residual Block", "Linear Warmup With Linear Decay", "Depthwise Separable Convolution", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "Top-1 Error Rate", "Params", "Accuracy", "Top 5 Accuracy"], "title": "GreedyNAS: Towards Fast One-Shot NAS with Greedy Supernet"} {"abstract": "Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. 
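Purely as a hedged illustration of the multi-path sampling with rejection summarized in the GreedyNAS record above (not the authors' algorithm in detail), the sketch below samples paths from a 7^21 space, keeps only the better-scoring half according to a surrogate score, and maintains a small candidate pool for exploitation; proxy_score and every constant are stand-ins.

```python
import random

N_LAYERS, N_OPS = 21, 7                       # the 7^21 single-path search space

def random_path():
    return tuple(random.randrange(N_OPS) for _ in range(N_LAYERS))

def proxy_score(path):
    # Stand-in for evaluating a path with the supernet on a small surrogate
    # validation split; here just an arbitrary smooth function plus noise.
    return -sum((op - 3) ** 2 for op in path) + random.gauss(0.0, 1.0)

candidate_pool = []
for step in range(100):
    sampled = []
    for _ in range(10):
        # exploitation: occasionally resample a previously promising path
        if candidate_pool and random.random() < 0.2:
            sampled.append(random.choice(candidate_pool))
        else:
            sampled.append(random_path())
    sampled.sort(key=proxy_score, reverse=True)
    kept = sampled[:5]                        # reject the weaker half before training on it
    candidate_pool = (kept + candidate_pool)[:50]
    # a real implementation would now update the supernet weights on `kept`

print(len(candidate_pool))
```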
Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.", "field": ["Image Data Augmentation", "Initialization", "Output Functions", "Convolutional Neural Networks", "Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Fine-Grained Image Classification", "Image Classification", "Neural Architecture Search", "Transfer Learning"], "method": ["Depthwise Convolution", "Weight Decay", "Average Pooling", "EfficientNet", "RMSProp", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "ResNet", "AutoAugment", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Swish", "Batch Normalization", "Residual Network", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Kaiming Initialization", "Sigmoid Activation", "Inverted Residual Block", "Softmax", "Bottleneck Residual Block", "LSTM", "Depthwise Separable Convolution", "Dropout", "Stochastic Depth", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["FGVC Aircraft", "CIFAR-100", "CIFAR-10", "Oxford-IIIT Pets", "Flowers-102", "Food-101", "Stanford Cars", "ImageNet", "Birdsnap"], "metric": ["Number of params", "Top 1 Accuracy", "Percentage correct", "PARAMS", "Accuracy", "Top 5 Accuracy"], "title": "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks"} {"abstract": "The joint entity and relation extraction task aims to extract all relational triples from a sentence. In essence, the relational triples contained in a sentence are unordered. However, previous seq2seq based models require to convert the set of triples into a sequence in the training phase. To break this bottleneck, we treat joint entity and relation extraction as a direct set prediction problem, so that the extraction model can get rid of the burden of predicting the order of multiple triples. To solve this set prediction problem, we propose networks featured by transformers with non-autoregressive parallel decoding. Unlike autoregressive approaches that generate triples one by one in a certain order, the proposed networks directly output the final set of triples in one shot. Furthermore, we also design a set-based loss that forces unique predictions via bipartite matching. Compared with cross-entropy loss that highly penalizes small shifts in triple order, the proposed bipartite matching loss is invariant to any permutation of predictions; thus, it can provide the proposed networks with a more accurate training signal by ignoring triple order and focusing on relation types and entities. Experiments on two benchmark datasets show that our proposed model significantly outperforms current state-of-the-art methods. 
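The compound scaling rule summarized in the EfficientNet record above can be written in a few lines: depth, width and resolution grow as alpha^phi, beta^phi and gamma^phi under the constraint alpha * beta^2 * gamma^2 ~ 2. The coefficient values 1.2 / 1.1 / 1.15 below are the ones commonly reported for the B0-B7 family and are used here only for illustration.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15, base_resolution=224):
    """Return (depth multiplier, width multiplier, input resolution) for coefficient phi."""
    depth_mult = alpha ** phi
    width_mult = beta ** phi
    resolution = int(round(base_resolution * gamma ** phi))
    return depth_mult, width_mult, resolution

for phi in range(4):                       # phi = 0 corresponds to the baseline network
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution {r}")
```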
Training code and trained models will be available at http://github.com/DianboWork/SPN4RE.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Joint Entity and Relation Extraction", "Relation Extraction"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "Joint Entity and Relation Extraction with Set Prediction Networks"} {"abstract": "Graph Neural Networks (GNNs) have been popularly used for analyzing non-Euclidean data such as social network data and biological data. Despite their success, the design of graph neural networks requires a lot of manual work and domain knowledge. In this paper, we propose a Graph Neural Architecture Search method (GraphNAS for short) that enables automatic search of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation data set. Extensive experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that GraphNAS can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network. On node classification tasks, GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search", "Node Classification"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Cora", "Pubmed", "Citeseer", "PPI"], "metric": ["F1", "Accuracy"], "title": "GraphNAS: Graph Neural Architecture Search with Reinforcement Learning"} {"abstract": "Estimating depth from a single image represents an attractive alternative to more traditional approaches leveraging multiple cameras. In this field, deep learning yielded outstanding results at the cost of needing large amounts of data labeled with precise depth measurements for training. An issue softened by self-supervised approaches leveraging monocular sequences or stereo pairs in place of expensive ground truth depth annotations. This paper enables to further improve monocular depth estimation by integrating into existing self-supervised networks a geometrical prior. Specifically, we propose a sparsity-invariant autoencoder able to process the output of conventional visual odometry algorithms working in synergy with depth-from-mono networks. 
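The set prediction record above relies on a bipartite matching loss so that the order of predicted triples does not matter. Below is a minimal sketch of that matching step using the Hungarian algorithm from SciPy; the random cost matrix stands in for negative match scores between predicted and gold triples.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n_pred, n_gold = 5, 3
cost = np.random.rand(n_pred, n_gold)        # stand-in for -score(prediction i, gold triple j)

rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment (permutation invariant)
matched_cost = cost[rows, cols].mean()       # the loss is computed only on matched pairs
print(list(zip(rows.tolist(), cols.tolist())), round(float(matched_cost), 4))
```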
Experimental results on the KITTI dataset show that by exploiting the geometrical prior, our proposal: i) outperforms existing approaches in the literature and ii) couples well with both compact and complex depth-from-mono architectures, allowing for its deployment on high-end GPUs as well as on embedded devices (e.g., NVIDIA Jetson TX2).", "field": ["Generative Models"], "task": ["Depth And Camera Motion", "Depth Estimation", "Monocular Depth Estimation", "Visual Odometry"], "method": ["AutoEncoder"], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Enhancing self-supervised monocular depth estimation with traditional visual odometry"} {"abstract": "While modern machine translation has relied on large parallel corpora, a\nrecent line of work has managed to train Neural Machine Translation (NMT)\nsystems from monolingual corpora only (Artetxe et al., 2018c; Lample et al.,\n2018). Despite the potential of this approach for low-resource settings,\nexisting systems are far behind their supervised counterparts, limiting their\npractical interest. In this paper, we propose an alternative approach based on\nphrase-based Statistical Machine Translation (SMT) that significantly closes\nthe gap with supervised systems. Our method profits from the modular\narchitecture of SMT: we first induce a phrase table from monolingual corpora\nthrough cross-lingual embedding mappings, combine it with an n-gram language\nmodel, and fine-tune hyperparameters through an unsupervised MERT variant. In\naddition, iterative backtranslation improves results further, yielding, for\ninstance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and\nEnglish-French, respectively, an improvement of more than 7-10 BLEU points over\nprevious unsupervised systems, and closing the gap with supervised SMT (Moses\ntrained on Europarl) down to 2-5 BLEU points. Our implementation is available\nat https://github.com/artetxem/monoses", "field": ["Graph Embeddings"], "task": ["Language Modelling", "Machine Translation", "Unsupervised Machine Translation"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["WMT2014 German-English", "WMT2016 English-German", "WMT2014 French-English", "WMT2016 German-English", "WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU", "BLEU score"], "title": "Unsupervised Statistical Machine Translation"} {"abstract": "This paper proposes a deep learning application for efficient classification of amyotrophic lateral sclerosis (ALS) and normal Electromyogram (EMG) signals. EMG signals are helpful in analyzing of the neuromuscular diseases like ALS. ALS is a well-known brain disease, which progressively degenerates the motor neurons. Most of the previous works about EMG signal classification covers a dozen of basic signal processing methodologies such as statistical signal processing, wavelet analysis, and empirical mode decomposition (EMD). In this work, a different application is implemented which is based on time-frequency (TF) representation of EMG signals and convolutional neural networks (CNN). Short Time Fourier Transform (STFT) is considered for TF representation. Two convolution layers, two pooling layer, a fully connected layer and a lost function layer is considered in CNN architecture. The efficiency of the proposed implementation is tested on publicly available EMG dataset. The dataset contains 89 ALS and 133 normal EMG signals with 24 kHz sampling frequency. Experimental results show 96.69% accuracy. 
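The EMG record above turns raw signals into a time-frequency image with the short-time Fourier transform before feeding a small CNN. A minimal SciPy sketch of that preprocessing step follows; the 24 kHz sampling rate matches the dataset description in the abstract, while the synthetic signal and the window length are placeholders.

```python
import numpy as np
from scipy import signal

fs = 24_000                                             # sampling rate from the dataset description
t = np.arange(0.0, 1.0, 1.0 / fs)
emg = np.sin(2 * np.pi * 80 * t) * np.random.randn(t.size)   # toy amplitude-modulated noise

# STFT -> magnitude spectrogram; the resulting 2D image is what a small CNN would consume.
freqs, frames, Zxx = signal.stft(emg, fs=fs, nperseg=256)
spectrogram = np.abs(Zxx)
print(spectrogram.shape)                                # (frequency bins, time frames)
```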
The obtained results are also compared with other methods, which show the superiority of the proposed method.", "field": ["Convolutions"], "task": ["ALS Detection", "Electromyography (EMG)"], "method": ["Convolution"], "dataset": ["ALS EMG (University of Copenhagen)"], "metric": ["Accuracy"], "title": "DeepEMGNet: An Application for Efficient Discrimination of ALS and Normal EMG Signals"} {"abstract": "Adversarial learning methods are a promising approach to training robust deep\nnetworks, and can generate complex samples across diverse domains. They also\ncan improve recognition despite the presence of domain shift or dataset bias:\nseveral adversarial approaches to unsupervised domain adaptation have recently\nbeen introduced, which reduce the difference between the training and test\ndomain distributions and thus improve generalization performance. Prior\ngenerative approaches show compelling visualizations, but are not optimal on\ndiscriminative tasks and can be limited to smaller shifts. Prior discriminative\napproaches could handle larger domain shifts, but imposed tied weights on the\nmodel and did not exploit a GAN-based loss. We first outline a novel\ngeneralized framework for adversarial adaptation, which subsumes recent\nstate-of-the-art approaches as special cases, and we use this generalized view\nto better relate the prior approaches. We propose a previously unexplored\ninstance of our general framework which combines discriminative modeling,\nuntied weight sharing, and a GAN loss, which we call Adversarial Discriminative\nDomain Adaptation (ADDA). We show that ADDA is more effective yet considerably\nsimpler than competing domain-adversarial methods, and demonstrate the promise\nof our approach by exceeding state-of-the-art unsupervised adaptation results\non standard cross-domain digit classification tasks and a new more difficult\ncross-modality object classification task.", "field": ["Generative Models", "Convolutions"], "task": ["Domain Adaptation", "Object Classification", "Unsupervised Domain Adaptation", "Unsupervised Image-To-Image Translation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["SVNH-to-MNIST", "MNIST-to-USPS", "SVHN-to-MNIST"], "metric": ["Classification Accuracy", "Accuracy"], "title": "Adversarial Discriminative Domain Adaptation"} {"abstract": "This paper describes our solution for the video recognition task of\nActivityNet Kinetics challenge that ranked the 1st place. Most of existing\nstate-of-the-art video recognition approaches are in favor of an end-to-end\npipeline. One exception is the framework of DevNet. The merit of DevNet is that\nthey first use the video data to learn a network (i.e. fine-tuning or training\nfrom scratch). Instead of directly using the end-to-end classification scores\n(e.g. softmax scores), they extract the features from the learned network and\nthen fed them into the off-the-shelf machine learning models to conduct video\nclassification. However, the effectiveness of this line work has long-term been\nignored and underestimated. In this submission, we extensively use this\nstrategy. 
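To make the adversarial adaptation objective in the ADDA record above concrete, the sketch below shows the two alternating losses: the domain discriminator separates source from target features, while the target encoder is trained with inverted labels to fool it. The linear encoders and all dimensions are toy stand-ins, not the paper's networks.

```python
import torch
import torch.nn.functional as F

d_in, d_feat, batch = 8, 16, 32
source_encoder = torch.nn.Linear(d_in, d_feat)   # pretrained on source labels and kept fixed
target_encoder = torch.nn.Linear(d_in, d_feat)   # initialized from the source encoder
discriminator = torch.nn.Linear(d_feat, 1)       # domain classifier on top of the features

xs, xt = torch.randn(batch, d_in), torch.randn(batch, d_in)
fs = source_encoder(xs).detach()                 # source features (source encoder gets no gradient)
ft = target_encoder(xt)

# Discriminator step: label source features 1 and target features 0.
d_logits = torch.cat([discriminator(fs), discriminator(ft.detach())])
d_labels = torch.cat([torch.ones(batch, 1), torch.zeros(batch, 1)])
d_loss = F.binary_cross_entropy_with_logits(d_logits, d_labels)

# Target-encoder step: inverted (GAN-style) labels so target features look like source ones.
g_loss = F.binary_cross_entropy_with_logits(discriminator(ft), torch.ones(batch, 1))
print(round(d_loss.item(), 4), round(g_loss.item(), 4))
```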
Particularly, we investigate four temporal modeling approaches using\nthe learned features: Multi-group Shifting Attention Network, Temporal Xception\nNetwork, Multi-stream sequence Model and Fast-Forward Sequence Model.\nExperiment results on the challenging Kinetics dataset demonstrate that our\nproposed temporal modeling approaches can significantly improve existing\napproaches in the large-scale video recognition tasks. Most remarkably, our\nbest single Multi-group Shifting Attention Network can achieve 77.7% in term of\ntop-1 accuracy and 93.2% in term of top-5 accuracy on the validation set.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Graph Embeddings", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Classification", "Video Classification", "Video Recognition"], "method": ["ResNet", "LINE", "Average Pooling", "Global Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Large-scale Information Network Embedding", "Rectified Linear Units", "Max Pooling"], "dataset": ["Kinetics-400"], "metric": ["Vid acc@5", "Vid acc@1"], "title": "Revisiting the Effectiveness of Off-the-shelf Temporal Modeling Approaches for Large-scale Video Classification"} {"abstract": "Object detection generally requires sliding-window classifiers in tradition or anchor box based predictions in modern deep learning approaches. However, either of these approaches requires tedious configurations in boxes. In this paper, we provide a new perspective where detecting objects is motivated as a high-level semantic feature detection task. Like edges, corners, blobs and other feature detectors, the proposed detector scans for feature points all over the image, for which the convolution is naturally suited. However, unlike these traditional low-level features, the proposed detector goes for a higher-level abstraction, that is, we are looking for central points where there are objects, and modern deep models are already capable of such a high-level semantic abstraction. Besides, like blob detection, we also predict the scales of the central points, which is also a straightforward convolution. Therefore, in this paper, pedestrian and face detection is simplified as a straightforward center and scale prediction task through convolutions. This way, the proposed method enjoys a box-free setting. Though structurally simple, it presents competitive accuracy on several challenging benchmarks, including pedestrian detection and face detection. 
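The box-free detection record above treats pedestrian and face detection as predicting a center heatmap plus a scale map. The snippet below builds such training targets for a toy image: each center contributes a Gaussian bump and a log-scale value at its pixel. The image size, the centers and the sigma heuristic are illustrative, not the paper's exact recipe.

```python
import numpy as np

H, W = 96, 128
objects = [(30, 40, 24.0), (60, 90, 48.0)]      # (row, col, object height), synthetic

center_heatmap = np.zeros((H, W))
scale_map = np.zeros((H, W))
rows, cols = np.mgrid[0:H, 0:W]
for r, c, height in objects:
    sigma = height / 8.0                         # spread tied to object size (a common heuristic)
    bump = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
    center_heatmap = np.maximum(center_heatmap, bump)   # keep the max where bumps overlap
    scale_map[r, c] = np.log(height)             # regression target only at the center pixel

print(center_heatmap.max(), scale_map[30, 40])
```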
Furthermore, a cross-dataset evaluation is performed, demonstrating a superior generalization ability of the proposed method", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Face Detection", "Object Detection", "Pedestrian Detection"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CityPersons", "Caltech"], "metric": ["Medium MR^-2", "Small MR^-2", "Reasonable MR^-2", "Test Time", "Large MR^-2", "Heavy MR^-2", "Reasonable Miss Rate", "Partial MR^-2", "Bare MR^-2"], "title": "Center and Scale Prediction: A Box-free Approach for Pedestrian and Face Detection"} {"abstract": "Generalizing over temporal variations is a prerequisite for effective action recognition in videos. Despite significant advances in deep neural networks, it remains a challenge to focus on short-term discriminative motions in relation to the overall performance of an action. We address this challenge by allowing some flexibility in discovering relevant spatio-temporal features. We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors inputs with similar activations with potential temporal variations. We implement this idea with a novel CNN block that uses an LSTM to encapsulate feature dynamics, in conjunction with a temporal gate that is responsible for evaluating the consistency of the discovered dynamics and the modeled features. We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs. On Kinetics-700, we perform on par with current state-of-the-art models, and outperform these on HACS, Moments in Time, UCF-101 and HMDB-51.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Video Classification"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["HACS", "Moments in Time"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Learn to cycle: Time-consistent feature discovery for action recognition"} {"abstract": "We present Matrix Nets (xNets), a new deep architecture for object detection. xNets map objects with different sizes and aspect ratios into layers where the sizes and the aspect ratios of the objects within their layers are nearly uniform. Hence, xNets provide a scale and aspect ratio aware architecture. We leverage xNets to enhance key-points based object detection. 
Our architecture achieves mAP of 47.8 on MS COCO, which is higher than any other single-shot detector while using half the number of parameters and training 3x faster than the next best architecture.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Matrix Nets: A New Deep Architecture for Object Detection"} {"abstract": "Understanding expressed sentiment and emotions are two crucial factors in human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the task of Emotion Recognition and Sentiment Analysis. In addition to use the Transformer architecture, our approach relies on a modular co-attention and a glimpse layer to jointly encode one or more modalities. The proposed solution has also been submitted to the ACL20: Second Grand-Challenge on Multimodal Language to be evaluated on the CMU-MOSEI dataset. The code to replicate the presented experiments is open-source: https://github.com/jbdel/MOSEI_UMONS.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Recognition", "Multimodal Sentiment Analysis", "Sentiment Analysis"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["CMU-MOSEI"], "metric": ["Accuracy"], "title": "A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis"} {"abstract": "With the rapid advances of autonomous driving, it becomes critical to equip its sensing system with more holistic 3D perception. However, existing works focus on parsing either the objects (e.g. cars and pedestrians) or scenes (e.g. trees and buildings) from the LiDAR sensor. In this work, we address the task of LiDAR-based panoptic segmentation, which aims to parse both objects and scenes in a unified manner. As one of the first endeavors towards this new challenging task, we propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm. In particular, DS-Net has three appealing properties: 1) strong backbone design. DS-Net adopts the cylinder convolution that is specifically designed for LiDAR point clouds. The extracted features are shared by the semantic branch and the instance branch which operates in a bottom-up clustering style. 2) Dynamic Shifting for complex point distributions. We observe that commonly-used clustering algorithms like BFS or DBSCAN are incapable of handling complex autonomous driving scenes with non-uniform point cloud distributions and varying instance sizes. 
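The multimodal record above jointly encodes modalities with modular co-attention. As a rough, hedged sketch of that idea (a single block only, arbitrary sizes), the snippet below lets a text stream attend over an audio stream and vice versa using PyTorch's standard multi-head attention.

```python
import torch

d_model, n_heads = 64, 4
text = torch.randn(2, 20, d_model)        # (batch, text tokens, features), synthetic
audio = torch.randn(2, 50, d_model)       # (batch, audio frames, features), synthetic

text_to_audio = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)
audio_to_text = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)

# Each stream queries the other one (co-attention); a full block would add the usual
# residual connections, layer normalization and feed-forward sublayers around these calls.
text_ctx, _ = text_to_audio(query=text, key=audio, value=audio)
audio_ctx, _ = audio_to_text(query=audio, key=text, value=text)
print(text_ctx.shape, audio_ctx.shape)
```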
Thus, we present an efficient learnable clustering module, dynamic shifting, which adapts kernel functions on-the-fly for different instances. 3) Consensus-driven Fusion. Finally, consensus-driven fusion is used to deal with the disagreement between semantic and instance predictions. To comprehensively evaluate the performance of LiDAR-based panoptic segmentation, we construct and curate benchmarks from two large-scale autonomous driving LiDAR datasets, SemanticKITTI and nuScenes. Extensive experiments demonstrate that our proposed DS-Net achieves superior accuracies over current state-of-the-art methods. Notably, we achieve 1st place on the public leaderboard of SemanticKITTI, outperforming 2nd place by 2.6% in terms of the PQ metric.", "field": ["Convolutions"], "task": ["Autonomous Driving", "Panoptic Segmentation"], "method": ["Convolution"], "dataset": ["SemanticKITTI"], "metric": ["PQst", "RQ", "SQst", "PQ_dagger", "RQth", "RQst", "SQth", "PQth", "mIoU", "PQ", "SQ"], "title": "LiDAR-based Panoptic Segmentation via Dynamic Shifting Network"} {"abstract": "This paper presents a region-partition based attraction field dual\nrepresentation for line segment maps, and thus poses the problem of line\nsegment detection (LSD) as the region coloring problem. The latter is then\naddressed by learning deep convolutional neural networks (ConvNets) for\naccuracy, robustness and efficiency. For a 2D line segment map, our dual\nrepresentation consists of three components: (i) A region-partition map in\nwhich every pixel is assigned to one and only one line segment; (ii) An\nattraction field map in which every pixel in a partition region is encoded by\nits 2D projection vector w.r.t. the associated line segment; and (iii) A\nsqueeze module which squashes the attraction field to a line segment map that\nalmost perfectly recovers the input one. By leveraging the duality, we learn\nConvNets to compute the attraction field maps for raw in-put images, followed\nby the squeeze module for LSD, in an end-to-end manner. Our method rigorously\naddresses several challenges in LSD such as local ambiguity and class\nimbalance. Our method also harnesses the best practices developed in ConvNets\nbased semantic segmentation methods such as the encoder-decoder architecture\nand the a-trous convolution. In experiments, our method is tested on the\nWireFrame dataset and the YorkUrban dataset with state-of-the-art performance\nobtained. Especially, we advance the performance by 4.5 percents on the\nWireFrame dataset. Our method is also fast with 6.6~10.4 FPS, outperforming\nmost of existing line segment detectors.", "field": ["Graph Embeddings"], "task": ["Line Segment Detection", "Semantic Segmentation"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["York Urban Dataset", "wireframe dataset"], "metric": ["sAP15", "sAP10", "F1 score", "sAP5"], "title": "Learning Attraction Field Representation for Robust Line Segment Detection"} {"abstract": "Robust face representation is imperative to highly accurate face recognition.\nIn this work, we propose an open source face recognition method with deep\nrepresentation named as VIPLFaceNet, which is a 10-layer deep convolutional\nneural network with 7 convolutional layers and 3 fully-connected layers.\nCompared with the well-known AlexNet, our VIPLFaceNet takes only 20% training\ntime and 60% testing time, but achieves 40\\% drop in error rate on the\nreal-world face recognition benchmark LFW. 
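The dynamic shifting module described in the panoptic segmentation record above learns how to shift points toward instance centers. To show the non-learned operation it builds on, here is a plain mean-shift iteration with a fixed flat kernel on 2-D toy points; DS-Net instead adapts the kernel per instance on the fly.

```python
import numpy as np

rng = np.random.default_rng(0)
points = np.concatenate([rng.normal(0.0, 0.3, (50, 2)),     # two synthetic instances
                         rng.normal(3.0, 0.3, (60, 2))])

def mean_shift_step(pts, bandwidth=1.0):
    shifted = np.empty_like(pts)
    for i, p in enumerate(pts):
        neighbours = pts[np.linalg.norm(pts - p, axis=1) < bandwidth]
        shifted[i] = neighbours.mean(axis=0)          # shift each point toward its local mean
    return shifted

x = points
for _ in range(10):                                   # points converge onto the two cluster modes
    x = mean_shift_step(x)
print(np.unique(np.round(x, 2), axis=0))
```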
Our VIPLFaceNet achieves 98.60% mean\naccuracy on LFW using one single network. An open-source C++ SDK based on\nVIPLFaceNet is released under BSD license. The SDK takes about 150ms to process\none face image in a single thread on an i7 desktop CPU. VIPLFaceNet provides a\nstate-of-the-art start point for both academic and industrial face recognition\napplications.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Face Recognition"], "method": ["Grouped Convolution", "Softmax", "Convolution", "1x1 Convolution", "ReLU", "Rectified Linear Units", "AlexNet", "Dropout", "Dense Connections", "Local Response Normalization", "Max Pooling"], "dataset": ["Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "VIPLFaceNet: An Open Source Deep Face Recognition SDK"} {"abstract": "Since the person re-identification task often suffers from the problem of pose changes and occlusions, some attentive local features are often suppressed when training CNNs. In this paper, we propose the Batch DropBlock (BDB) Network which is a two branch network composed of a conventional ResNet-50 as the global branch and a feature dropping branch. The global branch encodes the global salient representations. Meanwhile, the feature dropping branch consists of an attentive feature learning module called Batch DropBlock, which randomly drops the same region of all input feature maps in a batch to reinforce the attentive feature learning of local regions. The network then concatenates features from both branches and provides a more comprehensive and spatially distributed feature representation. Albeit simple, our method achieves state-of-the-art on person re-identification and it is also applicable to general metric learning tasks. For instance, we achieve 76.4% Rank-1 accuracy on the CUHK03-Detect dataset and 83.0% Recall-1 score on the Stanford Online Products dataset, outperforming the existing works by a large margin (more than 6%).", "field": ["Regularization"], "task": ["Image Retrieval", "Metric Learning", "Person Re-Identification"], "method": ["DropBlock"], "dataset": ["CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Batch DropBlock Network for Person Re-identification and Beyond"} {"abstract": "Single-person human pose estimation facilitates markerless movement analysis in sports, as well as in clinical applications. Still, state-of-the-art models for human pose estimation generally do not meet the requirements of real-life applications. The proliferation of deep learning techniques has resulted in the development of many advanced approaches. However, with the progresses in the field, more complex and inefficient models have also been introduced, which have caused tremendous increases in computational demands. To cope with these complexity and inefficiency challenges, we propose a novel convolutional neural network architecture, called EfficientPose, which exploits recently proposed EfficientNets in order to deliver efficient and scalable single-person pose estimation. EfficientPose is a family of models harnessing an effective multi-scale feature extractor and computationally efficient detection blocks using mobile inverted bottleneck convolutions, while at the same time ensuring that the precision of the pose configurations is still improved. 
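The feature-dropping branch in the Batch DropBlock record above zeroes the same spatial region for every feature map in a batch. A minimal PyTorch version of that operation is sketched below; the drop ratios and the feature-map shape are only indicative of the re-ID setting.

```python
import torch

def batch_dropblock(x, drop_height_ratio=0.3, drop_width_ratio=1.0):
    """Zero one randomly placed block, shared across the whole batch and all channels."""
    n, c, h, w = x.shape
    dh = max(1, int(h * drop_height_ratio))
    dw = max(1, int(w * drop_width_ratio))
    top = torch.randint(0, h - dh + 1, (1,)).item()
    left = torch.randint(0, w - dw + 1, (1,)).item()
    mask = torch.ones(1, 1, h, w, device=x.device, dtype=x.dtype)
    mask[:, :, top:top + dh, left:left + dw] = 0.0
    return x * mask

features = torch.randn(8, 256, 24, 8)                 # (batch, channels, height, width)
dropped = batch_dropblock(features)
print(dropped.eq(0).float().mean().item())            # fraction of zeroed activations
```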
Due to its low complexity and efficiency, EfficientPose enables real-world applications on edge devices by limiting the memory footprint and computational cost. The results from our experiments, using the challenging MPII single-person benchmark, show that the proposed EfficientPose models substantially outperform the widely-used OpenPose model both in terms of accuracy and computational efficiency. In particular, our top-performing model achieves state-of-the-art accuracy on single-person MPII, with low-complexity ConvNets.", "field": ["Output Functions", "Image Representations", "Stochastic Optimization", "Feature Extractors", "Regularization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Pose Estimation"], "method": ["Depthwise Convolution", "Average Pooling", "EfficientNet", "RMSProp", "1x1 Convolution", "Cross-resolution features", "Low-resolution input", "E-MBConv", "E-swish", "High-resolution input", "Convolution", "ReLU", "Mobile DenseNet", "PAFs", "Dense Connections", "Swish", "Batch Normalization", "Low-level backbone", "Pointwise Convolution", "Squeeze-and-Excitation Block", "Part Affinity Fields", "Sigmoid Activation", "Heatmap", "Inverted Residual Block", "Transposed convolution", "Dropout", "Depthwise Separable Convolution", "Rectified Linear Units", "High-level backbone"], "dataset": ["MPII Single Person", "MPII Human Pose"], "metric": ["PCKh@0.1", "PCKh-0.5", "PCKh@0.5"], "title": "EfficientPose: Scalable single-person pose estimation"} {"abstract": "Attention-based long short-term memory (LSTM) networks have proven to be\nuseful in aspect-level sentiment classification. However, due to the\ndifficulties in annotating aspect-level data, existing public datasets for this\ntask are all relatively small, which largely limits the effectiveness of those\nneural models. In this paper, we explore two approaches that transfer knowledge\nfrom document- level data, which is much less expensive to obtain, to improve\nthe performance of aspect-level sentiment classification. We demonstrate the\neffectiveness of our approaches on 4 public datasets from SemEval 2014, 2015,\nand 2016, and we show that attention-based LSTM benefits from document-level\nknowledge in multiple ways.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Exploiting Document Knowledge for Aspect-level Sentiment Classification"} {"abstract": "Sentiment classification is an important process in understanding people's perception towards a product, service, or topic. Many natural language processing models have been proposed to solve the sentiment classification problem. However, most of them have focused on binary sentiment classification. In this paper, we use a promising deep learning model called BERT to solve the fine-grained sentiment classification task. Experiments show that our model outperforms other popular models for this task without sophisticated architecture. 
We also demonstrate the effectiveness of transfer learning in natural language processing in the process.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Sentiment Analysis", "Transfer Learning"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SST-2 Binary classification", "SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "Fine-grained Sentiment Classification using BERT"} {"abstract": "Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.", "field": ["Image Data Augmentation", "Image Scaling Strategies", "Initialization", "Convolutional Neural Networks", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Few-Shot Learning", "Fine-Grained Image Classification", "Image Classification", "Representation Learning"], "method": ["Average Pooling", "Weight Standardization", "Mixup", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Random Resized Crop", "FixRes", "Residual Network", "Kaiming Initialization", "SGD with Momentum", "Group Normalization", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ObjectNet", "VTAB-1k", "CIFAR-100", "Oxford 102 Flowers", "CIFAR-10", "Oxford-IIIT Pets", "ImageNet ReaL", "Flowers-102", "ObjectNet (Bounding Box)", "ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "Percentage correct", "Top-1 Accuracy", "Params", "Top-1 Error Rate", "Accuracy", "Top 5 Accuracy"], "title": "Big Transfer (BiT): General Visual Representation Learning"} {"abstract": "Heatmap regression has been used for landmark localization for quite a while\nnow. Most of the methods use a very deep stack of bottleneck modules for\nheatmap classification stage, followed by heatmap regression to extract the\nkeypoints. 
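Two of the components listed for the BiT models above are Weight Standardization and Group Normalization. The sketch below applies both around a single convolution to show what they compute; it is not the BiT code, and the group count and tensor shapes are arbitrary.

```python
import torch
import torch.nn.functional as F

def standardize(weight, eps=1e-5):
    """Standardize conv weights per output channel: zero mean, unit variance."""
    mean = weight.mean(dim=(1, 2, 3), keepdim=True)
    var = weight.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
    return (weight - mean) / torch.sqrt(var + eps)

conv = torch.nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
x = torch.randn(4, 64, 32, 32)
y = F.conv2d(x, standardize(conv.weight), padding=1)   # convolution with standardized weights
y = F.group_norm(y, num_groups=32)                     # GroupNorm instead of BatchNorm
print(y.shape)
```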
In this paper, we present a single dendritic CNN, termed as Pose\nConditioned Dendritic Convolution Neural Network (PCD-CNN), where a\nclassification network is followed by a second and modular classification\nnetwork, trained in an end to end fashion to obtain accurate landmark points.\nFollowing a Bayesian formulation, we disentangle the 3D pose of a face image\nexplicitly by conditioning the landmark estimation on pose, making it different\nfrom multi-tasking approaches. Extensive experimentation shows that\nconditioning on pose reduces the localization error by making it agnostic to\nface pose. The proposed model can be extended to yield variable number of\nlandmark points and hence broadening its applicability to other datasets.\nInstead of increasing depth or width of the network, we train the CNN\nefficiently with Mask-Softmax Loss and hard sample mining to achieve upto\n$15\\%$ reduction in error compared to state-of-the-art methods for extreme and\nmedium pose face images from challenging datasets including AFLW, AFW, COFW and\nIBUG.", "field": ["Convolutions", "Output Functions"], "task": ["Face Alignment", "Regression"], "method": ["Heatmap", "Convolution"], "dataset": ["COFW"], "metric": ["Mean Error Rate"], "title": "Disentangling 3D Pose in A Dendritic CNN for Unconstrained 2D Face Alignment"} {"abstract": "Recognizing named entities in a document is a key task in many NLP applications. Although current state-of-the-art approaches to this task reach a high performance on clean text (e.g. newswire genres), those algorithms dramatically degrade when they are moved to noisy environments such as social media domains. We present two systems that address the challenges of processing social media data using character-level phonetics and phonology, word embeddings, and Part-of-Speech tags as features. The first model is a multitask end-to-end Bidirectional Long Short-Term Memory (BLSTM)-Conditional Random Field (CRF) network whose output layer contains two CRF classifiers. The second model uses a multitask BLSTM network as feature extractor that transfers the learning to a CRF classifier for the final prediction. Our systems outperform the current F1 scores of the state of the art on the Workshop on Noisy User-generated Text 2017 dataset by 2.45% and 3.69%, establishing a more suitable approach for social media environments.", "field": ["Structured Prediction"], "task": ["Word Embeddings"], "method": ["Conditional Random Field", "CRF"], "dataset": ["Long-tail emerging entities"], "metric": ["F1 (surface form)", "F1"], "title": "Modeling Noisiness to Recognize Named Entities using Multitask Neural Networks on Social Media"} {"abstract": "Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important, and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works, and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. 
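The landmark record above follows the usual heatmap route: a network predicts one heatmap per keypoint and the coordinate is read off afterwards. The snippet below contrasts the plain argmax with a soft-argmax (probability-weighted coordinates) on a synthetic Gaussian heatmap; it is a generic decoding step, not the PCD-CNN head.

```python
import numpy as np

H, W = 64, 64
rows, cols = np.mgrid[0:H, 0:W]
heatmap = np.exp(-((rows - 20.3) ** 2 + (cols - 41.7) ** 2) / (2.0 * 2.0 ** 2))   # synthetic

hard = np.unravel_index(heatmap.argmax(), heatmap.shape)      # integer-pixel landmark
p = heatmap / heatmap.sum()
soft = (float((p * rows).sum()), float((p * cols).sum()))     # sub-pixel landmark
print(hard, tuple(round(v, 2) for v in soft))
```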
Experimental evaluation shows that we outperform state-of-the-art methods by large margins, in particular +26.6% on CIFAR10, +25.0% on CIFAR100-20 and +21.3% on STL10 in terms of classification accuracy. Furthermore, our method is the first to perform well on a large-scale dataset for image classification. In particular, we obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime without the use of any ground-truth annotations. The code is made publicly available at https://github.com/wvangansbeke/Unsupervised-Classification.", "field": ["Self-Supervised Learning", "Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Clustering", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Image Clustering", "Representation Learning", "Semi-Supervised Image Classification", "Unsupervised Image Classification"], "method": ["InfoNCE", "Average Pooling", "MoCo v2", "1x1 Convolution", "Normalized Temperature-scaled Cross Entropy Loss", "SCAN-clustering", "ResNet", "Convolution", "ReLU", "SimCLR", "Residual Connection", "Dense Connections", "Feedforward Network", "Random Resized Crop", "Batch Normalization", "Semantic Clustering by Adopting Nearest Neighbours", "Residual Network", "ColorJitter", "Kaiming Initialization", "k-Means Clustering", "Color Jitter", "NT-Xent", "Random Gaussian Blur", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet - 1% labeled data", "CIFAR-100", "CIFAR-10", "STL-10", "CIFAR-20", "ImageNet"], "metric": ["Train set", "Train Split", "ARI", "Top 1 Accuracy", "Backbone", "Accuracy (%)", "Train Set", "NMI", "Accuracy", "Top 5 Accuracy"], "title": "SCAN: Learning to Classify Images without Labels"} {"abstract": "Attention mechanism has been shown to be effective for person re-identification (Re-ID). However, the learned attentive feature embeddings which are often not naturally diverse nor uncorrelated, will compromise the retrieval performance based on the Euclidean distance. We advocate that enforcing diversity could greatly complement the power of attention. To this end, we propose an Attentive but Diverse Network (ABD-Net), which seamlessly integrates attention modules and diversity regularization throughout the entire network, to learn features that are representative, robust, and more discriminative. Specifically, we introduce a pair of complementary attention modules, focusing on channel aggregation and position awareness, respectively. Furthermore, a new efficient form of orthogonality constraint is derived to enforce orthogonality on both hidden activations and weights. Through careful ablation studies, we verify that the proposed attentive and diverse terms each contributes to the performance gains of ABD-Net. 
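The ABD-Net record above enforces orthogonality on weights and activations to encourage diverse features. The paper derives a more efficient spectral form; for illustration only, the snippet below shows the textbook soft orthogonality penalty ||W W^T - I||_F^2 on a linear layer's weights, which captures the same intent.

```python
import torch

def soft_orthogonality(W):
    """Frobenius-norm penalty pushing the rows of W toward an orthonormal set."""
    gram = W @ W.t()
    identity = torch.eye(W.size(0), device=W.device)
    return ((gram - identity) ** 2).sum()

layer = torch.nn.Linear(512, 256)                 # weight shape: (256, 512)
penalty = soft_orthogonality(layer.weight)
# total_loss = task_loss + lambda_ortho * penalty   (lambda_ortho is a tuning knob)
print(penalty.item())
```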
On three popular benchmarks, ABD-Net consistently outperforms existing state-of-the-art methods.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Person Re-Identification"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["MSMT17", "Market-1501", "DukeMTMC-reID"], "metric": ["Rank-1", "mAP", "MAP"], "title": "ABD-Net: Attentive but Diverse Person Re-Identification"} {"abstract": "Machine lipreading is a special type of automatic speech recognition (ASR)\nwhich transcribes human speech by visually interpreting the movement of related\nface regions including lips, face, and tongue. Recently, deep neural network\nbased lipreading methods show great potential and have exceeded the accuracy of\nexperienced human lipreaders in some benchmark datasets. However, lipreading is\nstill far from being solved, and existing methods tend to have high error rates\non the wild data. In this paper, we propose LCANet, an end-to-end deep neural\nnetwork based lipreading system. LCANet encodes input video frames using a\nstacked 3D convolutional neural network (CNN), highway network and\nbidirectional GRU network. The encoder effectively captures both short-term and\nlong-term spatio-temporal information. More importantly, LCANet incorporates a\ncascaded attention-CTC decoder to generate output texts. By cascading CTC with\nattention, it partially eliminates the defect of the conditional independence\nassumption of CTC within the hidden neural layers, and this yields notably\nperformance improvement as well as faster convergence. The experimental results\nshow the proposed system achieves a 1.3% CER and 3.0% WER on the GRID corpus\ndatabase, leading to a 12.3% improvement compared to the state-of-the-art\nmethods.", "field": ["Recurrent Neural Networks", "Activation Functions", "Feedforward Networks", "Miscellaneous Components"], "task": ["Lipreading", "Speech Recognition"], "method": ["Gated Recurrent Unit", "Highway Network", "Highway Layer", "GRU", "Sigmoid Activation"], "dataset": ["GRID corpus (mixed-speech)"], "metric": ["Word Error Rate (WER)"], "title": "LCANet: End-to-End Lipreading with Cascaded Attention-CTC"} {"abstract": "Semantic segmentation of 3D meshes is an important problem for 3D scene understanding. In this paper we revisit the classic multiview representation of 3D meshes and study several techniques that make them effective for 3D semantic segmentation of meshes. Given a 3D mesh reconstructed from RGBD sensors, our method effectively chooses different virtual views of the 3D mesh and renders multiple 2D channels for training an effective 2D semantic segmentation model. Features from multiple per view predictions are finally fused on 3D mesh vertices to predict mesh semantic segmentation labels. Using the large scale indoor 3D semantic segmentation benchmark of ScanNet, we show that our virtual views enable more effective training of 2D semantic segmentation networks than previous multiview approaches. 
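The lipreading record above stacks a 3D CNN, a highway network and a bidirectional GRU in its encoder. As a reminder of what the highway component computes, here is a single highway layer in PyTorch with arbitrary dimensions; the gate mixes the transformed signal with the untouched input.

```python
import torch

class Highway(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = torch.nn.Linear(dim, dim)
        self.gate = torch.nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.transform(x))         # candidate transformation H(x)
        t = torch.sigmoid(self.gate(x))           # transform gate T(x) in [0, 1]
        return t * h + (1.0 - t) * x              # carry the rest of x through unchanged

x = torch.randn(4, 128)
print(Highway(128)(x).shape)
```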
When the 2D per pixel predictions are aggregated on 3D surfaces, our virtual multiview fusion method is able to achieve significantly better 3D semantic segmentation results compared to all prior multiview approaches and competitive with recent 3D convolution approaches.", "field": ["Convolutions"], "task": ["3D Semantic Segmentation", "Scene Understanding", "Semantic Segmentation"], "method": ["3D Convolution", "Convolution"], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "Virtual Multi-view Fusion for 3D Semantic Segmentation"} {"abstract": "Recent efforts show that neural networks are vulnerable to small but intentional perturbations on input features in visual classification tasks. Due to the additional consideration of connections between examples (\\eg articles with citation link tend to be in the same class), graph neural networks could be more sensitive to the perturbations, since the perturbations from connected examples exacerbate the impact on a target example. Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice to improve model robustness and generalization. However, existing AT methods focus on standard classification, being less effective when training models on graph since it does not model the impact from connected examples. In this work, we explore adversarial training on graph, aiming to improve the robustness and generalization of models learned on graph. We propose Graph Adversarial Training (GraphAT), which takes the impact from connected examples into account when learning to construct and resist perturbations. We give a general formulation of GraphAT, which can be seen as a dynamic regularization scheme based on the graph structure. To demonstrate the utility of GraphAT, we employ it on a state-of-the-art graph neural network model --- Graph Convolutional Network (GCN). We conduct experiments on two citation graphs (Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness of GraphAT which outperforms normal training on GCN by 4.51% in node classification accuracy. Codes are available via: https://github.com/fulifeng/GraphAT.", "field": ["Graph Models"], "task": ["Node Classification"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["Cora", "Citeseer", "NELL"], "metric": ["Accuracy"], "title": "Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure"} {"abstract": "The paper describes the development process of the Tilde{'}s NMT systems that were submitted for the WMT 2018 shared task on news translation. We describe the data filtering and pre-processing workflows, the NMT system training architectures, and automatic evaluation results. For the WMT 2018 shared task, we submitted seven systems (both constrained and unconstrained) for English-Estonian and Estonian-English translation directions. 
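Since the GraphAT record above regularizes a standard GCN, it may help to recall the GCN propagation rule itself, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W). The toy 4-node graph and feature sizes below are arbitrary; graph adversarial training perturbs the inputs to exactly this forward pass.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)          # toy undirected adjacency matrix
A_hat = A + np.eye(4)                              # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
norm_adj = d_inv_sqrt @ A_hat @ d_inv_sqrt         # symmetric normalization

H = np.random.randn(4, 8)                          # node features
W = np.random.randn(8, 16)                         # layer weights
H_next = np.maximum(norm_adj @ H @ W, 0.0)         # ReLU(A_hat_norm H W): one GCN layer
print(H_next.shape)
```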
The submitted systems were trained using Transformer models.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT 2018 English-Estonian", "WMT 2018 Estonian-English"], "metric": ["BLEU"], "title": "Tilde's Machine Translation Systems for WMT 2018"} {"abstract": "Existing deep learning approaches on 3d human pose estimation for videos are either based on Recurrent or Convolutional Neural Networks (RNNs or CNNs). However, RNN-based frameworks can only tackle sequences with limited frames because sequential models are sensitive to bad frames and tend to drift over long sequences. Although existing CNN-based temporal frameworks attempt to address the sensitivity and drift problems by concurrently processing all input frames in the sequence, the existing state-of-the-art CNN-based framework is limited to 3d pose estimation of a single frame from a sequential input. In this paper, we propose a deep learning-based framework that utilizes matrix factorization for sequential 3d human poses estimation. Our approach processes all input frames concurrently to avoid the sensitivity and drift problems, and yet outputs the 3d pose estimates for every frame in the input sequence. More specifically, the 3d poses in all frames are represented as a motion matrix factorized into a trajectory bases matrix and a trajectory coefficient matrix. The trajectory bases matrix is precomputed from matrix factorization approaches such as Singular Value Decomposition (SVD) or Discrete Cosine Transform (DCT), and the problem of sequential 3d pose estimation is reduced to training a deep network to regress the trajectory coefficient matrix. We demonstrate the effectiveness of our framework on long sequences by achieving state-of-the-art performances on multiple benchmark datasets. Our source code is available at: https://github.com/jiahaoLjh/trajectory-pose-3d.", "field": ["Fourier-related Transforms"], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": ["Discrete Cosine Transform"], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "Trajectory Space Factorization for Deep Video-Based 3D Human Pose Estimation"} {"abstract": "In the last few years, large improvements in image clustering have been driven by the recent advances in deep learning. However, due to the architectural complexity of deep neural networks, there is no mathematical theory that explains the success of deep clustering techniques. In this work we introduce Projected-Scattering Spectral Clustering (PSSC), a state-of-the-art, stable, and fast algorithm for image clustering, which is also mathematically interpretable. PSSC includes a novel method to exploit the geometric structure of the scattering transform of small images. 
This method is inspired by the observation that, in the scattering transform domain, the subspaces formed by the eigenvectors corresponding to the few largest eigenvalues of the data matrices of individual classes are nearly shared among different classes. Therefore, projecting out those shared subspaces reduces the intra-class variability, substantially increasing the clustering performance. We call this method Projection onto Orthogonal Complement (POC). Our experiments demonstrate that PSSC obtains the best results among all shallow clustering algorithms. Moreover, it achieves comparable clustering performance to that of recent state-of-the-art clustering techniques, while reducing the execution time by more than one order of magnitude. In the spirit of reproducible research, we publish a high quality code repository along with the paper.", "field": ["Image Representations"], "task": ["Deep Clustering", "Image Clustering"], "method": ["ScatNet", "Scattering Transform"], "dataset": ["MNIST-test", "USPS", "Fashion-MNIST", "MNIST-full"], "metric": ["NMI", "Accuracy"], "title": "Scattering Transform Based Image Clustering using Projection onto Orthogonal Complement"} {"abstract": "Person re-identification (re-ID) aims at identifying the same persons' images across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored. Such noisy pseudo labels substantially hinders the model's capability on further improving feature representations on the target domain. In order to mitigate the effects of noisy pseudo labels, we propose to softly refine the pseudo labels in the target domain by proposing an unsupervised framework, Mutual Mean-Teaching (MMT), to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternative training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models. However, conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks. 
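A minimal sketch of the Projection onto Orthogonal Complement (POC) step described in the scattering-transform clustering record above: the component of every feature vector lying in a shared leading-eigenvector subspace is removed. The random data, the choice of a single eigenvector, and the function name are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def project_onto_orthogonal_complement(X, V):
    """X: (n, d) features; V: (d, k) orthonormal basis of the subspace to remove."""
    return X - (X @ V) @ V.T

X = np.random.randn(100, 16)
Xc = X - X.mean(axis=0)
# top eigenvector of the data covariance as a stand-in for the shared subspace
V = np.linalg.svd(Xc.T @ Xc)[0][:, :1]
X_poc = project_onto_orthogonal_complement(X, V)
print(np.abs(X_poc @ V).max())   # ~0: no component left along the removed direction
```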
Code is available at https://github.com/yxgeee/MMT.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Person Re-Identification", "Unsupervised Domain Adaptation", "Unsupervised Person Re-Identification"], "method": ["ResNet", "Average Pooling", "Triplet Loss", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Market to Duke", "Market to MSMT", "Market-1501->MSMT17", "DukeMTMC-reID->MSMT17", "DukeMTMC-reID->Market-1501", "Duke to MSMT", "Market-1501->DukeMTMC-reID", "Duke to Market"], "metric": ["rank-10", "mAP", "Rank-10", "Rank-1", "Top-1 (%)", "rank-1", "Rank-5", "rank-5"], "title": "Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification"} {"abstract": "Recent literature has shown that features obtained from supervised training of CNNs may over-emphasize texture rather than encoding high-level information. In self-supervised learning in particular, texture as a low-level cue may provide shortcuts that prevent the network from learning higher level representations. To address these problems we propose to use classic methods based on anisotropic diffusion to augment training using images with suppressed texture. This simple method helps retain important edge information and suppress texture at the same time. We empirically show that our method achieves state-of-the-art results on object detection and image classification with eight diverse datasets in either supervised or self-supervised learning tasks such as MoCoV2 and Jigsaw. Our method is particularly effective for transfer learning tasks and we observed improved performance on five standard transfer learning datasets. The large improvements (up to 11.49\\%) on the Sketch-ImageNet dataset, DTD dataset and additional visual analyses with saliency maps suggest that our approach helps in learning better representations that better transfer.", "field": ["Self-Supervised Learning", "Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Object Detection", "Self-Supervised Learning", "Transfer Learning"], "method": ["InfoNCE", "Average Pooling", "1x1 Convolution", "Normalized Temperature-scaled Cross Entropy Loss", "ResNet", "MoCo", "Convolution", "ReLU", "SimCLR", "Residual Connection", "Dense Connections", "Feedforward Network", "Momentum Contrast", "Random Resized Crop", "Batch Normalization", "Residual Network", "ColorJitter", "Kaiming Initialization", "Jigsaw", "Color Jitter", "NT-Xent", "Random Gaussian Blur", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2007", "ImageNet"], "metric": ["Top 1 Accuracy", "MAP"], "title": "Learning Visual Representations for Transfer Learning by Suppressing Texture"} {"abstract": "In this paper, we address the semantic segmentation task with a new context aggregation scheme named \\emph{object context}, which focuses on enhancing the role of object information. 
Motivated by the fact that the category of each pixel is inherited from the object it belongs to, we define the object context for each pixel as the set of pixels that belong to the same category as the given pixel in the image. We use a binary relation matrix to represent the relationship between all pixels, where the value one indicates the two selected pixels belong to the same category and zero otherwise. We propose to use a dense relation matrix to serve as a surrogate for the binary relation matrix. The dense relation matrix is capable to emphasize the contribution of object information as the relation scores tend to be larger on the object pixels than the other pixels. Considering that the dense relation matrix estimation requires quadratic computation overhead and memory consumption w.r.t. the input size, we propose an efficient interlaced sparse self-attention scheme to model the dense relations between any two of all pixels via the combination of two sparse relation matrices. To capture richer context information, we further combine our interlaced sparse self-attention scheme with the conventional multi-scale context schemes including pyramid pooling~\\citep{zhao2017pyramid} and atrous spatial pyramid pooling~\\citep{chen2018deeplab}. We empirically show the advantages of our approach with competitive performances on five challenging benchmarks including: Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff", "field": ["Semantic Segmentation Models", "Semantic Segmentation Modules", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Miscellaneous Components"], "task": ["Scene Parsing", "Semantic Segmentation"], "method": ["Dilated Convolution", "PSPNet", "Average Pooling", "Auxiliary Classifier", "Convolution", "Batch Normalization", "ReLU", "Rectified Linear Units", "Pyramid Pooling Module"], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "OCNet: Object Context Network for Scene Parsing"} {"abstract": "The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. 
Our code and models will be made publicly available.", "field": ["Convolutions"], "task": ["Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Classification", "Video Object Segmentation", "Video Prediction", "Video Segmentation", "Video Semantic Segmentation"], "method": ["Convolution"], "dataset": ["DAVIS-2016", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-Score", "J&F"], "title": "Making a Case for 3D Convolutions for Object Segmentation in Videos"} {"abstract": "This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Representation Learning", "Semi-Supervised Image Classification"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet - 1% labeled data", "ImageNet - 10% labeled data"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "S4L: Self-Supervised Semi-Supervised Learning"} {"abstract": "In this paper, we introduce SalsaNext for the uncertainty-aware semantic segmentation of a full 3D LiDAR point cloud in real-time. SalsaNext is the next version of SalsaNet [1] which has an encoder-decoder architecture where the encoder unit has a set of ResNet blocks and the decoder part combines upsampled features from the residual blocks. In contrast to SalsaNet, we introduce a new context module, replace the ResNet encoder blocks with a new residual dilated convolution stack with gradually increasing receptive fields and add the pixel-shuffle layer in the decoder. Additionally, we switch from stride convolution to average pooling and also apply central dropout treatment. To directly optimize the Jaccard index, we further combine the weighted cross-entropy loss with Lovasz-Softmax loss [2]. We finally inject a Bayesian treatment to compute the epistemic and aleatoric uncertainties for each point in the cloud. We provide a thorough quantitative evaluation on the Semantic-KITTI dataset [3], which demonstrates that the proposed SalsaNext outperforms other state-of-the-art semantic segmentation networks and ranks first on the Semantic-KITTI leaderboard. 
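The SalsaNext record above replaces plain ResNet encoder blocks with residual stacks of dilated convolutions whose receptive fields grow gradually. A rough PyTorch sketch of such a block is shown below; the channel count, dilation schedule (1, 2, 4) and activation choice are assumptions for illustration rather than the exact SalsaNext block.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    # Stacked 3x3 convs with growing dilation (1, 2, 4) plus a skip connection,
    # in the spirit of SalsaNext's residual dilated-convolution encoder blocks.
    def __init__(self, channels):
        super().__init__()
        self.convs = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                          nn.BatchNorm2d(channels), nn.LeakyReLU(0.1))
            for d in (1, 2, 4)])

    def forward(self, x):
        return x + self.convs(x)   # residual connection keeps spatial size unchanged

print(DilatedResidualBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # (1, 32, 64, 64)
```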
We also release our source code https://github.com/TiagoCortinhal/SalsaNext.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["3D Semantic Segmentation", "Autonomous Driving", "Semantic Segmentation"], "method": ["ResNet", "Dilated Convolution", "Average Pooling", "Residual Block", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Dropout", "Kaiming Initialization", "Lovasz-Softmax", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving"} {"abstract": "Recurrent neural networks (RNNs) stand at the forefront of many recent\ndevelopments in deep learning. Yet a major difficulty with these models is\ntheir tendency to overfit, with dropout shown to fail when applied to recurrent\nlayers. Recent results at the intersection of Bayesian modelling and deep\nlearning offer a Bayesian interpretation of common deep learning techniques\nsuch as dropout. This grounding of dropout in approximate Bayesian inference\nsuggests an extension of the theoretical results, offering insights into the\nuse of dropout with RNN models. We apply this new variational inference based\ndropout technique in LSTM and GRU models, assessing it on language modelling\nand sentiment analysis tasks. The new approach outperforms existing techniques,\nand to the best of our knowledge improves on the single model state-of-the-art\nin language modelling with the Penn Treebank (73.4 test perplexity). This\nextends our arsenal of variational tools in deep learning.", "field": ["Recurrent Neural Networks", "Activation Functions", "Regularization"], "task": ["Bayesian Inference", "Language Modelling", "Sentiment Analysis", "Variational Inference"], "method": ["Gated Recurrent Unit", "Variational Dropout", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Dropout", "Embedding Dropout", "GRU", "Sigmoid Activation"], "dataset": ["Penn Treebank (Word Level)"], "metric": ["Validation perplexity", "Test perplexity"], "title": "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks"} {"abstract": "Emotion recognition in conversations is crucial for building empathetic machines. Present works in this domain do not explicitly consider the inter-personal influences that thrive in the emotional dynamics of dialogues. To this end, we propose Interactive COnversational memory Network (ICON), a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the self- and inter-speaker emotional influences into global memories. Such memories generate contextual summaries which aid in predicting the emotional orientation of utterance-videos. 
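For the variational-dropout record above ("A Theoretically Grounded Application of Dropout in Recurrent Neural Networks"), the key idea is to sample one dropout mask per sequence and reuse it at every time step. A simplified PyTorch sketch follows; it drops only the cell inputs and outputs, whereas the paper also ties masks on the recurrent connections and embeddings, and all sizes here are illustrative.

```python
import torch
import torch.nn as nn

class VariationalDropoutLSTM(nn.Module):
    # One dropout mask per sequence, reused at every time step (instead of
    # resampling per step as in naive dropout). Illustrative sketch only.
    def __init__(self, input_size, hidden_size, p=0.25):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.p = p

    def forward(self, x):                       # x: (time, batch, input_size)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        # Sample the masks once, then keep them fixed across time steps.
        in_mask = torch.bernoulli(x.new_full((B, x.size(-1)), 1 - self.p)) / (1 - self.p)
        out_mask = torch.bernoulli(h.new_full(h.shape, 1 - self.p)) / (1 - self.p)
        outputs = []
        for t in range(T):
            h, c = self.cell(x[t] * in_mask, (h, c))
            outputs.append(h * out_mask)
        return torch.stack(outputs)

y = VariationalDropoutLSTM(10, 20)(torch.randn(5, 3, 10))
print(y.shape)  # (5, 3, 20)
```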
Our model outperforms state-of-the-art networks on multiple classification and regression tasks in two benchmark datasets.", "field": ["Working Memory Models"], "task": ["Emotion Recognition", "Emotion Recognition in Conversation", "Multimodal Emotion Recognition", "Regression"], "method": ["Memory Network"], "dataset": ["IEMOCAP", "SEMAINE"], "metric": ["MAE (Arousal)", "MAE (Power)", "MAE (Valence)", "MAE (Expectancy)", "F1"], "title": "ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection"} {"abstract": "This paper explores sequential modelling of polyphonic music with deep neural networks. While recent breakthroughs have focussed on network architecture, we demonstrate that the representation of the sequence can make an equally significant contribution to the performance of the model as measured by validation set loss. By extracting salient features inherent to the training dataset, the model can either be conditioned on these features or trained to predict said features as extra components of the sequences being modelled. We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modelled, can improve overall model performance significantly. We first introduce TonicNet, a GRU-based model trained to initially predict the chord at a given time-step before then predicting the notes of each voice at that time-step, in contrast with the typical approach of predicting only the notes. We then evaluate TonicNet on the canonical JSB Chorales dataset and obtain state-of-the-art results.", "field": ["Output Functions", "Regularization", "Learning Rate Schedules", "Recurrent Neural Networks", "Feedforward Networks", "Skip Connections"], "task": ["Music Generation", "Music Modeling"], "method": ["Gated Recurrent Unit", "Variational Dropout", "Softmax", "Concatenated Skip Connection", "1cycle learning rate scheduling policy", "1cycle", "Dropout", "GRU", "Dense Connections"], "dataset": ["JSB Chorales"], "metric": ["NLL"], "title": "Improving Polyphonic Music Models with Feature-Rich Encoding"} {"abstract": "Detecting pedestrian has been arguably addressed as a special topic beyond\ngeneral object detection. Although recent deep learning object detectors such\nas Fast/Faster R-CNN [1, 2] have shown excellent performance for general object\ndetection, they have limited success for detecting pedestrian, and previous\nleading pedestrian detectors were in general hybrid methods combining\nhand-crafted and deep convolutional features. In this paper, we investigate\nissues involving Faster R-CNN [2] for pedestrian detection. We discover that\nthe Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a\nstand-alone pedestrian detector, but surprisingly, the downstream classifier\ndegrades the results. We argue that two reasons account for the unsatisfactory\naccuracy: (i) insufficient resolution of feature maps for handling small\ninstances, and (ii) lack of any bootstrapping strategy for mining hard negative\nexamples. Driven by these observations, we propose a very simple but effective\nbaseline for pedestrian detection, using an RPN followed by boosted forests on\nshared, high-resolution convolutional feature maps. We comprehensively evaluate\nthis method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting\ncompetitive accuracy and good speed. 
Code will be made publicly available.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Object Detection", "Pedestrian Detection", "Region Proposal"], "method": ["RPN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Is Faster R-CNN Doing Well for Pedestrian Detection?"} {"abstract": "We propose a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. The Legendre Memory Unit~(LMU) is mathematically derived to orthogonalize its continuous-time history -- doing so by solving $d$ coupled ordinary differential equations~(ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree $d - 1$. Backpropagation across LMUs outperforms equivalently-sized LSTMs on a chaotic time-series prediction task, improves memory capacity by two orders of magnitude, and significantly reduces training and inference times. LMUs can efficiently handle temporal dependencies spanning $100\\text{,}000$ time-steps, converge rapidly, and use few internal state-variables to learn complex functions spanning long windows of time -- exceeding state-of-the-art performance among RNNs on permuted sequential MNIST. These results are due to the network's disposition to learn scale-invariant features independently of step size. Backpropagation through the ODE solver allows each layer to adapt its internal time-step, enabling the network to learn task-relevant time-scales. We demonstrate that LMU memory cells can be implemented using $m$ recurrently-connected Poisson spiking neurons, $\\mathcal{O}( m )$ time and memory, with error scaling as $\\mathcal{O}( d / \\sqrt{m} )$. We discuss implementations of LMUs on analog and digital neuromorphic hardware.", "field": ["Recurrent Neural Networks"], "task": ["Sequential Image Classification", "Time Series", "Time Series Prediction"], "method": ["Legendre Memory Unit", "LMU"], "dataset": ["Sequential MNIST"], "metric": ["Permuted Accuracy"], "title": "Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks"} {"abstract": "As an instance-level recognition problem, person re-identification (ReID) relies on discriminative features, which not only capture different spatial scales but also encapsulate an arbitrary combination of multiple scales. We call features of both homogeneous and heterogeneous scales omni-scale features. In this paper, a novel deep ReID CNN is designed, termed Omni-Scale Network (OSNet), for omni-scale feature learning. This is achieved by designing a residual block composed of multiple convolutional streams, each detecting features at a certain scale. Importantly, a novel unified aggregation gate is introduced to dynamically fuse multi-scale features with input-dependent channel-wise weights. To efficiently learn spatial-channel correlations and avoid overfitting, the building block uses pointwise and depthwise convolutions. By stacking such block layer-by-layer, our OSNet is extremely lightweight and can be trained from scratch on existing ReID benchmarks. Despite its small model size, OSNet achieves state-of-the-art performance on six person ReID datasets, outperforming most large-sized models, often by a clear margin. 
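The OSNet record above keeps the model lightweight by factorizing convolutions into pointwise and depthwise parts. Below is a sketch of a standard depthwise-separable convolution in PyTorch as a stand-in for that primitive; OSNet's actual building block orders the factorization differently and adds the multi-stream unified aggregation gate, so this shows only the underlying idea.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Factorizes a kxk conv into a per-channel (depthwise) conv followed by a
    # 1x1 (pointwise) conv, trading a little accuracy for far fewer parameters.
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return torch.relu(self.bn(self.pointwise(self.depthwise(x))))

print(DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 32, 32)).shape)  # (1, 128, 32, 32)
```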
Code and models are available at: \\url{https://github.com/KaiyangZhou/deep-person-reid}.", "field": ["Activation Functions", "Normalization", "Convolutions", "Skip Connections", "Skip Connection Blocks"], "task": ["Person Re-Identification"], "method": ["Batch Normalization", "Convolution", "ReLU", "Residual Connection", "Residual Block", "Rectified Linear Units"], "dataset": ["CUHK03", "CUHK03 detected", "MSMT17", "DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "mAP", "MAP"], "title": "Omni-Scale Feature Learning for Person Re-Identification"} {"abstract": "We present a new method for separating a mixed audio sequence, in which multiple voices speak simultaneously. The new method employs gated neural networks that are trained to separate the voices at multiple processing steps, while maintaining the speaker in each output channel fixed. A different model is trained for every number of possible speakers, and the model with the largest number of speakers is employed to select the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.", "field": ["Convolutions", "Activation Functions", "Recurrent Neural Networks", "Loss Functions"], "task": ["Speech Separation"], "method": ["uPIT", "Long Short-Term Memory", "Convolution", "PReLU", "ReLU", "Parameterized ReLU", "LSTM", "utterance level permutation invariant training", "Rectified Linear Units"], "dataset": ["wsj0-2mix", "WSJ0-5mix", "WSJ0-3mix", "WSJ0-4mix"], "metric": ["SI-SDRi"], "title": "Voice Separation with an Unknown Number of Multiple Speakers"} {"abstract": "Quantitative analysis of brain tumors is critical for clinical decision\nmaking. While manual segmentation is tedious, time consuming and subjective,\nthis task is at the same time very challenging to solve for automatic\nsegmentation methods. In this paper we present our most recent effort on\ndeveloping a robust segmentation algorithm in the form of a convolutional\nneural network. Our network architecture was inspired by the popular U-Net and\nhas been carefully modified to maximize brain tumor segmentation performance.\nWe use a dice loss function to cope with class imbalances and use extensive\ndata augmentation to successfully prevent overfitting. Our method beats the\ncurrent state of the art on BraTS 2015, is one of the leading methods on the\nBraTS 2017 validation set (dice scores of 0.896, 0.797 and 0.732 for whole\ntumor, tumor core and enhancing tumor, respectively) and achieves very good\nDice scores on the test set (0.858 for whole, 0.775 for core and 0.647 for\nenhancing tumor). We furthermore take part in the survival prediction\nsubchallenge by training an ensemble of a random forest regressor and\nmultilayer perceptrons on shape features describing the tumor subregions. 
Our\napproach achieves 52.6% accuracy, a Spearman correlation coefficient of 0.496\nand a mean square error of 209607 on the test set.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Brain Tumor Segmentation", "Data Augmentation", "Decision Making", "Tumor Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["BRATS-2015"], "metric": ["Dice Score"], "title": "Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge"} {"abstract": "Data augmentation is an effective technique for improving the accuracy of\nmodern image classifiers. However, current data augmentation implementations\nare manually designed. In this paper, we describe a simple procedure called\nAutoAugment to automatically search for improved data augmentation policies. In\nour implementation, we have designed a search space where a policy consists of\nmany sub-policies, one of which is randomly chosen for each image in each\nmini-batch. A sub-policy consists of two operations, each operation being an\nimage processing function such as translation, rotation, or shearing, and the\nprobabilities and magnitudes with which the functions are applied. We use a\nsearch algorithm to find the best policy such that the neural network yields\nthe highest validation accuracy on a target dataset. Our method achieves\nstate-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without\nadditional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is\n0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error\nrate of 1.5%, which is 0.6% better than the previous state-of-the-art.\nAugmentation policies we find are transferable between datasets. The policy\nlearned on ImageNet transfers well to achieve significant improvements on other\ndatasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft,\nand Stanford Cars.", "field": ["Recurrent Neural Networks", "Activation Functions", "Image Data Augmentation"], "task": ["Data Augmentation", "Fine-Grained Image Classification", "Image Augmentation", "Image Classification"], "method": ["Long Short-Term Memory", "AutoAugment", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["FGVC Aircraft", "CIFAR-100", "Oxford 102 Flowers", "Oxford-IIIT Pets", "Caltech-101", "SVHN", "Stanford Cars"], "metric": ["Percentage error", "Accuracy", "Percentage correct", "Top-1 Error Rate"], "title": "AutoAugment: Learning Augmentation Policies from Data"} {"abstract": "The way that information propagates in neural networks is of great\nimportance. In this paper, we propose Path Aggregation Network (PANet) aiming\nat boosting information flow in proposal-based instance segmentation framework.\nSpecifically, we enhance the entire feature hierarchy with accurate\nlocalization signals in lower layers by bottom-up path augmentation, which\nshortens the information path between lower layers and topmost feature. We\npresent adaptive feature pooling, which links feature grid and all feature\nlevels to make useful information in each feature level propagate directly to\nfollowing proposal subnetworks. A complementary branch capturing different\nviews for each proposal is created to further improve mask prediction. 
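The AutoAugment record above searches over sub-policies, each consisting of two image operations with their own probability and magnitude. The PIL-based sketch below applies one such sub-policy; the concrete (op, probability, magnitude) triples are made up for illustration and are not a policy found by the paper's search.

```python
import random
from PIL import Image, ImageEnhance, ImageOps

# One AutoAugment-style sub-policy: two ops, each with its own probability and magnitude.
SUB_POLICY = [("rotate", 0.7, 10), ("color", 0.4, 1.6)]   # illustrative, not a searched policy

def apply_sub_policy(img, sub_policy):
    for op, prob, mag in sub_policy:
        if random.random() > prob:
            continue                                # skip this op with probability 1 - prob
        if op == "rotate":
            img = img.rotate(mag)
        elif op == "color":
            img = ImageEnhance.Color(img).enhance(mag)
        elif op == "autocontrast":
            img = ImageOps.autocontrast(img)
    return img

print(apply_sub_policy(Image.new("RGB", (32, 32)), SUB_POLICY).size)  # (32, 32)
```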
These\nimprovements are simple to implement, with subtle extra computational overhead.\nOur PANet reaches the 1st place in the COCO 2017 Challenge Instance\nSegmentation task and the 2nd place in Object Detection task without\nlarge-batch training. It is also state-of-the-art on MVD and Cityscapes. Code\nis available at https://github.com/ShuLiu1993/PANet", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Feature Extractors", "RoI Feature Extractors", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Weight Decay", "Average Pooling", "Bottom-up Path Augmentation", "1x1 Convolution", "RoIAlign", "PAFPN", "Region Proposal Network", "ResNet", "Convolution", "Adaptive Feature Pooling", "ReLU", "Residual Connection", "FPN", "Dense Connections", "Max Pooling", "RPN", "Grouped Convolution", "Batch Normalization", "Residual Network", "Kaiming Initialization", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Feature Pyramid Network", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "PANet"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "Path Aggregation Network for Instance Segmentation"} {"abstract": "Building instance segmentation models that are data-efficient and can handle rare object categories is an important challenge in computer vision. Leveraging data augmentations is a promising direction towards addressing this challenge. Here, we perform a systematic study of the Copy-Paste augmentation ([13, 12]) for instance segmentation where we randomly paste objects onto an image. Prior studies on Copy-Paste relied on modeling the surrounding visual context for pasting the objects. However, we find that the simple mechanism of pasting objects randomly is good enough and can provide solid gains on top of strong baselines. Furthermore, we show Copy-Paste is additive with semi-supervised methods that leverage extra data through pseudo labeling (e.g. self-training). On COCO instance segmentation, we achieve 49.1 mask AP and 57.3 box AP, an improvement of +0.6 mask AP and +1.5 box AP over the previous state-of-the-art. We further demonstrate that Copy-Paste can lead to significant improvements on the LVIS benchmark. 
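The Copy-Paste record above pastes randomly selected object instances from one training image onto another. A toy NumPy version of the core compositing step is sketched below; a real pipeline would also randomly scale and flip the source objects and update the pasted and occluded instance annotations.

```python
import numpy as np

def copy_paste(dst_img, src_img, src_mask):
    """Paste the object selected by a binary mask from src_img onto dst_img.

    dst_img, src_img: HxWx3 uint8 arrays of the same size; src_mask: HxW in {0, 1}.
    """
    m = src_mask[..., None].astype(np.float32)
    out = dst_img.astype(np.float32) * (1.0 - m) + src_img.astype(np.float32) * m
    return out.astype(np.uint8)

dst = np.zeros((64, 64, 3), np.uint8)           # destination training image (toy)
src = np.full((64, 64, 3), 255, np.uint8)        # source image containing the object
mask = np.zeros((64, 64), np.uint8)
mask[16:48, 16:48] = 1                           # instance mask of the pasted object
print(copy_paste(dst, src, mask).sum())
```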
Our baseline model outperforms the LVIS 2020 Challenge winning entry by +3.6 mask AP on rare categories.", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Policy Gradient Methods", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Image Models", "Stochastic Optimization", "Recurrent Neural Networks", "Feedforward Networks", "Neural Architecture Search", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections", "Image Model Blocks"], "task": ["Data Augmentation", "Image Augmentation", "Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["Depthwise Convolution", "simple Copy-Paste", "Cascade Mask R-CNN", "Average Pooling", "EfficientNet", "Global Average Pooling", "RMSProp", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "RoIAlign", "Proximal Policy Optimization", "ResNet", "Entropy Regularization", "Convolution", "NAS-FPN", "ReLU", "Residual Connection", "Dense Connections", "Swish", "Batch Normalization", "Residual Network", "PPO", "Kaiming Initialization", "Pointwise Convolution", "Neural Architecture Search", "Squeeze-and-Excitation Block", "Sigmoid Activation", "Copy-Paste", "Inverted Residual Block", "Softmax", "LSTM", "Stochastic Depth", "Depthwise Separable Convolution", "Bottleneck Residual Block", "Dropout", "Residual Block", "Mask R-CNN", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012 val", "COCO minival", "COCO test-dev", "LVIS v1.0", "PASCAL VOC 2007"], "metric": ["box AP", "MAP", "mask AP", "mIoU"], "title": "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation"} {"abstract": "Fluorescein Angiography (FA) is a technique that employs the designated camera for Fundus photography incorporating excitation and barrier filters. FA also requires fluorescein dye that is injected intravenously, which might cause adverse effects ranging from nausea, vomiting to even fatal anaphylaxis. Currently, no other fast and non-invasive technique exists that can generate FA without coupling with Fundus photography. To eradicate the need for an invasive FA extraction procedure, we introduce an Attention-based Generative network that can synthesize Fluorescein Angiography from Fundus images. The proposed gan incorporates multiple attention based skip connections in generators and comprises novel residual blocks for both generators and discriminators. It utilizes reconstruction, feature-matching, and perceptual loss along with adversarial training to produces realistic Angiograms that is hard for experts to distinguish from real ones. Our experiments confirm that the proposed architecture surpasses recent state-of-the-art generative networks for fundus-to-angio translation task.", "field": ["Stochastic Optimization"], "task": ["Fundus to Angiography Generation"], "method": ["Feedback Alignment", "FA"], "dataset": ["Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients"], "metric": ["Kernel Inception Distance", "FID"], "title": "Attention2AngioGAN: Synthesizing Fluorescein Angiography from Retinal Fundus Images using Generative Adversarial Networks"} {"abstract": "Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified evaluation for general visual representations hinders progress. 
Popular protocols are often too constrained (linear classification), limited in diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to representation quality (ELBO, reconstruction error). We present the Visual Task Adaptation Benchmark (VTAB), which defines good representations as those that adapt to diverse, unseen tasks with few examples. With VTAB, we conduct a large-scale study of many popular publicly-available representation learning algorithms. We carefully control confounders such as architecture and tuning budget. We address questions like: How effective are ImageNet representations beyond standard natural datasets? How do representations trained via generative and discriminative models compare? To what extent can self-supervision replace labels? And, how close are we to general visual representations?", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Representation Learning"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["VTAB-1k"], "metric": ["Top-1 Accuracy"], "title": "A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark"} {"abstract": "High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \\emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \\emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. 
All the codes are available at~{\\url{https://github.com/HRNet}}.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "RoI Feature Extractors", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Region Proposal", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Pose Estimation", "Representation Learning", "Semantic Segmentation"], "method": ["Average Pooling", "Faster R-CNN", "Cascade Corner Pooling", "1x1 Convolution", "RoIAlign", "Region Proposal Network", "Center Pooling", "HRNet", "ResNet", "RoIPool", "Convolution", "ReLU", "Residual Connection", "RPN", "Batch Normalization", "Residual Network", "Cascade R-CNN", "Kaiming Initialization", "Softmax", "Bottleneck Residual Block", "CenterNet", "Mask R-CNN", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes val", "COCO minival", "COCO test-dev", "CamVid", "PASCAL Context"], "metric": ["APM", "Mean IoU", "mIoU", "box AP", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "Deep High-Resolution Representation Learning for Visual Recognition"} {"abstract": "We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Word Embeddings"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["IWSLT2015 English-Vietnamese"], "metric": ["BLEU"], "title": "Transformers without Tears: Improving the Normalization of Self-Attention"} {"abstract": "Generative Adversarial Networks (GANs) convergence in a high-resolution\nsetting with a computational constrain of GPU memory capacity (from 12GB to 24\nGB) has been beset with difficulty due to the known lack of convergence rate\nstability. In order to boost network convergence of DCGAN (Deep Convolutional\nGenerative Adversarial Networks) and achieve good-looking high-resolution\nresults we propose a new layered network structure, HDCGAN, that incorporates\ncurrent state-of-the-art techniques for this effect. 
A novel dataset, Curt\\'o &\nZarza, containing human faces from different ethnical groups in a wide variety\nof illumination conditions and image resolutions is introduced. Curt\\'o is\nenhanced with HDCGAN synthetic images, thus being the first GAN augmented face\ndataset. We conduct extensive experiments on CelebA (MS-SSIM 0.1978 and\nDistance of Fr\\'echet 8.77) and Curt\\'o.", "field": ["Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Generative Models"], "task": ["Image Generation", "MS-SSIM", "SSIM"], "method": ["HDCGAN", "Generative Adversarial Network", "Scaled Exponential Linear Unit", "High-resolution Deep Convolutional Generative Adversarial Networks", "Adam", "SELU", "GAN", "Batch Normalization", "Tanh Activation", "Convolution", "ReLU", "DCGAN", "Deep Convolutional GAN", "Leaky ReLU", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["CelebA 128x128", "CelebA 64x64"], "metric": ["FID", "MS-SSIM"], "title": "High-Resolution Deep Convolutional Generative Adversarial Networks"} {"abstract": "Transductive inference is an effective means of tackling the data deficiency problem in few-shot learning settings. A popular transductive inference technique for few-shot metric-based approaches, is to update the prototype of each class with the mean of the most confident query examples, or confidence-weighted average of all the query samples. However, a caveat here is that the model confidence may be unreliable, which may lead to incorrect predictions. To tackle this issue, we propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries such that they improve the model's transductive inference performance on unseen tasks. We achieve this by meta-learning an input-adaptive distance metric over a task distribution under various model and data perturbations, which will enforce consistency on the model predictions under diverse uncertainties for unseen tasks. Moreover, we additionally suggest a regularization which explicitly enforces the consistency on the predictions across the different dimensions of a high-dimensional embedding vector. We validate our few-shot learning model with meta-learned confidence on four benchmark datasets, on which it largely outperforms strong recent baselines and obtains new state-of-the-art results. Further application on semi-supervised few-shot learning tasks also yields significant performance improvements over the baselines. The source code of our algorithm is available at https://github.com/seongmin-kye/MCT.", "field": ["Pooling Operations"], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": ["Global Average Pooling", "Average Pooling"], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-ImageNet - 1-Shot Learning"], "metric": ["Accuracy"], "title": "Meta-Learned Confidence for Few-shot Learning"} {"abstract": "In this paper, we present CensNet, Convolution with Edge-Node Switching graph neural network, for semi-supervised classification and regression in graph-structured data with both node and edge features. CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space. By using line graph of the original undirected graph, the role of nodes and edges are switched, and two novel graph convolution operations are proposed for feature propagation. 
Experimental results on real-world academic citation networks and quantum chemistry graphs show that our approach has achieved or matched the state-of-the-art performance.", "field": ["Convolutions", "Graph Embeddings"], "task": ["Graph Classification", "Graph Embedding", "Graph Regression", "Node Classification", "Regression"], "method": ["LINE", "Large-scale Information Network Embedding", "Convolution"], "dataset": ["Tox21 ", "Lipophilicity "], "metric": ["RMSE@80%Train", "AUC@80%Train"], "title": "CensNet: Convolution with Edge-Node Switching in Graph Neural Networks"} {"abstract": "Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications.", "field": ["Temporal Convolutions"], "task": ["Music Source Separation", "Speaker Separation", "Speech Enhancement", "Speech Separation"], "method": ["ConvTasNet", "Convolutional time-domain audio separation network"], "dataset": ["wsj0-2mix", "MUSDB18"], "metric": ["SDR (vocals)", "SDR (bass)", "SI-SDRi", "SDR (drums)", "SDR (other)"], "title": "Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation"} {"abstract": "Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. 
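For the Conv-TasNet record above, the essential structure is a learned linear encoder, a mask estimator applied to the encoded mixture, and a linear decoder that inverts each masked representation back to a waveform. The PyTorch sketch below keeps that skeleton but replaces the stacked dilated TCN with a small two-layer stand-in; filter counts, kernel sizes and the two-speaker setting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyTasNet(nn.Module):
    # Heavily simplified: linear encoder -> mask estimator -> linear decoder.
    def __init__(self, n_filters=256, kernel=16, stride=8, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        self.mask_net = nn.Sequential(              # stand-in for the stacked dilated TCN
            nn.Conv1d(n_filters, n_filters, 3, padding=1), nn.PReLU(),
            nn.Conv1d(n_filters, n_filters * n_speakers, 1), nn.Sigmoid())
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)

    def forward(self, mixture):                     # mixture: (batch, 1, time)
        w = self.encoder(mixture)                   # (batch, F, frames)
        masks = self.mask_net(w).chunk(self.n_speakers, dim=1)
        return [self.decoder(w * m) for m in masks] # one waveform estimate per speaker

est = TinyTasNet()(torch.randn(2, 1, 16000))
print([e.shape for e in est])                       # two (2, 1, 16000) estimates
```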
We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Citation Intent Classification", "Dependency Parsing", "Language Modelling", "Medical Named Entity Recognition", "Named Entity Recognition", "Participant Intervention Comparison Outcome Extraction", "Relation Extraction", "Sentence Classification"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["ChemProt", "Paper Field", "JNLPBA", "BC5CDR", "SciCite", "GENIA - UAS", "ACL-ARC", "EBM-NLP", "PubMed 20k RCT", "GENIA - LAS", "NCBI-disease", "SciERC", "ScienceCite"], "metric": ["F1"], "title": "SciBERT: A Pretrained Language Model for Scientific Text"} {"abstract": "We propose a new method for learning the structure of convolutional neural\nnetworks (CNNs) that is more efficient than recent state-of-the-art methods\nbased on reinforcement learning and evolutionary algorithms. Our approach uses\na sequential model-based optimization (SMBO) strategy, in which we search for\nstructures in order of increasing complexity, while simultaneously learning a\nsurrogate model to guide the search through structure space. Direct comparison\nunder the same search space shows that our method is up to 5 times more\nefficient than the RL method of Zoph et al. (2018) in terms of number of models\nevaluated, and 8 times faster in terms of total compute. The structures we\ndiscover in this way achieve state of the art classification accuracies on\nCIFAR-10 and ImageNet.", "field": ["Output Functions", "Stochastic Optimization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Neural Architecture Search"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Depthwise Convolution", "PNAS", "Progressive Neural Architecture Search", "Feedforward Network", "RMSProp", "Softmax", "Convolution", "Depthwise Separable Convolution", "Pointwise Convolution", "Dense Connections", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Progressive Neural Architecture Search"} {"abstract": "Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. 
Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.", "field": ["Regularization"], "task": ["Coreference Resolution"], "method": ["Entropy Regularization"], "dataset": ["OntoNotes", "CoNLL 2012"], "metric": ["Avg F1", "F1"], "title": "End-to-end Deep Reinforcement Learning Based Coreference Resolution"} {"abstract": "Machine learning models are commonly trained end-to-end and in a supervised setting, using paired (input, output) data. Classical examples include recent super-resolution methods that train on pairs of (low-resolution, high-resolution) images. However, these end-to-end approaches require re-training every time there is a distribution shift in the inputs (e.g., night images vs daylight) or relevant latent variables (e.g., camera blur or hand motion). In this work, we leverage state-of-the-art (SOTA) generative models (here StyleGAN2) for building powerful image priors, which enable application of Bayes' theorem for many downstream reconstruction tasks. Our method, called Bayesian Reconstruction through Generative Models (BRGM), uses a single pre-trained generator model to solve different image restoration tasks, i.e., super-resolution and in-painting, by combining it with different forward corruption models. We demonstrate BRGM on three large, yet diverse, datasets that enable us to build powerful priors: (i) 60,000 images from the Flick Faces High Quality dataset (ii) 240,000 chest X-rays from MIMIC III and (iii) a combined collection of 5 brain MRI datasets with 7,329 scans. Across all three datasets and without any dataset-specific hyperparameter tuning, our approach yields state-of-the-art performance on super-resolution, particularly at low-resolution levels, as well as inpainting, compared to state-of-the-art methods that are specific to each reconstruction task. Our source code and all pre-trained models are available online: https://razvanmarinescu.github.io/brgm/.", "field": ["Regularization", "Activation Functions", "Normalization", "Convolutions", "Generative Models"], "task": ["Image Denoising", "Image Inpainting", "Image Reconstruction", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": ["Convolution", "Weight Demodulation", "Leaky ReLU", "R1 Regularization", "Path Length Regularization", "StyleGAN2"], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "FFHQ", "FFHQ 1024 x 1024", "FFHQ 64x64 - 4x upscaling"], "metric": ["SSIM", "PSNR", "RMSE", "LPIPS"], "title": "Bayesian Image Reconstruction using Deep Generative Models"} {"abstract": "We learn to compute optical flow by combining a classical spatial-pyramid\nformulation with deep learning. This estimates large motions in a\ncoarse-to-fine approach by warping one image of a pair at each pyramid level by\nthe current flow estimate and computing an update to the flow. Instead of the\nstandard minimization of an objective function at each pyramid level, we train\none deep network per level to compute the flow update. Unlike the recent\nFlowNet approach, the networks do not need to deal with large motions; these\nare dealt with by the pyramid. This has several advantages. First, our Spatial\nPyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms\nof model parameters. This makes it more efficient and appropriate for embedded\napplications. Second, since the flow at each pyramid level is small (< 1\npixel), a convolutional approach applied to pairs of warped images is\nappropriate. 
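The coarse-to-fine scheme in the SPyNet record above repeatedly warps one image of the pair by the current flow estimate before predicting a flow update. A minimal bilinear warping helper is sketched below, assuming a recent PyTorch (torch.meshgrid with the indexing argument and F.grid_sample); the (dx, dy) channel convention for the flow is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (B, C, H, W) by a pixel-space flow (B, 2, H, W), channels = (dx, dy)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=img.dtype),
                            torch.arange(W, dtype=img.dtype), indexing="ij")
    grid = torch.stack((xs, ys)).unsqueeze(0) + flow           # absolute sample positions
    grid_x = 2.0 * grid[:, 0] / (W - 1) - 1.0                  # normalize to [-1, 1]
    grid_y = 2.0 * grid[:, 1] / (H - 1) - 1.0
    return F.grid_sample(img, torch.stack((grid_x, grid_y), dim=-1), align_corners=True)

img = torch.rand(1, 3, 32, 32)
# zero flow leaves the image (numerically) unchanged
print(warp(img, torch.zeros(1, 2, 32, 32)).shape)              # (1, 3, 32, 32)
```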
Third, unlike FlowNet, the learned convolution filters appear\nsimilar to classical spatio-temporal filters, giving insight into the method\nand how to improve it. Our results are more accurate than FlowNet on most\nstandard benchmarks, suggesting a new direction of combining classical flow\nmethods with deep learning.", "field": ["Convolutions"], "task": ["Dense Pixel Correspondence Estimation", "Optical Flow Estimation"], "method": ["Convolution"], "dataset": ["HPatches", "Sintel-final", "Sintel-clean"], "metric": ["Viewpoint IV AEPE", "Average End-Point Error", "Viewpoint III AEPE", "Viewpoint I AEPE", "Viewpoint V AEPE", "Viewpoint II AEPE"], "title": "Optical Flow Estimation using a Spatial Pyramid Network"} {"abstract": "Effective convolutional neural networks are trained on large sets of labeled\ndata. However, creating large labeled datasets is a very costly and\ntime-consuming task. Semi-supervised learning uses unlabeled data to train a\nmodel with higher accuracy when there is a limited set of labeled data\navailable. In this paper, we consider the problem of semi-supervised learning\nwith convolutional neural networks. Techniques such as randomized data\naugmentation, dropout and random max-pooling provide better generalization and\nstability for classifiers that are trained using gradient descent. Multiple\npasses of an individual sample through the network might lead to different\npredictions due to the non-deterministic behavior of these techniques. We\npropose an unsupervised loss function that takes advantage of the stochastic\nnature of these methods and minimizes the difference between the predictions of\nmultiple passes of a training sample through the network. We evaluate the\nproposed method on several benchmark datasets.", "field": ["Regularization"], "task": ["Data Augmentation", "Semi-Supervised Image Classification"], "method": ["Dropout"], "dataset": ["cifar-100, 10000 Labels", "SVHN, 250 Labels"], "metric": ["Accuracy"], "title": "Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning"} {"abstract": "Recent work has shown that data augmentation has the potential to significantly improve the generalization of deep learning models. Recently, automated augmentation strategies have led to state-of-the-art results in image classification and object detection. While these strategies were optimized for improving validation accuracy, they also led to state-of-the-art results in semi-supervised learning and improved robustness to common corruptions of images. An obstacle to a large-scale adoption of these methods is a separate search phase which increases the training complexity and may substantially increase the computational cost. Additionally, due to the separate search phase, these approaches are unable to adjust the regularization strength based on model or dataset size. Automated augmentation policies are often found by training small models on small datasets and subsequently applied to train larger models. In this work, we remove both of these obstacles. RandAugment has a significantly reduced search space which allows it to be trained on the target task with no need for a separate proxy task. Furthermore, due to the parameterization, the regularization strength may be tailored to different model and dataset sizes. RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous automated augmentation approaches on CIFAR-10/100, SVHN, and ImageNet. 
On the ImageNet dataset we achieve 85.0% accuracy, a 0.6% increase over the previous state-of-the-art and 1.0% increase over baseline augmentation. On object detection, RandAugment leads to 1.0-1.3% improvement over baseline augmentation, and is within 0.3% mAP of AutoAugment on COCO. Finally, due to its interpretable hyperparameter, RandAugment may be used to investigate the role of data augmentation with varying model and dataset size. Code is available online.", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Image Models", "Stochastic Optimization", "Recurrent Neural Networks", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Learning Rate Schedules", "Skip Connections", "Image Model Blocks"], "task": ["Data Augmentation", "Image Classification", "Object Detection"], "method": ["Depthwise Convolution", "Weight Decay", "Cosine Annealing", "Average Pooling", "EfficientNet", "RMSProp", "Cutout", "Long Short-Term Memory", "RandAugment", "Tanh Activation", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "AutoAugment", "Convolution", "ReLU", "Residual Connection", "FPN", "WideResNet", "ShakeDrop", "Wide Residual Block", "Dense Connections", "Swish", "Focal Loss", "Random Resized Crop", "Batch Normalization", "Residual Network", "ColorJitter", "Pointwise Convolution", "Kaiming Initialization", "Squeeze-and-Excitation Block", "Sigmoid Activation", "SGD with Momentum", "Color Jitter", "Inverted Residual Block", "Feature Pyramid Network", "LSTM", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Dropout", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SVHN", "ImageNet"], "metric": ["Percentage error", "Number of params", "Top 1 Accuracy"], "title": "RandAugment: Practical automated data augmentation with a reduced search space"} {"abstract": "Attention models have been intensively studied to improve NLP tasks such as\nmachine comprehension via both question-aware passage attention model and\nself-matching attention model. Our research proposes phase conductor\n(PhaseCond) for attention models in two meaningful ways. First, PhaseCond, an\narchitecture of multi-layered attention models, consists of multiple phases\neach implementing a stack of attention layers producing passage representations\nand a stack of inner or outer fusion layers regulating the information flow.\nSecond, we extend and improve the dot-product attention function for PhaseCond\nby simultaneously encoding multiple question and passage embedding layers from\ndifferent perspectives. We demonstrate the effectiveness of our proposed model\nPhaseCond on the SQuAD dataset, showing that our model significantly\noutperforms both state-of-the-art single-layered and multiple-layered attention\nmodels. 
We substantiate our results with new findings from both detailed qualitative\nanalysis and visualized examples showing the dynamic changes across\nmulti-layered attention models.", "field": ["Attention Mechanisms"], "task": ["Question Answering", "Reading Comprehension"], "method": ["Dot-Product Attention"], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Phase Conductor on Multi-layered Attentions for Machine Comprehension"} {"abstract": "Deep generative adversarial networks (GANs) have recently been shown to be\npromising for different computer vision applications, such as image editing,\nsynthesizing high-resolution images, and generating videos. These networks and\nthe corresponding learning scheme can handle various visual space mappings.\nWe approach GANs with a novel training method and learning objective, to\ndiscover multiple object instances for three cases: 1) synthesizing a picture\nof a specific object within a cluttered scene; 2) localizing different\ncategories in images for weakly supervised object detection; and 3) improving\nobject discovery in object detection pipelines. A crucial advantage of our\nmethod is that it learns a new deep similarity metric to distinguish multiple\nobjects in one image. We demonstrate that the network can act as an\nencoder-decoder generating parts of an image which contain an object, or as a\nmodified deep CNN to represent images for object detection in supervised and\nweakly supervised schemes. Our ranking GAN offers a novel way to search through\nimages for object-specific patterns. We have conducted experiments for\ndifferent scenarios and demonstrate the method's performance for object\nsynthesis and weakly supervised object detection and classification using\nthe MS-COCO and PASCAL VOC datasets.", "field": ["Generative Models", "Convolutions"], "task": ["Object Detection", "Object Discovery", "Weakly Supervised Object Detection"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["COCO test-dev"], "metric": ["AP50"], "title": "Weakly Supervised Object Discovery by Generative Adversarial & Ranking Networks"} {"abstract": "Feedback alignment was proposed to address the biological implausibility of the backpropagation algorithm, which requires the transportation of the weight transpose during the backward pass. The idea was later built upon with the proposal of direct feedback alignment (DFA), which propagates the error directly from the output layer to each hidden layer in the backward path using a fixed random weight matrix. This contribution was significant because it allowed for the parallelization of the backward pass by the use of these feedback connections. However, just like feedback alignment, DFA does not perform well in deep convolutional networks. We propose to learn the backward weight matrices in DFA, adopting the methodology of Kolen-Pollack learning, to improve training and inference accuracy in deep convolutional neural networks by updating the direct feedback connections such that they come to estimate the forward path.
The proposed method improves the accuracy of learning with direct feedback connections and narrows the gap between parallel training with these learned feedback connections and serial training by means of backpropagation.", "field": ["Stochastic Optimization"], "task": ["Image Classification"], "method": ["KP", "DFA", "Kolen-Pollack Learning", "FA", "Feedback Alignment", "Direct Feedback Alignment"], "dataset": ["CIFAR-100"], "metric": ["Percentage correct"], "title": "Learning the Connections in Direct Feedback Alignment"}
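As an illustration of the last entry above, here is a minimal sketch of direct feedback alignment with a Kolen-Pollack-style update for the learned feedback matrix, on a toy two-layer NumPy MLP. It is not the authors' implementation: the network shape, the toy data, the squared loss, and the specific shared-update-plus-decay rule that nudges the feedback matrix toward the transpose of the forward output weights are all assumptions made for exposition.

```python
# Toy sketch: direct feedback alignment (DFA) with a Kolen-Pollack-style
# learned feedback matrix. Illustrative only; shapes and data are made up.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
lr, decay = 0.05, 1e-3

# forward weights
W1 = rng.normal(0.0, 0.1, size=(n_hid, n_in))   # input -> hidden
W2 = rng.normal(0.0, 0.1, size=(n_out, n_hid))  # hidden -> output
# direct feedback matrix carrying the output error straight to the hidden layer
B1 = rng.normal(0.0, 0.1, size=(n_hid, n_out))

# toy data: random inputs with random one-hot targets
x = rng.normal(size=(n_in, 32))
y = np.eye(n_out)[rng.integers(n_out, size=32)].T

for step in range(200):
    # forward pass
    h = np.tanh(W1 @ x)
    p = W2 @ h
    e = p - y                              # output error (squared-loss gradient)

    # DFA backward path: the hidden error is B1 @ e, not W2.T @ e as in backprop
    dh = (B1 @ e) * (1.0 - h**2)

    # forward weight updates
    dW2 = e @ h.T / x.shape[1]
    dW1 = dh @ x.T / x.shape[1]
    W2 -= lr * dW2 + decay * W2
    W1 -= lr * dW1 + decay * W1

    # Kolen-Pollack-style rule (an assumption for this sketch): apply the
    # transposed output-layer update, plus decay, to the feedback matrix so
    # that B1 drifts toward W2.T and the feedback estimates the forward path.
    B1 -= lr * dW2.T + decay * B1
```

The decay term is what makes the Kolen-Pollack argument go through in this toy setting: because W2 and B1 receive matching (transposed) updates while their initial random components shrink, B1 converges toward W2.T, so the direct feedback comes to estimate the forward path rather than remaining a fixed random projection.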