{"abstract": "We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly work with a wide range of backbones, to navigate the balance between performance and efficiency. The plug-in of sophisticated backbones (e.g., Inception-ResNet-v2) can lead to solid state-of-the-art deblurring. Meanwhile, with light-weight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 reaches 10-100 times faster than the nearest competitors, while maintaining close to state-of-the-art results, implying the option of real-time video deblurring. We demonstrate that DeblurGAN-v2 obtains very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective), as well as efficiency. Besides, we show the architecture to be effective for general image restoration tasks too. Our codes, models and data are available at: https://github.com/KupynOrest/DeblurGANv2", "field": ["Generative Models", "Convolutions"], "task": ["Deblurring", "Image Restoration", "Single-Image Blind Deblurring"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["RealBlur-R", "RealBlur-J", "GoPro", "RealBlur-J (trained on GoPro)", "RealBlur-R (trained on GoPro)", "HIDE (trained on GOPRO)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better"} {"abstract": "Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into AutoML search space and develop a new family of models, named as MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPS). 
Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet", "field": ["Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["AutoML", "Image Classification", "Object Detection"], "method": ["MixConv", "Average Pooling", "1x1 Convolution", "MobileNetV2", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "MobileNetV1", "MixNet", "Swish", "Grouped Convolution", "Batch Normalization", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Mixed Depthwise Convolution", "Sigmoid Activation", "Inverted Residual Block", "Softmax", "Dropout", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "MixConv: Mixed Depthwise Convolutional Kernels"} {"abstract": "Person Re-Identification is a challenging task that aims to retrieve all instances of a query image across a system of non-overlapping cameras. Due to the various extreme changes of view, it is common that local regions that could be used to match people are suppressed, which leads to a scenario where approaches have to evaluate the similarity of images based on less informative regions. In this work, we introduce the Top-DB-Net, a method based on Top DropBlock that pushes the network to learn to focus on the scene foreground, with special emphasis on the most task-relevant regions and, at the same time, encodes low informative regions to provide high discriminability. The Top-DB-Net is composed of three streams: (i) a global stream encodes rich image information from a backbone, (ii) the Top DropBlock stream encourages the backbone to encode low informative regions with high discriminative features, and (iii) a regularization stream helps to deal with the noise created by the dropping process of the second stream, when testing the first two streams are used. Vast experiments on three challenging datasets show the capabilities of our approach against state-of-the-art methods. Qualitative results demonstrate that our method exhibits better activation maps focusing on reliable parts of the input images.", "field": ["Regularization"], "task": ["Person Re-Identification"], "method": ["DropBlock"], "dataset": ["CUHK03 detected", "DukeMTMC-reID", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Top-DB-Net: Top DropBlock for Activation Enhancement in Person Re-Identification"} {"abstract": "Context information is critical for image semantic segmentation. Especially in indoor scenes, the large variation of object scales makes spatial-context an important factor for improving the segmentation performance. Thus, in this paper, we propose a novel variational context-deformable (VCD) module to learn adaptive receptive-field in a structured fashion. Different from standard ConvNets, which share fixed-size spatial context for all pixels, the VCD module learns a deformable spatial-context with the guidance of depth information: depth information provides clues for identifying real local neighborhoods. Specifically, adaptive Gaussian kernels are learned with the guidance of multimodal information. 
By multiplying the learned Gaussian kernel with standard convolution filters, the VCD module can aggregate flexible spatial context for each pixel during convolution. The main contributions of this work are as follows: 1) a novel VCD module is proposed, which exploits learnable Gaussian kernels to enable feature learning with structured adaptive-context; 2) variational Bayesian probabilistic modeling is introduced for the training of VCD module, which can make it continuous and more stable; 3) a perspective-aware guidance module is designed to take advantage of multi-modal information for RGB-D segmentation. We evaluate the proposed approach on three widely-used datasets, and the performance improvement has shown the effectiveness of the proposed method.\r", "field": ["Convolutions"], "task": ["Scene Parsing", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["Cityscapes test"], "metric": ["mIoU"], "title": "Variational Context-Deformable ConvNets for Indoor Scene Parsing"} {"abstract": "Accurate environment perception is essential for automated driving. When using monocular cameras, the distance estimation of elements in the environment poses a major challenge. Distances can be more easily estimated when the camera perspective is transformed to a bird's eye view (BEV). For flat surfaces, Inverse Perspective Mapping (IPM) can accurately transform images to a BEV. Three-dimensional objects such as vehicles and vulnerable road users are distorted by this transformation making it difficult to estimate their position relative to the sensor. This paper describes a methodology to obtain a corrected 360{\\deg} BEV image given images from multiple vehicle-mounted cameras. The corrected BEV image is segmented into semantic classes and includes a prediction of occluded areas. The neural network approach does not rely on manually labeled data, but is trained on a synthetic dataset in such a way that it generalizes well to real-world data. By using semantically segmented images as input, we reduce the reality gap between simulated and real-world data and are able to show that our method can be successfully applied in the real world. Extensive experiments conducted on the synthetic data demonstrate the superiority of our approach compared to IPM. Source code and datasets are available at https://github.com/ika-rwth-aachen/Cam2BEV", "field": ["Semantic Segmentation Models", "Output Functions", "Semantic Segmentation Modules", "Convolutional Neural Networks", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Bird View Synthesis", "Cross-View Image-to-Image Translation", "Image Stitching", "Semantic Segmentation"], "method": ["Depthwise Convolution", "Dilated Convolution", "Average Pooling", "1x1 Convolution", "MobileNetV2", "Convolution", "uNetXST", "Batch Normalization", "Spatial Transformer", "Pointwise Convolution", "Atrous Spatial Pyramid Pooling", "Inverted Residual Block", "Softmax", "Concatenated Skip Connection", "DeepLabv3", "ASPP", "Depthwise Separable Convolution", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["Cam2BEV"], "metric": ["Mean IoU"], "title": "A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View"} {"abstract": "Recently, it has attracted much attention to build reliable named entity recognition (NER) systems using limited annotated data. 
Nearly all existing works heavily rely on domain-specific resources, such as external lexicons and knowledge bases. However, such domain-specific resources are often not available; meanwhile, it is difficult and expensive to construct them, which has become a key obstacle to wider adoption. To tackle the problem, in this work, we propose a novel robust and domain-adaptive approach RDANER for low-resource NER, which only uses cheap and easily obtainable resources. Extensive experiments on three benchmark datasets demonstrate that our approach achieves the best performance when only using cheap and easily obtainable resources, and delivers competitive results against state-of-the-art methods which use difficult-to-obtain domain-specific resources. All our code and corpora can be found on https://github.com/houking-can/RDANER.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Low Resource Named Entity Recognition", "Named Entity Recognition"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SciERC", "BC5CDR", "NCBI-disease"], "metric": ["F1"], "title": "A Robust and Domain-Adaptive Approach for Low-Resource Named Entity Recognition"} {"abstract": "The tradeoff between receptive field size and efficiency is a crucial issue\nin low level vision. Plain convolutional networks (CNNs) generally enlarge the\nreceptive field at the expense of computational cost. Recently, dilated\nfiltering has been adopted to address this issue. But it suffers from gridding\neffect, and the resulting receptive field is only a sparse sampling of input\nimage with checkerboard patterns. In this paper, we present a novel multi-level\nwavelet CNN (MWCNN) model for better tradeoff between receptive field size and\ncomputational efficiency. With the modified U-Net architecture, wavelet\ntransform is introduced to reduce the size of feature maps in the contracting\nsubnetwork. Furthermore, another convolutional layer is further used to\ndecrease the channels of feature maps. In the expanding subnetwork, inverse\nwavelet transform is then deployed to reconstruct the high resolution feature\nmaps. 
Our MWCNN can also be explained as the generalization of dilated\nfiltering and subsampling, and can be applied to many image restoration tasks.\nThe experimental results clearly show the effectiveness of MWCNN for image\ndenoising, single image super-resolution, and JPEG image artifacts removal.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Denoising", "Image Denoising", "Image Restoration", "Image Super-Resolution", "JPEG Artifact Correction", "Super-Resolution"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["Urban100 sigma15", "BSD100 - 4x upscaling", "Set12 sigma25", "LIVE1 (Quality 10 Color)", "ICB (Quality 10 Grayscale)", "ICB (Quality 10 Color)", "Set14 - 2x upscaling", "BSD100 - 2x upscaling", "BSD68 sigma50", "Urban100 - 3x upscaling", "LIVE1 (Quality 40 Grayscale)", "LIVE1 (Quality 20 Color)", "BSD68 sigma25", "Classic5 (Quality 20 Grayscale)", "Set5 - 2x upscaling", "Urban100 - 4x upscaling", "ICB (Quality 30 Color)", "Set5 - 3x upscaling", "Urban100 sigma25", "Set14 - 4x upscaling", "Set12 sigma50", "Set12 sigma15", "Urban100 sigma50", "Set14 - 3x upscaling", "Live1 (Quality 10 Grayscale)", "Classic5 (Quality 10 Grayscale)", "LIVE1 (Quality 20 Grayscale)", "Set5 - 4x upscaling", "ICB (Quality 20 Color)", "LIVE1 (Quality 30 Grayscale)", "Classic5 (Quality 40 Grayscale)", "BSD68 sigma15", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "Classic5 (Quality 30 Grayscale)", "ICB (Quality 20 Grayscale)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "Multi-level Wavelet-CNN for Image Restoration"} {"abstract": "We propose a new bottom-up method for multi-person 2D human pose estimation\nthat is particularly well suited for urban mobility such as self-driving cars\nand delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF)\nto localize body parts and a Part Association Field (PAF) to associate body\nparts with each other to form full human poses. Our method outperforms previous\nmethods at low resolution and in crowded, cluttered and occluded scenes thanks\nto (i) our new composite field PAF encoding fine-grained information and (ii)\nthe choice of Laplace loss for regressions which incorporates a notion of\nuncertainty. Our architecture is based on a fully convolutional, single-shot,\nbox-free design. We perform on par with the existing state-of-the-art bottom-up\nmethod on the standard COCO keypoint task and produce state-of-the-art results\non a modified COCO keypoint task for the transportation domain.", "field": ["Image Representations"], "task": ["2D Human Pose Estimation", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation", "Self-Driving Cars"], "method": ["Composite Fields"], "dataset": ["COCO test-dev"], "metric": ["APM", "AP", "APL"], "title": "PifPaf: Composite Fields for Human Pose Estimation"} {"abstract": "The pressure of ever-increasing patient demand and budget restrictions make hospital bed management a daily challenge for clinical staff. Most critical is the efficient allocation of resource-heavy Intensive Care Unit (ICU) beds to the patients who need life support. Central to solving this problem is knowing for how long the current set of ICU patients are likely to stay in the unit. 
In this work, we propose a new deep learning model based on the combination of temporal convolution and pointwise (1x1) convolution, to solve the length of stay prediction task on the eICU and MIMIC-IV critical care datasets. The model - which we refer to as Temporal Pointwise Convolution (TPC) - is specifically designed to mitigate common challenges with Electronic Health Records, such as skewness, irregular sampling and missing data. In doing so, we have achieved significant performance benefits of 18-68% (metric and dataset dependent) over the commonly used Long-Short Term Memory (LSTM) network, and the multi-head self-attention network known as the Transformer. By adding mortality prediction as a side-task, we can improve performance further still, resulting in a mean absolute deviation of 1.55 days (eICU) and 2.28 days (MIMIC-IV) on predicting remaining length of stay.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Length-of-Stay prediction", "Mortality Prediction", "Predicting Patient Outcomes"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Convolution", "1x1 Convolution", "Residual Connection", "Label Smoothing", "Scaled Dot-Product Attention", "Dropout", "Pointwise Convolution", "Dense Connections"], "dataset": ["eICU Collaborative Research Database"], "metric": ["Kappa"], "title": "Temporal Pointwise Convolutional Networks for Length of Stay Prediction in the Intensive Care Unit"} {"abstract": "We rethink a well-known bottom-up approach for multi-person pose estimation and propose an improved one. The improved approach surpasses the baseline significantly thanks to (1) an intuitional yet more sensible representation, which we refer to as body parts to encode the connection information between keypoints, (2) an improved stacked hourglass network with attention mechanisms, (3) a novel focal L2 loss which is dedicated to hard keypoint and keypoint association (body part) mining, and (4) a robust greedy keypoint assignment algorithm for grouping the detected keypoints into individual poses. Our approach not only works straightforwardly but also outperforms the baseline by about 15% in average precision and is comparable to the state of the art on the MS-COCO test-dev dataset. The code and pre-trained models are publicly available online.", "field": ["Pose Estimation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Multi-Person Pose Estimation", "Pose Estimation"], "method": ["Convolution", "1x1 Convolution", "ReLU", "Residual Connection", "Hourglass Module", "Stacked Hourglass Network", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "AR50", "AP", "APL", "AR"], "title": "Simple Pose: Rethinking and Improving a Bottom-up Approach for Multi-Person Pose Estimation"} {"abstract": "This paper presents a new method SOLOIST, which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model. 
We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion. The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching. Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost. We will release our code and pre-trained models for reproducible research.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["End-To-End Dialogue Modelling", "Few-Shot Learning", "Language Modelling", "Transfer Learning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["MULTIWOZ 2.0"], "metric": ["MultiWOZ (Inform)", "BLEU", "MultiWOZ (Success)"], "title": "SOLOIST: Few-shot Task-Oriented Dialog with A Single Pre-trained Auto-regressive Model"} {"abstract": "U-Net has been providing state-of-the-art performance in many medical image segmentation problems. Many modifications have been proposed for U-Net, such as attention U-Net, recurrent residual convolutional U-Net (R2-UNet), and U-Net with residual blocks or blocks with dense connections. However, all these modifications have an encoder-decoder structure with skip connections, and the number of paths for information flow is limited. We propose LadderNet in this paper, which can be viewed as a chain of multiple U-Nets. Instead of only one pair of encoder branch and decoder branch in U-Net, a LadderNet has multiple pairs of encoder-decoder branches, and has skip connections between every pair of adjacent decoder and decoder branches in each level. Inspired by the success of ResNet and R2-UNet, we use modified residual blocks where two convolutional layers in one block share the same weights. A LadderNet has more paths for information flow because of skip connections and residual blocks, and can be viewed as an ensemble of Fully Convolutional Networks (FCN). The equivalence to an ensemble of FCNs improves segmentation accuracy, while the shared weights within each residual block reduce parameter number. Semantic segmentation is essential for retinal disease detection. We tested LadderNet on two benchmark datasets for blood vessel segmentation in retinal images, and achieved superior performance over methods in the literature. 
The implementation is provided \\url{https://github.com/juntang-zhuang/LadderNet}", "field": ["Semantic Segmentation Models", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Medical Image Segmentation", "Retinal Vessel Segmentation", "Semantic Segmentation"], "method": ["ResNet", "U-Net", "Average Pooling", "Residual Block", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CHASE_DB1", "DRIVE"], "metric": ["F1 score", "AUC"], "title": "LadderNet: Multi-path networks based on U-Net for medical image segmentation"} {"abstract": "Online hate speech is a newborn problem in our modern society which is growing at a steady rate exploiting weaknesses of the corresponding regimes that characterise several social media platforms. Therefore, this phenomenon is mainly cultivated through such comments, either during users' interaction or on posted multimedia context. Nowadays, giant companies own platforms where many millions of users log in daily. Thus, protection of their users from exposure to similar phenomena for keeping up with the corresponding law, as well as for retaining a high quality of offered services, seems mandatory. Having a robust and reliable mechanism for identifying and preventing the uploading of related material would have a huge effect on our society regarding several aspects of our daily life. On the other hand, its absence would deteriorate heavily the total user experience, while its erroneous operation might raise several ethical issues. In this work, we present a protocol for creating a more suitable dataset, regarding its both informativeness and representativeness aspects, favouring the safer capture of hate speech occurrence, without at the same time restricting its applicability to other classification problems. Moreover, we produce and publish a textual dataset with two variants: binary and multi-label, called `ETHOS', based on YouTube and Reddit comments validated through figure-eight crowdsourcing platform. Our assumption about the production of more compatible datasets is further investigated by applying various classification models and recording their behaviour over several appropriate metrics.", "field": ["Convolutions"], "task": ["Hate Speech Detection"], "method": ["1x1 Convolution"], "dataset": ["Ethos MultiLabel", "Ethos Binary"], "metric": ["Classification Accuracy", "Precision", "F1-score", "Hamming Loss"], "title": "ETHOS: an Online Hate Speech Detection Dataset"} {"abstract": "Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. 
Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers. Project page at https://compvis.github.io/taming-transformers/ .", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Image Generation", "Image-to-Image Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["ADE20K Labels-to-Photos", "COCO-Stuff Labels-to-Photos"], "metric": ["FID"], "title": "Taming Transformers for High-Resolution Image Synthesis"} {"abstract": "Model efficiency has become increasingly important in computer vision. In this paper, we systematically study neural network architecture design choices for object detection and propose several key optimizations to improve efficiency. First, we propose a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multiscale feature fusion; Second, we propose a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time. Based on these optimizations and better backbones, we have developed a new family of object detectors, called EfficientDet, which consistently achieve much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with single model and single-scale, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs, being 4x - 9x smaller and using 13x - 42x fewer FLOPs than previous detectors. Code is available at https://github.com/google/automl/tree/master/efficientdet.", "field": ["Image Model Blocks", "Image Data Augmentation", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Feature Extractors", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Object Detection Models", "Image Models", "Skip Connection Blocks"], "task": ["AutoML", "Object Detection", "Real-Time Object Detection"], "method": ["Depthwise Convolution", "Weight Decay", "Cosine Annealing", "Average Pooling", "EfficientNet", "RMSProp", "EfficientDet", "1x1 Convolution", "BiFPN", "Random Horizontal Flip", "Convolution", "ReLU", "Dense Connections", "Swish", "Image Scale Augmentation", "Focal Loss", "Batch Normalization", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Sigmoid Activation", "SGD with Momentum", "Inverted Residual Block", "Linear Warmup With Cosine Annealing", "Dropout", "Depthwise Separable Convolution", "Rectified Linear Units"], "dataset": ["COCO", "COCO minival", "COCO test-dev"], "metric": ["APM", "FPS", "MAP", "box AP", "AP75", "APS", "APL", "AP50"], "title": "EfficientDet: Scalable and Efficient Object Detection"} {"abstract": "The development of efficient models for predicting specific properties through machine learning is of great importance for the innovation of chemistry and material science. 
However, predicting electronic structure properties like frontier molecular orbital HOMO and LUMO energy levels and their HOMO-LUMO gaps from the small-sized molecule data to larger molecules remains a challenge. Here we develop a multi-level attention strategy that enables chemically interpretable insights to be fused into multi-task learning of up to 110,000 records of data in QM9 for random split evaluation. The good transferability for predicting larger molecules outside the training set is demonstrated in both QM9 and Alchemy datasets. The efficient and accurate prediction of 12 properties including dipole moment, HOMO, and Gibbs free energy within chemical accuracy is achieved by using our specifically designed interpretable multi-level attention neural network, named as DeepMoleNet. Remarkably, the present multi-task deep learning model adopts the atom-centered symmetry functions (ACSFs) descriptor as one of the prediction targets, rather than using ACSFs as input in the conventional way. The proposed multi-level attention neural network is applicable to high-throughput screening of numerous chemical species to accelerate rational designs of drug, material, and chemical reactions.", "field": ["Activation Functions"], "task": ["Drug Discovery", "Formation Energy", "Multi-Task Learning"], "method": ["Rational", "Rational Activation Function"], "dataset": ["QM9"], "metric": ["Error ratio"], "title": "Transferable Multi-level Attention Neural Network for Accurate Prediction of Quantum Chemistry Properties via Multi-task Learning"} {"abstract": "For machine reading comprehension, the capacity of effectively modeling the linguistic knowledge from the detail-riddled and lengthy passages and getting rid of the noise is essential to improve its performance. Traditional attentive models attend to all words without explicit constraint, which results in inaccurate concentration on some dispensable words. In this work, we propose using syntax to guide the text modeling by incorporating explicit syntactic constraints into attention mechanism for better linguistically motivated word representations. In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention. Syntax-guided network (SG-Net) is then composed of this extra SDOI-SAN and the SAN from the original Transformer encoder through a dual contextual architecture for better linguistics inspired representation. To verify its effectiveness, the proposed SG-Net is applied to typical pre-trained language model BERT which is right based on a Transformer encoder. 
Extensive experiments on popular benchmarks including SQuAD 2.0 and RACE show that the proposed SG-Net design helps achieve substantial performance improvement over strong baselines.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": ["Weight Decay", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT", "Rectified Linear Units"], "dataset": ["SQuAD2.0 dev", "SQuAD2.0"], "metric": ["EM", "F1"], "title": "SG-Net: Syntax-Guided Machine Reading Comprehension"} {"abstract": "Recurrent neural networks have been very successful at predicting sequences\nof words in tasks such as language modeling. However, all such models are based\non the conventional classification framework, where the model is trained\nagainst one-hot targets, and each word is represented both as an input and as\nan output in isolation. This causes inefficiencies in learning both in terms of\nutilizing all of the information and in terms of the number of parameters\nneeded to train. We introduce a novel theoretical framework that facilitates\nbetter learning in language modeling, and show that our framework leads to\ntying together the input embedding and the output projection matrices, greatly\nreducing the number of trainable variables. Our framework leads to state of the\nart performance on the Penn Treebank with a variety of network models.", "field": ["Parameter Sharing"], "task": ["Language Modelling"], "method": ["Weight Tying"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2"], "metric": ["Validation perplexity", "Test perplexity"], "title": "Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling"} {"abstract": "Neural architecture search (NAS) with an accuracy predictor that predicts the accuracy of candidate architectures has drawn increasing interests due to its simplicity and effectiveness. Previous works employ neural network based predictors which unfortunately cannot well exploit the tabular data representations of network architectures. As decision tree-based models can better handle tabular data, in this paper, we propose to leverage gradient boosting decision tree (GBDT) as the predictor for NAS and demonstrate that it can improve the prediction accuracy and help to find better architectures than neural network based predictors. Moreover, considering that a better and compact search space can ease the search process, we propose to prune the search space gradually according to important features derived from GBDT using an interpreting tool named SHAP. In this way, NAS can be performed by first pruning the search space (using GBDT as a pruner) and then searching a neural architecture (using GBDT as a predictor), which is more efficient and effective. 
Experiments on NASBench-101 and ImageNet demonstrate the effectiveness of GBDT for NAS: (1) NAS with GBDT predictor finds top-10 architecture (among all the architectures in the search space) with $0.18\\%$ test regret on NASBench-101, and achieves $24.2\\%$ top-1 error rate on ImageNet; and (2) GBDT based search space pruning and neural architecture search further achieves $23.5\\%$ top-1 error rate on ImageNet.", "field": ["Interpretability"], "task": ["Neural Architecture Search"], "method": ["Shapley Additive Explanations", "SHAP"], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "MACs", "Params", "Accuracy"], "title": "Neural Architecture Search with GBDT"} {"abstract": "Although great progress in supervised person re-identification (Re-ID) has been made recently, due to the viewpoint variation of a person, Re-ID remains a massive visual challenge. Most existing viewpoint-based person Re-ID methods project images from each viewpoint into separated and unrelated sub-feature spaces. They only model the identity-level distribution inside an individual viewpoint but ignore the underlying relationship between different viewpoints. To address this problem, we propose a novel approach, called \\textit{Viewpoint-Aware Loss with Angular Regularization }(\\textbf{VA-reID}). Instead of one subspace for each viewpoint, our method projects the feature from different viewpoints into a unified hypersphere and effectively models the feature distribution on both the identity-level and the viewpoint-level. In addition, rather than modeling different viewpoints as hard labels used for conventional viewpoint classification, we introduce viewpoint-aware adaptive label smoothing regularization (VALSR) that assigns the adaptive soft label to feature representation. VALSR can effectively solve the ambiguity of the viewpoint cluster label assignment. Extensive experiments on the Market1501 and DukeMTMC-reID datasets demonstrated that our method outperforms the state-of-the-art supervised Re-ID methods.", "field": ["Regularization"], "task": ["Person Re-Identification"], "method": ["Label Smoothing"], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "Rank-5", "MAP"], "title": "Viewpoint-Aware Loss with Angular Regularization for Person Re-Identification"} {"abstract": "In a task-oriented dialog system, the goal of dialog state tracking (DST) is to monitor the state of the conversation from the dialog history. Recently, many deep learning based methods have been proposed for the task. Despite their impressive performance, current neural architectures for DST are typically heavily-engineered and conceptually complex, making it difficult to implement, debug, and maintain them in a production setting. In this work, we propose a simple but effective DST model based on BERT. In addition to its simplicity, our approach also has a number of other advantages: (a) the number of parameters does not grow with the ontology size (b) the model can operate in situations where the domain ontology may change dynamically. Experimental results demonstrate that our BERT-based model outperforms previous methods by a large margin, achieving new state-of-the-art results on the standard WoZ 2.0 dataset. Finally, to make the model small and fast enough for resource-restricted systems, we apply the knowledge distillation method to compress our model. 
The final compressed model achieves comparable results with the original model while being 8x smaller and 7x faster.", "field": ["Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Dialogue State Tracking", "Knowledge Distillation"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Wizard-of-Oz"], "metric": ["Request", "Joint"], "title": "A Simple but Effective BERT Model for Dialog State Tracking on Resource-Limited Systems"} {"abstract": "Fine-grained visual classification (FGVC) is much more challenging than traditional classification tasks due to the inherently subtle intra-class object variations. Recent works mainly tackle this problem by focusing on how to locate the most discriminative parts, more complementary parts, and parts of various granularities. However, less effort has been placed to which granularities are the most discriminative and how to fuse information cross multi-granularity. In this work, we propose a novel framework for fine-grained visual classification to tackle these problems. In particular, we propose: (i) a progressive training strategy that effectively fuses features from different granularities, and (ii) a random jigsaw patch generator that encourages the network to learn features at specific granularities. We obtain state-of-the-art performances on several standard FGVC benchmark datasets, where the proposed method consistently outperforms existing methods or delivers competitive results. The code will be available at https://github.com/PRIS-CV/PMG-Progressive-Multi-Granularity-Training.", "field": ["Self-Supervised Learning"], "task": ["Fine-Grained Image Classification"], "method": ["Jigsaw"], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Fine-Grained Visual Classification via Progressive Multi-Granularity Training of Jigsaw Patches"} {"abstract": "Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. 
Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.", "field": ["Large Batch Optimization", "Stochastic Optimization"], "task": ["Image Classification", "Question Answering", "Stochastic Optimization"], "method": ["LARS", "Adam", "LAMB", "Nesterov Accelerated Gradient"], "dataset": ["SQuAD1.1", "ImageNet"], "metric": ["F1", "Top 1 Accuracy"], "title": "A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes"} {"abstract": "Knowledge graphs enable a wide variety of applications, including question\nanswering and information retrieval. Despite the great effort invested in their\ncreation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata)\nremain incomplete. We introduce Relational Graph Convolutional Networks\n(R-GCNs) and apply them to two standard knowledge base completion tasks: Link\nprediction (recovery of missing facts, i.e. subject-predicate-object triples)\nand entity classification (recovery of missing entity attributes). R-GCNs are\nrelated to a recent class of neural networks operating on graphs, and are\ndeveloped specifically to deal with the highly multi-relational data\ncharacteristic of realistic knowledge bases. We demonstrate the effectiveness\nof R-GCNs as a stand-alone model for entity classification. We further show\nthat factorization models for link prediction such as DistMult can be\nsignificantly improved by enriching them with an encoder model to accumulate\nevidence over multiple inference steps in the relational graph, demonstrating a\nlarge improvement of 29.8% on FB15k-237 over a decoder-only baseline.", "field": ["Graph Models"], "task": ["Graph Classification", "Information Retrieval", "Knowledge Base Completion", "Knowledge Graphs", "Link Prediction", "Node Classification"], "method": ["Relational Graph Convolution Network", "RGCN"], "dataset": ["MUTAG", "AIFB", "BGS", "AM"], "metric": ["Accuracy"], "title": "Modeling Relational Data with Graph Convolutional Networks"} {"abstract": "Datasets, Transforms and Models specific to Computer Vision", "field": ["Output Functions", "Regularization", "Learning Rate Schedules", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Miscellaneous Components"], "task": ["Image Classification"], "method": ["Depthwise Convolution", "Weight Decay", "Average Pooling", "Channel Shuffle", "ShuffleNet V2 Block", "1x1 Convolution", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Batch Normalization", "ShuffleNet v2", "Squeeze-and-Excitation Block", "Step Decay", "Sigmoid Activation", "Softmax", "Global Average Pooling", "Rectified Linear Units", "ShuffleNet V2 Downsampling Block"], "dataset": ["ImageNet"], "metric": ["Top 1 Accuracy"], "title": "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"} {"abstract": "Multi-Object Tracking (MOT) is a challenging task in the complex scene such\nas surveillance and autonomous driving. In this paper, we propose a novel\ntracklet processing method to cleave and re-connect tracklets on crowd or\nlong-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet\ngeneration utilizes object features extracted by CNN and RNN to create the\nhigh-confidence tracklet candidates in sparse scenario. 
Due to mis-tracking in\nthe generation process, the tracklets from different objects are split into\nseveral sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based\ntracklet re-connection method is applied to link the sub-tracklets which belong\nto the same object to form a whole trajectory. In addition, we extract the\ntracklet images from existing MOT datasets and propose a novel dataset to train\nour networks. The proposed dataset contains more than 95160 pedestrian images.\nIt has 793 different persons in it. On average, there are 120 images for each\nperson with positions and sizes. Experimental results demonstrate the\nadvantages of our model over the state-of-the-art methods on MOT16.", "field": ["Recurrent Neural Networks"], "task": ["Autonomous Driving", "Multi-Object Tracking", "Multiple Object Tracking", "Object Tracking"], "method": ["Gated Recurrent Unit", "GRU"], "dataset": ["MOT16"], "metric": ["MOTA"], "title": "Trajectory Factory: Tracklet Cleaving and Re-connection by Deep Siamese Bi-GRU for Multiple Object Tracking"} {"abstract": "Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only consider immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematical analysis on this issue and argue that the key factor compromising the performance significantly is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. A set of experiments on citation, co-authorship, and co-purchase datasets have confirmed our analysis and insights and demonstrated the superiority of our proposed methods.", "field": ["Convolutions"], "task": ["Graph Representation Learning", "Node Classification", "Representation Learning"], "method": ["Convolution"], "dataset": ["Coauthor CS", "Coauthor Physics", "AMZ Photo", "AMZ Computers", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Towards Deeper Graph Neural Networks"} {"abstract": "Acoustic novelty detection aims at identifying abnormal/novel acoustic signals which differ from the reference/normal data that the system was trained with. In this paper we present a novel unsupervised approach based on a denoising autoencoder. 
In our approach auditory spectral features are processed by a denoising autoencoder with bidirectional Long Short-Term Memory recurrent neural networks. We use the reconstruction error between the input and the output of the autoencoder as activation signal to detect novel events. The autoencoder is trained on a public database which contains recordings of typical in-home situations such as talking, watching television, playing and eating. The evaluation was performed on more than 260 different abnormal events. We compare results with state-of-the-art methods and we conclude that our novel approach significantly outperforms existing methods by achieving up to 93.4% F-Measure.", "field": ["Generative Models"], "task": ["Acoustic Novelty Detection", "Denoising"], "method": ["AutoEncoder", "Denoising Autoencoder"], "dataset": ["A3Lab PASCAL CHiME"], "metric": ["F1"], "title": "A novel approach for automatic acoustic novelty detection using a denoising autoencoder with bidirectional LSTM neural networks"} {"abstract": "Lip-reading aims to infer the speech content from the lip movement sequence and can be seen as a typical sequence-to-sequence (seq2seq) problem which translates the input image sequence of lip movements to the text sequence of the speech content. However, the traditional learning process of seq2seq models always suffers from two problems: the exposure bias resulted from the strategy of \"teacher-forcing\", and the inconsistency between the discriminative optimization target (usually the cross-entropy loss) and the final evaluation metric (usually the character/word error rate). In this paper, we propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems. On the one hand, we introduce the evaluation metric (refers to the character error rate in this paper) as a form of reward to optimize the model together with the original discriminative target. On the other hand, inspired by the local perception property of convolutional operation, we perform a pseudo-convolutional operation on the reward and loss dimension, so as to take more context around each time step into account to generate a robust reward and loss for the whole optimization. Finally, we perform a thorough comparison and evaluation on both the word-level and sentence-level benchmarks. The results show a significant improvement over other related methods, and report either a new state-of-the-art performance or a competitive accuracy on all these challenging benchmarks, which clearly proves the advantages of our approach.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Lipreading", "Lip Reading"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["Lip Reading in the Wild", "LRW-1000"], "metric": ["Top-1 Accuracy"], "title": "Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading"} {"abstract": "We propose a novel deep network structure called \"Network In Network\" (NIN)\nto enhance model discriminability for local patches within the receptive field.\nThe conventional convolutional layer uses linear filters followed by a\nnonlinear activation function to scan the input. Instead, we build micro neural\nnetworks with more complex structures to abstract the data within the receptive\nfield. We instantiate the micro neural network with a multilayer perceptron,\nwhich is a potent function approximator. 
The feature maps are obtained by\nsliding the micro networks over the input in a similar manner as CNN; they are\nthen fed into the next layer. Deep NIN can be implemented by stacking multiple\nof the above described structure. With enhanced local modeling via the micro\nnetwork, we are able to utilize global average pooling over feature maps in the\nclassification layer, which is easier to interpret and less prone to\noverfitting than traditional fully connected layers. We demonstrated the\nstate-of-the-art classification performances with NIN on CIFAR-10 and\nCIFAR-100, and reasonable performances on SVHN and MNIST datasets.", "field": ["Convolutions", "Pooling Operations"], "task": ["Image Classification"], "method": ["1x1 Convolution", "Global Average Pooling", "Average Pooling"], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Network In Network"} {"abstract": "In this brief technical report we introduce the CINIC-10 dataset as a plug-in\nextended alternative for CIFAR-10. It was compiled by combining CIFAR-10 with\nimages selected and downsampled from the ImageNet database. We present the\napproach to compiling the dataset, illustrate the example images for different\nclasses, give pixel distributions for each part of the repository, and give\nsome standard benchmarks for well known models. Details for download, usage,\nand compilation can be found in the associated github repository.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["ResNet", "ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CINIC-10"], "metric": ["Accuracy"], "title": "CINIC-10 is not ImageNet or CIFAR-10"} {"abstract": "It is well known that featuremap attention and multi-path representation are important for visual recognition. In this paper, we present a modularized architecture, which applies the channel-wise attention on different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations. Our design results in a simple and unified computation block, which can be parameterized using only a few variables. Our model, named ResNeSt, outperforms EfficientNet in accuracy and latency trade-off on image classification. In addition, ResNeSt has achieved superior transfer learning results on several public benchmarks serving as the backbone, and has been adopted by the winning entries of COCO-LVIS challenge. 
The source code for complete system and pretrained models are publicly available.", "field": ["Feature Extractors", "Normalization", "Attention Mechanisms", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Image Models", "Semantic Segmentation Models", "Stochastic Optimization", "Recurrent Neural Networks", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Semantic Segmentation Modules", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Skip Connections", "Image Model Blocks"], "task": ["Image Classification", "Instance Segmentation", "Object Detection", "Panoptic Segmentation", "Semantic Segmentation", "Transfer Learning"], "method": ["Depthwise Convolution", "Weight Decay", "Dilated Convolution", "Cosine Annealing", "Average Pooling", "EfficientNet", "RMSProp", "Cutout", "Long Short-Term Memory", "Mixup", "Tanh Activation", "1x1 Convolution", "RoIAlign", "ResNeSt", "Channel-wise Soft Attention", "Random Horizontal Flip", "AutoAugment", "Convolution", "ReLU", "Residual Connection", "FPN", "Dense Connections", "Deformable Convolution", "Swish", "Image Scale Augmentation", "Random Resized Crop", "Batch Normalization", "Label Smoothing", "ColorJitter", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Kaiming Initialization", "Split Attention", "Atrous Spatial Pyramid Pooling", "Sigmoid Activation", "DropBlock", "Color Jitter", "SGD with Momentum", "Inverted Residual Block", "Softmax", "Feature Pyramid Network", "Linear Warmup With Cosine Annealing", "DeepLabv3", "LSTM", "ASPP", "Depthwise Separable Convolution", "Dropout", "Global Average Pooling", "Rectified Linear Units", "Spatial Pyramid Pooling"], "dataset": ["COCO panoptic", "Cityscapes val", "ADE20K", "ADE20K val", "COCO minival", "COCO test-dev", "PASCAL Context", "ImageNet", "Cityscapes test"], "metric": ["Validation mIoU", "APM", "Top 1 Accuracy", "mIoU", "Mean IoU (class)", "box AP", "PQ", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "ResNeSt: Split-Attention Networks"} {"abstract": "Detecting human bodies in highly crowded scenes is a challenging problem. Two main reasons result in such a problem: 1). weak visual cues of heavily occluded instances can hardly provide sufficient information for accurate detection; 2). heavily occluded instances are easier to be suppressed by Non-Maximum-Suppression (NMS). To address these two issues, we introduce a variant of two-stage detectors called PS-RCNN. PS-RCNN first detects slightly/none occluded objects by an R-CNN module (referred as P-RCNN), and then suppress the detected instances by human-shaped masks so that the features of heavily occluded instances can stand out. After that, PS-RCNN utilizes another R-CNN module specialized in heavily occluded human detection (referred as S-RCNN) to detect the rest missed objects by P-RCNN. Final results are the ensemble of the outputs from these two R-CNNs. Moreover, we introduce a High Resolution RoI Align (HRRA) module to retain as much of fine-grained features of visible parts of the heavily occluded humans as possible. Our PS-RCNN significantly improves recall and AP by 4.49% and 2.92% respectively on CrowdHuman, compared to the baseline. 
Similar improvements on Widerperson are also achieved by the PS-RCNN.", "field": ["Convolutions", "Pooling Operations", "Object Detection Models", "Non-Parametric Classification"], "task": ["Human Detection", "Object Detection"], "method": ["R-CNN", "Support Vector Machine", "SVM", "Convolution", "Max Pooling"], "dataset": ["CrowdHuman (full body)", "WiderPerson"], "metric": ["AP"], "title": "PS-RCNN: Detecting Secondary Human Instances in a Crowd via Primary Object Suppression"} {"abstract": "PixelCNN achieves state-of-the-art results in density estimation for natural\nimages. Although training is fast, inference is costly, requiring one network\nevaluation per pixel; O(N) for N pixels. This can be sped up by caching\nactivations, but still involves generating each pixel sequentially. In this\nwork, we propose a parallelized PixelCNN that allows more efficient inference\nby modeling certain pixel groups as conditionally independent. Our new PixelCNN\nmodel achieves competitive density estimation and orders of magnitude speedup -\nO(log N) sampling instead of O(N) - enabling the practical generation of\n512x512 images. We evaluate the model on class-conditional image generation,\ntext-to-image synthesis, and action-conditional video generation, showing that\nour model achieves the best results among non-pixel-autoregressive density\nmodels that allow efficient sampling.", "field": ["Generative Models"], "task": ["Conditional Image Generation", "Density Estimation", "Image Compression", "Image Generation", "Video Generation"], "method": ["PixelCNN"], "dataset": ["ImageNet 64x64", "ImageNet32"], "metric": ["bpsp", "Bits per dim"], "title": "Parallel Multiscale Autoregressive Density Estimation"} {"abstract": "Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or using traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator could suffice to obtain the distribution based discriminator. Specifically, we propose a curriculum based dropout discriminator that gradually increases the variance of the sample based distribution and the corresponding reverse gradients are used to align the source and target feature representations. The detailed results and thorough ablation analysis show that our model outperforms state-of-the-art results.", "field": ["Regularization"], "task": ["Domain Adaptation"], "method": ["Dropout"], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Curriculum based Dropout Discriminator for Domain Adaptation"} {"abstract": "We present a solution for the goal of extracting a video from a single motion\nblurred image to sequentially reconstruct the clear views of a scene as beheld\nby the camera during the time of exposure. We first learn motion representation\nfrom sharp videos in an unsupervised manner through training of a convolutional\nrecurrent video autoencoder network that performs a surrogate task of video\nreconstruction. Once trained, it is employed for guided training of a motion\nencoder for blurred images. 
This network extracts embedded motion information\nfrom the blurred image to generate a sharp video in conjunction with the\ntrained recurrent video decoder. As an intermediate step, we also design an\nefficient architecture that enables real-time single image deblurring and\noutperforms competing methods across all factors: accuracy, speed, and\ncompactness. Experiments on real scenes and standard datasets demonstrate the\nsuperiority of our framework over the state-of-the-art and its ability to\ngenerate a plausible sequence of temporally consistent sharp frames.", "field": ["Generative Models"], "task": ["Deblurring", "Video Reconstruction"], "method": ["AutoEncoder"], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Bringing Alive Blurred Moments"} {"abstract": "We consider the problem of anomaly detection in images, and present a new\ndetection technique. Given a sample of images, all known to belong to a\n\"normal\" class (e.g., dogs), we show how to train a deep neural model that can\ndetect out-of-distribution images (i.e., non-dog objects). The main idea behind\nour scheme is to train a multi-class model to discriminate between dozens of\ngeometric transformations applied on all the given images. The auxiliary\nexpertise learned by the model generates feature detectors that effectively\nidentify, at test time, anomalous images based on the softmax activation\nstatistics of the model when applied on transformed images. We present\nextensive experiments using the proposed detector, which indicate that our\nalgorithm improves state-of-the-art methods by a wide margin.", "field": ["Output Functions"], "task": ["Anomaly Detection"], "method": ["Softmax"], "dataset": ["One-class CIFAR-100", "One-class CIFAR-10"], "metric": ["AUROC"], "title": "Deep Anomaly Detection Using Geometric Transformations"} {"abstract": "We tackle the problem of one-shot instance segmentation: Given an example image of a novel, previously unknown object category, find and segment all objects of this category within a complex scene. To address this challenging new task, we propose Siamese Mask R-CNN. It extends Mask R-CNN by a Siamese backbone encoding both reference image and scene, allowing it to target detection and segmentation towards the reference category. We demonstrate empirical results on MS Coco highlighting challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories works very well, targeting the detection network towards the reference category appears to be more difficult. Our work provides a first strong baseline for one-shot instance segmentation and will hopefully inspire further research into more powerful and flexible scene analysis algorithms. Code is available at: https://github.com/bethgelab/siamese-mask-rcnn", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions", "Instance Segmentation Models"], "task": ["Few-Shot Learning", "Few-Shot Object Detection", "Instance Segmentation", "Object Detection", "One-Shot Instance Segmentation", "One-Shot Learning", "One-Shot Object Detection"], "method": ["Mask R-CNN", "Softmax", "RoIAlign", "Convolution"], "dataset": ["COCO"], "metric": ["AP 0.5"], "title": "One-Shot Instance Segmentation"} {"abstract": "The Visual Dialogue task requires an agent to engage in a conversation about\nan image with a human. 
It represents an extension of the Visual Question\nAnswering task in that the agent needs to answer a question about an image, but\nit needs to do so in light of the previous dialogue that has taken place. The\nkey challenge in Visual Dialogue is thus maintaining a consistent, and natural\ndialogue while continuing to answer questions correctly. We present a novel\napproach that combines Reinforcement Learning and Generative Adversarial\nNetworks (GANs) to generate more human-like responses to questions. The GAN\nhelps overcome the relative paucity of training data, and the tendency of the\ntypical MLE-based approach to generate overly terse answers. Critically, the\nGAN is tightly integrated into the attention mechanism that generates\nhuman-interpretable reasons for each answer. This means that the discriminative\nmodel of the GAN has the task of assessing whether a candidate answer is\ngenerated by a human or not, given the provided reason. This is significant\nbecause it drives the generative model to produce high quality answers that are\nwell supported by the associated reasoning. The method also generates the\nstate-of-the-art results on the primary benchmark.", "field": ["Generative Models", "Convolutions"], "task": ["Question Answering", "Visual Dialog", "Visual Question Answering"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["VisDial v0.9 val"], "metric": ["R@10", "R@5", "Mean Rank", "MRR", "R@1"], "title": "Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning"} {"abstract": "Knowledge graph embedding is an important task and it will benefit lots of downstream applications. Currently, deep neural networks based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the original information forgotten issue and vanishing/exploding gradient issue, it uses the residual learning method. Third, it has simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of evaluation metrics. The source codes of our model could be found at https://github.com/neukg/AcrE.", "field": ["Convolutions"], "task": ["Graph Embedding", "Knowledge Graph Embedding"], "method": ["Convolution"], "dataset": ["FB15k"], "metric": ["MRR"], "title": "Knowledge Graph Embedding with Atrous Convolution and Residual Learning"} {"abstract": "The recent advances in deep neural networks have led to effective\nvision-based reinforcement learning methods that have been employed to obtain\nhuman-level controllers in Atari 2600 games from pixel data. Atari 2600 games,\nhowever, do not resemble real-world tasks since they involve non-realistic 2D\nenvironments and the third-person perspective. Here, we propose a novel\ntest-bed platform for reinforcement learning research from raw visual\ninformation which employs the first-person perspective in a semi-realistic 3D\nworld. 
The software, called ViZDoom, is based on the classical first-person\nshooter video game, Doom. It allows developing bots that play the game using\nthe screen buffer. ViZDoom is lightweight, fast, and highly customizable via a\nconvenient mechanism of user scenarios. In the experimental part, we test the\nenvironment by trying to learn bots for two scenarios: a basic move-and-shoot\ntask and a more complex maze-navigation problem. Using convolutional deep\nneural networks with Q-learning and experience replay, for both scenarios, we\nwere able to train competent bots, which exhibit human-like behaviors. The\nresults confirm the utility of ViZDoom as an AI research platform and imply\nthat visual reinforcement learning in 3D realistic first-person perspective\nenvironments is feasible.", "field": ["Off-Policy TD Control"], "task": ["Atari Games", "FPS Games", "Game of Doom", "Q-Learning"], "method": ["Q-Learning"], "dataset": ["ViZDoom Basic Scenario"], "metric": ["Average Score"], "title": "ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement Learning"} {"abstract": "Earlier work demonstrates the promise of deep-learning-based approaches for\npoint cloud segmentation; however, these approaches need to be improved to be\npractically useful. To this end, we introduce a new model SqueezeSegV2 that is\nmore robust to dropout noise in LiDAR point clouds. With improved model\nstructure, training loss, batch normalization and additional input channel,\nSqueezeSegV2 achieves significant accuracy improvement when trained on real\ndata. Training models for point cloud segmentation requires large amounts of\nlabeled point-cloud data, which is expensive to obtain. To sidestep the cost of\ncollection and annotation, simulators such as GTA-V can be used to create\nunlimited amounts of labeled, synthetic data. However, due to domain shift,\nmodels trained on synthetic data often do not generalize well to the real\nworld. We address this problem with a domain-adaptation training pipeline\nconsisting of three major components: 1) learned intensity rendering, 2)\ngeodesic correlation alignment, and 3) progressive domain calibration. When\ntrained on real data, our new model exhibits segmentation accuracy improvements\nof 6.0-8.6% over the original SqueezeSeg. When training our new model on\nsynthetic data using the proposed domain adaptation pipeline, we nearly double\ntest accuracy on real-world data, from 29.0% to 57.4%. Our source code and\nsynthetic dataset will be open-sourced.", "field": ["Regularization", "Normalization"], "task": ["3D Semantic Segmentation", "Domain Adaptation", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": ["Dropout", "Batch Normalization"], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud"} {"abstract": "Few-shot learning algorithms aim to learn model parameters capable of adapting to unseen classes with the help of only a few labeled examples. A recent regularization technique - Manifold Mixup focuses on learning a general-purpose representation, robust to small changes in the data distribution. Since the goal of few-shot learning is closely linked to robust representation learning, we study Manifold Mixup in this problem setting. Self-supervised learning is another technique that learns semantically meaningful features, using only the inherent structure of the data. 
This work investigates the role of learning relevant feature manifold for few-shot tasks using self-supervision and regularization techniques. We observe that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance. We show that our proposed method S2M2 beats the current state-of-the-art accuracy on standard few-shot learning datasets like CIFAR-FS, CUB, mini-ImageNet and tiered-ImageNet by 3-8 %. Through extensive experimentation, we show that the features learned using our approach generalize to complex few-shot evaluation tasks, cross-domain scenarios and are robust against slight changes to data distribution.", "field": ["Image Data Augmentation", "Regularization"], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Representation Learning", "Self-Supervised Learning"], "method": ["Manifold Mixup", "Mixup"], "dataset": ["CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Charting the Right Manifold: Manifold Mixup for Few-shot Learning"} {"abstract": "Recent work has shown that depth estimation from a stereo pair of images can\nbe formulated as a supervised learning task to be resolved with convolutional\nneural networks (CNNs). However, current architectures rely on patch-based\nSiamese networks, lacking the means to exploit context information for finding\ncorrespondence in illposed regions. To tackle this problem, we propose PSMNet,\na pyramid stereo matching network consisting of two main modules: spatial\npyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage\nof the capacity of global context information by aggregating context in\ndifferent scales and locations to form a cost volume. The 3D CNN learns to\nregularize cost volume using stacked multiple hourglass networks in conjunction\nwith intermediate supervision. The proposed approach was evaluated on several\nbenchmark datasets. Our method ranked first in the KITTI 2012 and 2015\nleaderboards before March 18, 2018. The codes of PSMNet are available at:\nhttps://github.com/JiaRenChang/PSMNet.", "field": ["Semantic Segmentation Modules", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations"], "task": ["Depth Estimation", "Stereo Matching", "Stereo Matching Hand"], "method": ["Average Pooling", "Batch Normalization", "Convolution", "ReLU", "Spatial Pyramid Pooling", "Rectified Linear Units", "Pyramid Pooling Module"], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "Pyramid Stereo Matching Network"} {"abstract": "A very deep convolutional neural network (CNN) has recently achieved great\nsuccess for image super-resolution (SR) and offered hierarchical features as\nwell. However, most deep CNN based SR models do not make full use of the\nhierarchical features from the original low-resolution (LR) images, thereby\nachieving relatively-low performance. In this paper, we propose a novel\nresidual dense network (RDN) to address this problem in image SR. We fully\nexploit the hierarchical features from all the convolutional layers.\nSpecifically, we propose residual dense block (RDB) to extract abundant local\nfeatures via dense connected convolutional layers. 
RDB further allows direct\nconnections from the state of preceding RDB to all the layers of current RDB,\nleading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is\nthen used to adaptively learn more effective features from preceding and\ncurrent local features and stabilizes the training of wider network. After\nfully obtaining dense local features, we use global feature fusion to jointly\nand adaptively learn global hierarchical features in a holistic way. Extensive\nexperiments on benchmark datasets with different degradation models show that\nour RDN achieves favorable performance against state-of-the-art methods.", "field": ["Activation Functions", "Normalization", "Convolutions", "Skip Connections", "Image Model Blocks"], "task": ["Color Image Denoising", "Image Super-Resolution", "Super-Resolution"], "method": ["Dense Block", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "ReLU", "Rectified Linear Units"], "dataset": ["Set14 - 4x upscaling", "CBSD68 sigma50", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Residual Dense Network for Image Super-Resolution"} {"abstract": "Point clouds contain rich spatial information, which provides complementary cues for gesture recognition. In this paper, we formulate gesture recognition as an irregular sequence recognition problem and aim to capture long-term spatial correlations across point cloud sequences. A novel and effective PointLSTM is proposed to propagate information from past to future while preserving the spatial structure. The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer. This method can be integrated into many other sequence learning approaches. In the task of gesture recognition, the proposed PointLSTM achieves state-of-the-art results on two challenging datasets (NVGesture and SHREC'17) and outperforms previous skeleton-based methods. To show its advantages in generalization, we evaluate our method on MSR Action3D dataset, and it produces competitive results with previous skeleton-based methods.\r", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Gesture Recognition", "Hand Gesture Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NVGesture", "SHREC 2017 track on 3D Hand Gesture Recognition", "SHREC 2017"], "metric": ["14 gestures accuracy", "28 gestures accuracy", "Accuracy"], "title": "An Efficient PointLSTM for Point Clouds Based Gesture Recognition"} {"abstract": "Most existing methods of semantic segmentation still suffer from two aspects\nof challenges: intra-class inconsistency and inter-class indistinction. To\ntackle these two problems, we propose a Discriminative Feature Network (DFN),\nwhich contains two sub-networks: Smooth Network and Border Network.\nSpecifically, to handle the intra-class inconsistency problem, we specially\ndesign a Smooth Network with Channel Attention Block and global average pooling\nto select the more discriminative features. Furthermore, we propose a Border\nNetwork to make the bilateral features of boundary distinguishable with deep\nsemantic boundary supervision. 
Based on our proposed DFN, we achieve\nstate-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean\nIOU on Cityscapes dataset.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)", "mIoU"], "title": "Learning a Discriminative Feature Network for Semantic Segmentation"} {"abstract": "Latest insights from biology show that intelligence does not only emerge from the connections between the neurons, but that individual neurons shoulder more computational responsibility. Current Neural Network architecture design and search are biased on fixed activation functions. Using more advanced learnable activation functions provide Neural Networks with higher learning capacity. However, general guidance for building such networks is still missing. In this work, we first explain why rationals offer an optimal choice for activation functions. We then show that they are closed under residual connections, and inspired by recurrence for residual networks we derive a self-regularized version of Rationals: Recurrent Rationals. We demonstrate that (Recurrent) Rational Networks lead to high performance improvements on Image Classification and Deep Reinforcement Learning.", "field": ["Q-Learning Networks", "Initialization", "Convolutional Neural Networks", "Off-Policy TD Control", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Replay Memory", "Skip Connections", "Skip Connection Blocks"], "task": ["Atari Games", "General Reinforcement Learning", "Image Classification"], "method": ["Average Pooling", "Tanh Activation", "1x1 Convolution", "Softplus", "ResNet", "Mish", "Convolution", "Double DQN", "ReLU", "Residual Connection", "Experience Replay", "DQN", "Dense Connections", "Double Q-learning", "Q-Learning", "Rational Activation Function", "Rational", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Deep Q-Network", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Atari 2600 Kangaroo", "Atari 2600 Video Pinball", "Atari 2600 Enduro", "Atari 2600 Seaquest", "Atari 2600 Asterix", "Atari 2600 Tennis", "Atari 2600 Time Pilot", "Atari 2600 Breakout", "Atari 2600 James Bond", "Atari 2600 Tutankham", "Atari 2600 Space Invaders", "Atari 2600 Battle Zone", "Atari 2600 Pong", "Atari 2600 Q*Bert", "Atari 2600 Skiing"], "metric": ["Score"], "title": "Recurrent Rational Networks"} {"abstract": "DeepPrior is a simple approach based on Deep Learning that predicts the joint\n3D locations of a hand given a depth map. Since its publication early 2015, it\nhas been outperformed by several impressive works. 
Here we show that with\nsimple improvements: adding ResNet layers, data augmentation, and better\ninitial hand localization, we achieve better or similar performance than more\nsophisticated recent methods on the three main benchmarks (NYU, ICVL, MSRA)\nwhile keeping the simplicity of the original method. Our new implementation is\navailable at https://github.com/moberweger/deep-prior-pp .", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["3D Hand Pose Estimation", "Data Augmentation", "Hand Pose Estimation", "Pose Estimation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ICVL Hands", "NYU Hands", "MSRA Hands"], "metric": ["Average 3D Error"], "title": "DeepPrior++: Improving Fast and Accurate 3D Hand Pose Estimation"} {"abstract": "We present a novel deep Recurrent Neural Network (RNN) model for acoustic\nmodelling in Automatic Speech Recognition (ASR). We term our contribution as a\nTC-DNN-BLSTM-DNN model, the model combines a Deep Neural Network (DNN) with\nTime Convolution (TC), followed by a Bidirectional Long Short-Term Memory\n(BLSTM), and a final DNN. The first DNN acts as a feature processor to our\nmodel, the BLSTM then generates a context from the sequence acoustic signal,\nand the final DNN takes the context and models the posterior probabilities of\nthe acoustic states. We achieve a 3.47 WER on the Wall Street Journal (WSJ)\neval92 task or more than 8% relative improvement over the baseline DNN models.", "field": ["Convolutions"], "task": ["Acoustic Modelling", "Speech Recognition"], "method": ["Convolution"], "dataset": ["WSJ eval92"], "metric": ["Word Error Rate (WER)"], "title": "Deep Recurrent Neural Networks for Acoustic Modelling"} {"abstract": "In this paper we establish rigorous benchmarks for image classifier\nrobustness. Our first benchmark, ImageNet-C, standardizes and expands the\ncorruption robustness topic, while showing which classifiers are preferable in\nsafety-critical applications. Then we propose a new dataset called ImageNet-P\nwhich enables researchers to benchmark a classifier's robustness to common\nperturbations. Unlike recent robustness research, this benchmark evaluates\nperformance on common corruptions and perturbations not worst-case adversarial\nperturbations. We find that there are negligible changes in relative corruption\nrobustness from AlexNet classifiers to ResNet classifiers. Afterward we\ndiscover ways to enhance corruption and perturbation robustness. We even find\nthat a bypassed adversarial defense provides substantial common perturbation\nrobustness. 
Together our benchmarks may aid future work toward networks that\nrobustly generalize.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Adversarial Defense", "Domain Generalization"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Grouped Convolution", "Batch Normalization", "Rectified Linear Units", "Residual Network", "AlexNet", "Kaiming Initialization", "Softmax", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Local Response Normalization", "Max Pooling"], "dataset": ["ImageNet-C"], "metric": ["mean Corruption Error (mCE)"], "title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations"} {"abstract": "Deep convolutional neural networks (CNNs) have delivered superior performance\nin many computer vision tasks. In this paper, we propose a novel deep fully\nconvolutional network model for accurate salient object detection. The key\ncontribution of this work is to learn deep uncertain convolutional features\n(UCF), which encourage the robustness and accuracy of saliency detection. We\nachieve this via introducing a reformulated dropout (R-dropout) after specific\nconvolutional layers to construct an uncertain ensemble of internal feature\nunits. In addition, we propose an effective hybrid upsampling method to reduce\nthe checkerboard artifacts of deconvolution operators in our decoder network.\nThe proposed methods can also be applied to other deep convolutional networks.\nCompared with existing saliency detection methods, the proposed UCF model is\nable to incorporate uncertainties for more accurate object boundary inference.\nExtensive experiments demonstrate that our proposed saliency model performs\nfavorably against state-of-the-art approaches. The uncertain feature learning\nmechanism as well as the upsampling method can significantly improve\nperformance on other pixel-wise vision tasks.", "field": ["Regularization"], "task": ["Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": ["Dropout"], "dataset": ["DUT-OMRON", "DUTS-TE"], "metric": ["MAE", "F-measure"], "title": "Learning Uncertain Convolutional Features for Accurate Saliency Detection"} {"abstract": "ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is one of the most authoritative academic competitions in the field of Computer Vision (CV) in recent years. But applying ILSVRC's annual champion directly to fine-grained visual categorization (FGVC) tasks does not achieve good performance. To FGVC tasks, the small inter-class variations and the large intra-class variations make it a challenging problem. Our attention object location module (AOLM) can predict the position of the object and attention part proposal module (APPM) can propose informative part regions without the need of bounding-box or part annotations. The obtained object images not only contain almost the entire structure of the object, but also contains more details, part images have many different scales and more fine-grained features, and the raw images contain the complete object. The three kinds of training images are supervised by our multi-branch network. 
Therefore, our multi-branch and multi-scale learning network (MMAL-Net) has good classification ability and robustness for images of different scales. Our approach can be trained end-to-end, while providing a short inference time. Comprehensive experiments demonstrate that our approach achieves state-of-the-art results on the CUB-200-2011, FGVC-Aircraft and Stanford Cars datasets. Our code will be available at https://github.com/ZF1044404254/MMAL-Net", "field": ["Proposal Filtering"], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition", "Fine-Grained Visual Categorization", "Object Recognition"], "method": ["Adaptive NMS"], "dataset": ["Stanford Cars", "CUB-200-2011", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization"} {"abstract": "Graph representation learning has recently been applied to a broad spectrum of problems ranging from computer graphics and chemistry to high energy physics and social media. The popularity of graph neural networks has sparked interest, both in academia and in industry, in developing methods that scale to very large graphs such as Facebook or Twitter social networks. In most of these approaches, the computational cost is alleviated by a sampling strategy retaining a subset of node neighbors or subgraphs at training time. In this paper we propose a new, efficient and scalable graph deep learning architecture which sidesteps the need for graph sampling by using graph convolutional filters of different sizes that are amenable to efficient precomputation, allowing extremely fast training and inference. Our architecture allows using different local graph operators (e.g. motif-induced adjacency matrices or Personalized Page Rank diffusion matrix) to best suit the task at hand. We conduct an extensive experimental evaluation on various open benchmarks and show that our approach is competitive with other state-of-the-art architectures, while requiring a fraction of the training and inference time. Moreover, we obtain state-of-the-art results on ogbn-papers100M, the largest public graph dataset, with over 110 million nodes and 1.5 billion edges.", "field": ["Image Model Blocks", "Convolutions", "Pooling Operations"], "task": ["Graph Representation Learning", "Graph Sampling", "Node Classification", "Representation Learning"], "method": ["Inception Module", "1x1 Convolution", "Max Pooling", "Convolution"], "dataset": ["Coauthor CS", "PPI", "Reddit", "AMZ Photo", "AMZ Comp"], "metric": ["F1", "Accuracy"], "title": "SIGN: Scalable Inception Graph Neural Networks"} {"abstract": "Knowledge graphs are important resources for many artificial intelligence tasks but often suffer from incompleteness. In this work, we propose to use pre-trained language models for knowledge graph completion. We treat triples in knowledge graphs as textual sequences and propose a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples. Our method takes entity and relation descriptions of a triple as input and computes the scoring function of the triple with the KG-BERT language model. 
Experimental results on multiple benchmark knowledge graphs show that our method can achieve state-of-the-art performance in triple classification, link prediction and relation prediction tasks.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Knowledge Graph Completion", "Knowledge Graphs", "Language Modelling", "Link Prediction", "Triple Classification"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@10", "MR"], "title": "KG-BERT: BERT for Knowledge Graph Completion"} {"abstract": "In this paper, we propose a novel generative model named Stacked Generative\nAdversarial Networks (SGAN), which is trained to invert the hierarchical\nrepresentations of a bottom-up discriminative network. Our model consists of a\ntop-down stack of GANs, each learned to generate lower-level representations\nconditioned on higher-level representations. A representation discriminator is\nintroduced at each feature hierarchy to encourage the representation manifold\nof the generator to align with that of the bottom-up discriminative network,\nleveraging the powerful discriminative representations to guide the generative\nmodel. In addition, we introduce a conditional loss that encourages the use of\nconditional information from the layer above, and a novel entropy loss that\nmaximizes a variational lower bound on the conditional entropy of generator\noutputs. We first train each stack independently, and then train the whole\nmodel end-to-end. Unlike the original GAN that uses a single noise vector to\nrepresent all the variations, our SGAN decomposes variations into multiple\nlevels and gradually resolves uncertainties in the top-down generative process.\nBased on visual inspection, Inception scores and visual Turing test, we\ndemonstrate that SGAN is able to generate images of much higher quality than\nGANs without stacking.", "field": ["Generative Models", "Convolutions"], "task": ["Conditional Image Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["CIFAR-10"], "metric": ["Inception score"], "title": "Stacked Generative Adversarial Networks"} {"abstract": "This paper proposes hybrid semi-Markov conditional random fields (SCRFs) for\nneural sequence labeling in natural language processing. Based on conventional\nconditional random fields (CRFs), SCRFs have been designed for the tasks of\nassigning labels to segments by extracting features from and describing\ntransitions between segments instead of words. In this paper, we improve the\nexisting SCRF methods by employing word-level and segment-level information\nsimultaneously. First, word-level labels are utilized to derive the segment\nscores in SCRFs. Second, a CRF output layer and an SCRF output layer are\nintegrated into an unified neural network and trained jointly. 
Experimental\nresults on CoNLL 2003 named entity recognition (NER) shared task show that our\nmodel achieves state-of-the-art performance when no external knowledge is used.", "field": ["Structured Prediction"], "task": ["Named Entity Recognition"], "method": ["Conditional Random Field", "CRF"], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Hybrid semi-Markov CRF for Neural Sequence Labeling"} {"abstract": "We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall training time and, surprisingly, gives a large improvement in perplexity. We then show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens (when generating sequences that are larger than the maximal length that the transformer can handle at once). Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. By combining these techniques, we increase training speed by 65%, make generation nine times faster, and substantially improve perplexity on WikiText-103, without adding any parameters.", "field": ["Activation Functions", "Normalization"], "task": ["Language Modelling", "Word Embeddings"], "method": ["GELU", "Layer Normalization", "Gaussian Linear Error Units"], "dataset": ["WikiText-103"], "metric": ["Number of params", "Validation perplexity", "Test perplexity"], "title": "Shortformer: Better Language Modeling using Shorter Inputs"} {"abstract": "Sample efficiency remains a fundamental issue of reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector quantized-variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which only allows 100K interactions with the real environment. We apply our method on 36 Atari environments and show that we reach comparable performance to their SimPLe algorithm, while our model is significantly smaller.", "field": ["Policy Gradient Methods", "Regularization", "Recurrent Neural Networks", "Activation Functions", "Generative Models"], "task": ["Atari Games"], "method": ["Long Short-Term Memory", "Entropy Regularization", "Tanh Activation", "AutoEncoder", "LSTM", "PPO", "Proximal Policy Optimization", "Sigmoid Activation"], "dataset": ["Atari 2600 Seaquest", "Atari 2600 Breakout", "Atari 2600 Freeway", "Atari 2600 Bank Heist", "Atari 2600 Crazy Climber", "Atari 2600 Pong"], "metric": ["Score"], "title": "Smaller World Models for Reinforcement Learning"} {"abstract": "Acquiring sufficient ground-truth supervision to train deep visual models has been a bottleneck over the years due to the data-hungry nature of deep learning. This is exacerbated in some structured prediction tasks, such as semantic segmentation, which requires pixel-level annotations. This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation. 
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths, which can be used for training more accurate segmentation models. In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes, and the underlying relations between a pair of images are characterized by an efficient co-attention mechanism. Moreover, in order to prevent the model from paying excessive attention to common semantics only, we further propose a graph dropout layer, encouraging the model to learn more accurate and complete object responses. The whole network is end-to-end trainable by iterative message passing, which propagates interaction cues over the images to progressively improve the performance. We conduct experiments on the popular PASCAL VOC 2012 and COCO benchmarks, and our model yields state-of-the-art performance. Our code is available at: https://github.com/Lixy1997/Group-WSSS.", "field": ["Regularization"], "task": ["Semantic Segmentation", "Structured Prediction", "Weakly-Supervised Semantic Segmentation"], "method": ["Dropout"], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation"} {"abstract": "We propose the first direct end-to-end multi-person pose estimation framework, termed DirectPose. Inspired by recent anchor-free object detectors, which directly regress the two corners of target bounding-boxes, the proposed framework directly predicts instance-aware keypoints for all the instances from a raw input image, eliminating the need for heuristic grouping in bottom-up methods or bounding-box detection and RoI operations in top-down ones. We also propose a novel Keypoint Alignment (KPAlign) mechanism, which overcomes the main difficulty: lack of the alignment between the convolutional features and predictions in this end-to-end framework. KPAlign improves the framework's performance by a large margin while still keeping the framework end-to-end trainable. With the only postprocessing non-maximum suppression (NMS), our proposed framework can detect multi-person keypoints with or without bounding-boxes in a single shot. Experiments demonstrate that the end-to-end paradigm can achieve competitive or better performance than previous strong baselines, in both bottom-up and top-down methods. We hope that our end-to-end approach can provide a new perspective for the human pose estimation task.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Multi-Person Pose Estimation", "Pose Estimation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "AP", "APL", "AP50"], "title": "DirectPose: Direct End-to-End Multi-Person Pose Estimation"} {"abstract": "Recent work has achieved great success in utilizing global contextual information for semantic segmentation, including increasing the receptive field and aggregating pyramid feature representations. 
In this paper, we go beyond global context and explore the fine-grained representation using co-occurrent features by introducing the Co-occurrent Feature Model, which predicts the distribution of co-occurrent features for a given target. To leverage the semantic context in the co-occurrent features, we build an Aggregated Co-occurrent Feature (ACF) Module by aggregating the probability of the co-occurrent feature with the co-occurrent context. The ACF Module learns a fine-grained, spatially invariant representation to capture co-occurrent context information across the scene. Our approach significantly improves the segmentation results using FCN and achieves superior performance of 54.0% mIoU on Pascal Context, 87.2% mIoU on Pascal VOC 2012 and 44.89% mIoU on ADE20K datasets. The source code and complete system will be publicly available upon publication.", "field": ["Initialization", "Semantic Segmentation Models", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Fully Convolutional Network", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "FCN"], "dataset": ["ADE20K", "PASCAL Context", "PASCAL VOC 2012 test", "ADE20K val"], "metric": ["Mean IoU", "Validation mIoU", "mIoU"], "title": "Co-Occurrent Features in Semantic Segmentation"} {"abstract": "We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a\nnatural generalization of convolutional neural networks that reduces sample\ncomplexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of\nlayer that enjoys a substantially higher degree of weight sharing than regular\nconvolution layers. G-convolutions increase the expressive capacity of the\nnetwork without increasing the number of parameters. Group convolution layers\nare easy to use and can be implemented with negligible computational overhead\nfor discrete groups generated by translations, reflections and rotations.\nG-CNNs achieve state-of-the-art results on CIFAR10 and rotated MNIST.", "field": ["Convolutions"], "task": ["Breast Tumour Classification", "Colorectal Gland Segmentation:", "Multi-tissue Nucleus Segmentation", "Rotated MNIST"], "method": ["Convolution"], "dataset": ["CRAG", "Kumar", "PCam"], "metric": ["F1-score", "Hausdorff Distance (mm)", "AUC", "Dice"], "title": "Group Equivariant Convolutional Networks"} {"abstract": "This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM). A Pyramid Dilated Convolution (PDC) module is first designed for simultaneously extracting spatial features at multiple scales. These spatial features are then concatenated and fed into an extended Deeper Bidirectional ConvLSTM (DB-ConvLSTM) to learn spatiotemporal information. Forward and backward ConvLSTM units are placed in two layers and connected in a cascaded way, encouraging information flow between the bi-directional streams and leading to deeper feature extraction. We further augment DB-ConvLSTM with a PDC-like structure, by adopting several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. 
Extensive experimental results show that our method outperforms previous video saliency models by a large margin, with a real-time speed of 20 fps on a single GPU. With unsupervised video object segmentation as an example application, the proposed model (with a CRF-based post-process) achieves state-of-the-art results on two popular benchmarks, well demonstrating its superior performance and high applicability.", "field": ["Recurrent Neural Networks", "Activation Functions", "Convolutions"], "task": ["Object Detection", "RGB Salient Object Detection", "Salient Object Detection", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Salient Object Detection", "Video Semantic Segmentation"], "method": ["Dilated Convolution", "ConvLSTM", "Convolution", "Tanh Activation", "Sigmoid Activation"], "dataset": ["ViSal", "MCL", "DAVIS-2016", "DAVIS 2017 (test-dev)", "DAVIS 2017 (val)", "DAVSOD-Difficult20", "VOS-T", "DAVSOD-Normal25", "SegTrack v2", "UVSD", "DAVSOD-easy35", "DAVIS 2016", "FBMS-59"], "metric": ["max E-Measure", "F-measure (Decay)", "Jaccard (Mean)", "AVERAGE MAE", "S-Measure", "max F-Measure", "MAX F-MEASURE", "Average MAE", "F-measure (Recall)", "Jaccard (Decay)", "max E-measure", "Jaccard (Recall)", "F-measure (Mean)", "MAX E-MEASURE", "J&F"], "title": "Pyramid Dilated Deeper ConvLSTM for Video Salient Object Detection"} {"abstract": "In this paper, we study the task of selecting the optimal response given a user and system utterance history in retrieval-based multi-turn dialog systems. Recently, pre-trained language models (e.g., BERT, RoBERTa, and ELECTRA) showed significant improvements in various natural language processing tasks. This and similar response selection tasks can also be solved using such language models by formulating the tasks as dialog-response binary classification tasks. Although existing works using this approach successfully obtained state-of-the-art results, we observe that language models trained in this manner tend to make predictions based on the relatedness of history and candidates, ignoring the sequential nature of multi-turn dialog systems. This suggests that the response selection task alone is insufficient for learning temporal dependencies between utterances. To this end, we propose utterance manipulation strategies (UMS) to address this problem. Specifically, UMS consist of several strategies (i.e., insertion, deletion, and search), which aid the response selection model towards maintaining dialog coherence. Further, UMS are self-supervised methods that do not require additional annotation and thus can be easily incorporated into existing approaches. 
Extensive evaluation across multiple languages and models shows that UMS are highly effective in teaching dialog consistency, which leads to models pushing the state-of-the-art with significant margins on multiple public benchmark datasets.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Conversational Response Selection"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "RoBERTa", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R10@2"], "title": "Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection"} {"abstract": "Dual learning has attracted much attention in machine learning, computer vision and natural language processing communities. The core idea of dual learning is to leverage the duality between the primal task (mapping from domain X to domain Y) and dual task (mapping from domain Y to X) to boost the performances of both tasks. Existing dual learning framework forms a system with two agents (one primal model and one dual model) to utilize such duality. In this paper, we extend this framework by introducing multiple primal and dual models, and propose the multi-agent dual learning framework. Experiments on neural machine translation and image translation tasks demonstrate the effectiveness of the new framework. \nIn particular, we set a new record on IWSLT 2014 German-to-English translation with a 35.44 BLEU score, achieve a 31.03 BLEU score on WMT 2014 English-to-German translation with over 2.6 BLEU improvement over the strong Transformer baseline, and set a new record of 49.61 BLEU score on the recent WMT 2018 English-to-German translation.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2016 English-German"], "metric": ["BLEU score"], "title": "Multi-Agent Dual Learning"} {"abstract": "Almost all existing deep learning approaches for semantic segmentation tackle this task as a pixel-wise classification problem. Yet humans understand a scene not in terms of pixels, but by decomposing it into perceptual groups and structures that are the basic building blocks of recognition. This motivates us to propose an end-to-end pixel-wise metric learning approach that mimics this process. In our approach, the optimal visual representation determines the right segmentation within individual images and associates segments with the same semantic classes across images. The core visual learning problem is therefore to maximize the similarity within segments and minimize the similarity between segments. 
Given a model trained this way, inference is performed consistently by extracting pixel-wise embeddings and clustering, with the semantic label determined by the majority vote of its nearest neighbors from an annotated set. As a result, we present the SegSort, as a first attempt using deep learning for unsupervised semantic segmentation, achieving $76\\%$ performance of its supervised counterpart. When supervision is available, SegSort shows consistent improvements over conventional approaches based on pixel-wise softmax training. Additionally, our approach produces more precise boundaries and consistent region predictions. The proposed SegSort further produces an interpretable result, as each choice of label can be easily understood from the retrieved nearest segments.", "field": ["Output Functions"], "task": ["Metric Learning", "Semantic Segmentation", "Unsupervised Semantic Segmentation"], "method": ["Softmax"], "dataset": ["PASCAL VOC 2012 val"], "metric": ["mIoU", "mIoU (KMeans)", "Prior"], "title": "SegSort: Segmentation by Discriminative Sorting of Segments"} {"abstract": "Recurrent neural networks (RNNs) are notoriously difficult to train. When the\neigenvalues of the hidden to hidden weight matrix deviate from absolute value\n1, optimization becomes difficult due to the well studied issue of vanishing\nand exploding gradients, especially when trying to learn long-term\ndependencies. To circumvent this problem, we propose a new architecture that\nlearns a unitary weight matrix, with eigenvalues of absolute value exactly 1.\nThe challenge we address is that of parametrizing unitary matrices in a way\nthat does not require expensive computations (such as eigendecomposition) after\neach weight update. We construct an expressive unitary weight matrix by\ncomposing several structured matrices that act as building blocks with\nparameters to be learned. Optimization with this parameterization becomes\nfeasible only when considering hidden states in the complex domain. We\ndemonstrate the potential of this architecture by achieving state of the art\nresults in several hard tasks involving very long-term dependencies.", "field": ["Recurrent Neural Networks", "Activation Functions", "Stochastic Optimization"], "task": ["Sequential Image Classification"], "method": ["RMSProp", "modReLU", "Unitary RNN"], "dataset": ["Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy"], "title": "Unitary Evolution Recurrent Neural Networks"} {"abstract": "Automated design of neural network architectures tailored for a specific task is an extremely promising, albeit inherently difficult, avenue to explore. While most results in this domain have been achieved on image classification and language modelling problems, here we concentrate on dense per-pixel tasks, in particular, semantic image segmentation using fully convolutional networks. In contrast to the aforementioned areas, the design choices of a fully convolutional network require several changes, ranging from the sort of operations that need to be used---e.g., dilated convolutions---to a solving of a more difficult optimisation problem. In this work, we are particularly interested in searching for high-performance compact segmentation architectures, able to run in real-time using limited resources. To achieve that, we intentionally over-parameterise the architecture during the training time via a set of auxiliary cells that provide an intermediate supervisory signal and can be omitted during the evaluation phase. 
The design of the auxiliary cell is emitted by a controller, a neural network with the fixed structure trained using reinforcement learning. More crucially, we demonstrate how to efficiently search for these architectures within limited time and computational budgets. In particular, we rely on a progressive strategy that terminates non-promising architectures from being further trained, and on Polyak averaging coupled with knowledge distillation to speed-up the convergence. Quantitatively, in 8 GPU-days our approach discovers a set of architectures performing on-par with state-of-the-art among compact models on the semantic segmentation, pose estimation and depth prediction tasks. Code will be made available here: https://github.com/drsleep/nas-segm-pytorch", "field": ["Stochastic Optimization"], "task": ["Depth Estimation", "Image Classification", "Knowledge Distillation", "Language Modelling", "Monocular Depth Estimation", "Neural Architecture Search", "Pose Estimation", "Semantic Segmentation"], "method": ["Polyak Averaging"], "dataset": ["NYU-Depth V2", "PASCAL VOC 2012 val"], "metric": ["RMSE", "mIoU"], "title": "Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells"} {"abstract": "Data augmentation is an essential technique for improving generalization ability of deep learning models. Recently, AutoAugment has been proposed as an algorithm to automatically search for augmentation policies from a dataset and has significantly enhanced performances on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieves comparable performances on image recognition tasks with various models and datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Image Models", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Augmentation", "Image Classification"], "method": ["Weight Decay", "Average Pooling", "Cutout", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "ResNet", "AutoAugment", "Convolution", "ReLU", "Residual Connection", "Wide Residual Block", "Max Pooling", "Fast AutoAugment", "Batch Normalization", "Residual Network", "ColorJitter", "Kaiming Initialization", "Sigmoid Activation", "Color Jitter", "Bottleneck Residual Block", "Dropout", "LSTM", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "WideResNet"], "dataset": ["CIFAR-100", "SVHN", "ImageNet", "CIFAR-10"], "metric": ["Percentage error", "Top 5 Accuracy", "Percentage correct", "Top 1 Accuracy"], "title": "Fast AutoAugment"} {"abstract": "Extracting large amounts of data from biological samples is not feasible due to radiation issues, and image processing in the small-data regime is one of the critical challenges when working with a limited amount of data. 
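A minimal sketch of the Polyak averaging mentioned in the architecture-search record above, kept as an exponential moving average of parameters; the decay value and the PyTorch-style usage are illustrative assumptions rather than details from the paper.

```python
import copy
import torch

@torch.no_grad()
def update_polyak_average(avg_model, model, decay=0.999):
    # Exponential moving average of parameters: avg <- decay * avg + (1 - decay) * current.
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.mul_(decay).add_(p, alpha=1.0 - decay)

# Typical usage: avg_model = copy.deepcopy(model), then call
# update_polyak_average(avg_model, model) after each optimizer step
# and run evaluation with avg_model.
```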
In this work, we applied an existing algorithm named Variational Auto Encoder (VAE) that pre-trains a latent space representation of the data to capture the features in a lower-dimension for the small-data regime input. The fine-tuned latent space provides constant weights that are useful for classification. Here we will present the performance analysis of the VAE algorithm with different latent space sizes in the semi-supervised learning using the CIFAR-10 dataset.", "field": ["Generative Models"], "task": ["Small Data Image Classification"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["cifar10, 10 labels"], "metric": ["% Test Accuracy"], "title": "Performance Analysis of Semi-supervised Learning in the Small-data Regime using VAEs"} {"abstract": "Diffusion is commonly used as a ranking or re-ranking method in retrieval\ntasks to achieve higher retrieval performance, and has attracted lots of\nattention in recent years. A downside to diffusion is that it performs slowly\nin comparison to the naive k-NN search, which causes a non-trivial online\ncomputational cost on large datasets. To overcome this weakness, we propose a\nnovel diffusion technique in this paper. In our work, instead of applying\ndiffusion to the query, we pre-compute the diffusion results of each element in\nthe database, making the online search a simple linear combination on top of\nthe k-NN search process. Our proposed method becomes 10~ times faster in terms\nof online search speed. Moreover, we propose to use late truncation instead of\nearly truncation in previous works to achieve better retrieval performance.", "field": ["Non-Parametric Classification"], "task": ["Image Retrieval"], "method": ["k-Nearest Neighbors", "k-NN"], "dataset": ["Par106k", "Par6k", "Oxf5k", "Oxf105k"], "metric": ["mAP", "MAP"], "title": "Efficient Image Retrieval via Decoupling Diffusion into Online and Offline Processing"} {"abstract": "Relation extraction is a type of information extraction task that recognizes semantic relationships between entities in a sentence. Many previous studies have focused on extracting only one semantic relation between two entities in a single sentence. However, multiple entities in a sentence are associated through various relations. To address this issue, we propose a relation extraction model based on a dual pointer network with a multi-head attention mechanism. The proposed model finds n-to-1 subject-object relations using a forward object decoder. Then, it finds 1-to-n subject-object relations using a backward subject decoder. Our experiments confirmed that the proposed model outperformed previous models, with an F1-score of 80.8% for the ACE-2005 corpus and an F1-score of 78.3% for the NYT corpus.", "field": ["Output Functions", "Attention Modules", "Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models", "Attention Mechanisms"], "task": ["Relation Extraction"], "method": ["Softmax", "Additive Attention", "Long Short-Term Memory", "Multi-Head Attention", "Pointer Network", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["ACE 2005"], "metric": ["Relation classification F1"], "title": "Dual Pointer Network for Fast Extraction of Multiple Relations in a Sentence"} {"abstract": "Many sequential processing tasks require complex nonlinear transition\nfunctions from one step to the next. However, recurrent neural networks with\n'deep' transition functions remain difficult to train, even when using Long\nShort-Term Memory (LSTM) networks. 
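A hedged NumPy sketch of the decoupling idea in the retrieval record above: diffusion scores are pre-computed offline for every database element, and the online query step reduces to a k-NN search followed by a similarity-weighted linear combination of the retrieved elements' pre-computed score vectors. Shapes and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def online_search(query, db_feats, offline_diffusion, k=10):
    """query: (D,); db_feats: (N, D); offline_diffusion: (N, N), row i holds the
    pre-computed diffusion scores of database element i against the whole database."""
    sims = db_feats @ query                      # plain k-NN similarities, shape (N,)
    nn_idx = np.argsort(-sims)[:k]               # top-k neighbors of the query
    weights = sims[nn_idx]
    # Online step: a similarity-weighted linear combination of the neighbors'
    # pre-computed diffusion rows, giving final ranking scores for all N items.
    scores = weights @ offline_diffusion[nn_idx]
    return np.argsort(-scores)
```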
We introduce a novel theoretical analysis of\nrecurrent networks based on Gersgorin's circle theorem that illuminates several\nmodeling and optimization issues and improves our understanding of the LSTM\ncell. Based on this analysis we propose Recurrent Highway Networks, which\nextend the LSTM architecture to allow step-to-step transition depths larger\nthan one. Several language modeling experiments demonstrate that the proposed\narchitecture results in powerful and efficient models. On the Penn Treebank\ncorpus, solely increasing the transition depth from 1 to 10 improves word-level\nperplexity from 90.6 to 65.4 using the same number of parameters. On the larger\nWikipedia datasets for character prediction (text8 and enwik8), RHNs outperform\nall previous results and achieve an entropy of 1.27 bits per character.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Language Modelling"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Text8", "enwik8", "Penn Treebank (Word Level)", "Hutter Prize"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity", "Params"], "title": "Recurrent Highway Networks"} {"abstract": "The Convolutional Neural Networks (CNNs) generate the feature representation of complex objects by collecting hierarchical and different parts of semantic sub-features. These sub-features can usually be distributed in grouped form in the feature vector of each layer, representing various semantic entities. However, the activation of these sub-features is often spatially affected by similar patterns and noisy backgrounds, resulting in erroneous localization and identification. We propose a Spatial Group-wise Enhance (SGE) module that can adjust the importance of each sub-feature by generating an attention factor for each spatial location in each semantic group, so that every individual group can autonomously enhance its learnt expression and suppress possible noise. The attention factors are only guided by the similarities between the global and local feature descriptors inside each group, thus the design of SGE module is extremely lightweight with \\emph{almost no extra parameters and calculations}. Despite being trained with only category supervisions, the SGE component is extremely effective in highlighting multiple active areas with various high-order semantics (such as the dog's eyes, nose, etc.). When integrated with popular CNN backbones, SGE can significantly boost the performance of image recognition tasks. Specifically, based on ResNet50 backbones, SGE achieves 1.2\\% Top-1 accuracy improvement on the ImageNet benchmark and 1.0$\\sim$2.0\\% AP gain on the COCO benchmark across a wide range of detectors (Faster/Mask/Cascade RCNN and RetinaNet). 
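A hedged PyTorch sketch of the Spatial Group-wise Enhance module described above: within each semantic group, an attention map is computed from the similarity between each local descriptor and the group's global average-pooled descriptor, normalized over spatial positions, scaled by lightweight per-group parameters, and applied through a sigmoid. Details may differ from the official PytorchInsight implementation.

```python
import torch
import torch.nn as nn

class SpatialGroupEnhance(nn.Module):
    def __init__(self, groups=8):
        super().__init__()
        self.groups = groups
        # One scale and one bias per group: the "almost no extra parameters" design.
        self.weight = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, groups, 1, 1))

    def forward(self, x):                         # x: (B, C, H, W), C divisible by groups
        b, c, h, w = x.shape
        x = x.view(b * self.groups, c // self.groups, h, w)
        # Similarity between each local descriptor and the group's global descriptor.
        g = x.mean(dim=(2, 3), keepdim=True)
        attn = (x * g).sum(dim=1, keepdim=True)   # (B*G, 1, H, W)
        # Normalize the attention map over spatial positions within each group.
        attn = attn.view(b * self.groups, -1)
        attn = (attn - attn.mean(dim=1, keepdim=True)) / (attn.std(dim=1, keepdim=True) + 1e-5)
        attn = attn.view(b, self.groups, h, w) * self.weight + self.bias
        x = x * torch.sigmoid(attn.view(b * self.groups, 1, h, w))
        return x.view(b, c, h, w)
```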
Codes and pretrained models are available at https://github.com/implus/PytorchInsight.", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Attention Mechanisms", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Region Proposal", "Object Detection Models", "Stochastic Optimization", "Loss Functions", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections", "Image Model Blocks"], "task": ["Image Classification", "Object Detection"], "method": ["Weight Decay", "Average Pooling", "Faster R-CNN", "1x1 Convolution", "RoIAlign", "Spatial Group-wise Enhance", "Region Proposal Network", "ResNet", "Random Horizontal Flip", "RoIPool", "Convolution", "ReLU", "Residual Connection", "FPN", "RPN", "Focal Loss", "Random Resized Crop", "Batch Normalization", "Dot-Product Attention", "Residual Network", "Cascade R-CNN", "Kaiming Initialization", "Step Decay", "Sigmoid Activation", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "Mask R-CNN", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks"} {"abstract": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and $\\epsilon$-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.", "field": ["Q-Learning Networks", "Policy Gradient Methods", "Regularization", "Output Functions", "Off-Policy TD Control", "Randomized Value Functions", "Convolutions", "Feedforward Networks"], "task": ["Atari Games", "Efficient Exploration"], "method": ["Double Q-learning", "Q-Learning", "A3C", "Softmax", "Dueling Network", "Entropy Regularization", "NoisyNet-A3C", "NoisyNet-Dueling", "Noisy Linear Layer", "Convolution", "Deep Q-Network", "DQN", "Dense Connections", "NoisyNet-DQN"], "dataset": ["Atari 2600 Amidar", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Noisy Networks for Exploration"} {"abstract": "Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it restricts the model from fully exploiting the sequential nature of the input. The representation at a given layer can only access representations from lower layers, rather than the higher level representations already available. In this work, we propose the Feedback Transformer architecture that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers.", "field": ["Policy Gradient Methods", "Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Recurrent Neural Networks", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Machine Translation"], "method": ["A2C", "RMSProp", "Long Short-Term Memory", "Adam", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "Feedback Transformer", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "Sigmoid Activation", "Byte Pair Encoding", "BPE", "Softmax", "Feedback Memory", "Multi-Head Attention", "LSTM", "Dropout"], "dataset": ["WikiText-103", "enwik8", "Penn Treebank (Character Level)"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity"], "title": "Addressing Some Limitations of Transformers with Feedback Memory"} {"abstract": "We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. 
It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.", "field": ["Generative Models", "Skip Connections", "Transformers"], "task": ["Abstractive Text Summarization", "Denoising", "Machine Translation", "Natural Language Inference", "Question Answering", "Text Generation", "Text Summarization"], "method": ["BART", "Residual Connection", "AutoEncoder", "Denoising Autoencoder"], "dataset": ["SQuAD1.1 dev", "X-Sum", "CNN / Daily Mail"], "metric": ["ROUGE-1", "ROUGE-2", "ROUGE-3", "F1", "ROUGE-L"], "title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"} {"abstract": "Recurrent neural networks (RNNs) are challenging to train, let alone those with deep spatial structures. Architectures built upon highway connections such as Recurrent Highway Network (RHN) were developed to allow larger step-to-step transition depth, leading to more expressive models. However, problems that require capturing long-term dependencies still can not be well addressed by these models. Moreover, the ability to keep long-term memories tends to diminish when the spatial depth increases, since deeper structure may accelerate gradient vanishing. In this paper, we address these issues by proposing a novel RNN architecture based on RHN, namely the Recurrent Highway Network with Grouped Auxiliary Memory (GAM-RHN). The proposed architecture interconnects the RHN with a set of auxiliary memory units specifically for storing long-term information via reading and writing operations, which is analogous to Memory Augmented Neural Networks (MANNs). Experimental results on artificial long time lag tasks show that GAM-RHNs can be trained efficiently while being deep in both time and space. We also evaluate the proposed architecture on a variety of tasks, including language modeling, sequential image classification, and financial market forecasting. The potential of our approach is demonstrated by achieving state-of-the-art results on these tasks.", "field": ["Activation Functions", "Feedforward Networks", "Miscellaneous Components"], "task": ["Image Classification", "Language Modelling", "Sequential Image Classification", "Stock Trend Prediction"], "method": ["Highway Network", "Sigmoid Activation", "Highway Layer"], "dataset": ["Text8", "Penn Treebank (Character Level)", "Sequential MNIST", "FI-2010"], "metric": ["Number of params", "Bit per Character (BPC)", "Permuted Accuracy", "F1 (H50)", "Accuracy (H50)"], "title": "Recurrent Highway Networks with Grouped Auxiliary Memory"} {"abstract": "This paper presents a method for face detection in the wild, which integrates\na ConvNet and a 3D mean face model in an end-to-end multi-task discriminative\nlearning framework. The 3D mean face model is predefined and fixed (e.g., we\nused the one provided in the AFLW dataset). 
The ConvNet consists of two\ncomponents: (i) The face proposal component computes face bounding box\nproposals via estimating facial key-points and the 3D transformation (rotation\nand translation) parameters for each predicted key-point w.r.t. the 3D mean\nface model. (ii) The face verification component computes detection results by\npruning and refining proposals based on facial key-points based configuration\npooling. The proposed method addresses two issues in adapting state-of-the-art\ngeneric object detection ConvNets (e.g., faster R-CNN) for face detection: (i)\nOne is to eliminate the heuristic design of predefined anchor boxes in the\nregion proposals network (RPN) by exploiting a 3D mean face model. (ii) The\nother is to replace the generic RoI (Region-of-Interest) pooling layer with a\nconfiguration pooling layer to respect underlying object structures. The\nmulti-task loss consists of three terms: the classification Softmax loss and\nthe location smooth l1-losses [14] of both the facial key-points and the face\nbounding boxes. In experiments, our ConvNet is trained on the AFLW dataset\nonly and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark\nwithout fine-tuning. The proposed method obtains very competitive\nstate-of-the-art performance in the two benchmarks.", "field": ["Output Functions"], "task": ["Face Detection", "Face Model", "Face Verification", "Object Detection"], "method": ["Softmax"], "dataset": ["Annotated Faces in the Wild"], "metric": ["AP"], "title": "Face Detection with End-to-End Integration of a ConvNet and a 3D Model"} {"abstract": "The effort devoted to hand-crafting neural network image classifiers has\nmotivated the use of architecture search to discover them automatically.\nAlthough evolutionary algorithms have been repeatedly applied to neural network\ntopologies, the image classifiers thus discovered have remained inferior to\nhuman-crafted ones. Here, we evolve an image classifier---AmoebaNet-A---that\nsurpasses hand-designs for the first time. To do this, we modify the tournament\nselection evolutionary algorithm by introducing an age property to favor the\nyounger genotypes. Matching size, AmoebaNet-A has comparable accuracy to\ncurrent state-of-the-art ImageNet models discovered with more complex\narchitecture-search methods. Scaled to larger size, AmoebaNet-A sets a new\nstate-of-the-art 83.9% / 96.6% top-5 ImageNet accuracy. In a controlled\ncomparison against a well known reinforcement learning algorithm, we give\nevidence that evolution can obtain results faster with the same hardware,\nespecially at the earlier stages of the search. This is relevant when fewer\ncompute resources are available. 
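A hedged sketch of the age-regularized tournament selection described above: the population is kept as a queue, each cycle mutates the highest-accuracy model from a random sample, and the oldest individual (rather than the worst) is removed, which favors younger genotypes. `random_arch`, `mutate`, and `train_and_eval` are placeholder callables, not functions from the paper's code.

```python
import collections
import random

def regularized_evolution(cycles, population_size, sample_size, random_arch, mutate, train_and_eval):
    population = collections.deque()
    history = []
    while len(population) < population_size:          # initialize with random architectures
        arch = random_arch()
        population.append((arch, train_and_eval(arch)))
        history.append(population[-1])
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda m: m[1])      # tournament winner by accuracy
        child_arch = mutate(parent[0])
        child = (child_arch, train_and_eval(child_arch))
        population.append(child)
        history.append(child)
        population.popleft()                          # aging: remove the oldest, not the worst
    return max(history, key=lambda m: m[1])
```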
Evolution is, thus, a simple method to\neffectively discover high-quality architectures.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Convolutional Neural Networks", "Convolutions", "Pooling Operations", "Neural Architecture Search"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Weight Decay", "AmoebaNet", "Aging Evolution", "Cosine Annealing", "SGD with Momentum", "RMSProp", "Softmax", "Average Pooling", "ScheduledDropPath", "Convolution", "Label Smoothing", "Dropout", "Spatially Separable Convolution", "Max Pooling"], "dataset": ["NAS-Bench-201, ImageNet-16-120", "CIFAR-10 Image Classification", "ImageNet"], "metric": ["Number of params", "Accuracy (Test)", "Top 1 Accuracy", "Percentage error", "Params", "Accuracy (val)", "Top 5 Accuracy", "Search time (s)"], "title": "Regularized Evolution for Image Classifier Architecture Search"} {"abstract": "The cosine-based softmax losses and their variants achieve great success in deep learning based face recognition. However, hyperparameter settings in these losses have significant influences on the optimization path as well as the final recognition performance. Manually tuning those hyperparameters heavily relies on user experience and requires many training tricks. In this paper, we investigate in depth the effects of two important hyperparameters of cosine-based softmax losses, the scale parameter and angular margin parameter, by analyzing how they modulate the predicted classification probability. Based on these analysis, we propose a novel cosine-based softmax loss, AdaCos, which is hyperparameter-free and leverages an adaptive scale parameter to automatically strengthen the training supervisions during the training process. We apply the proposed AdaCos loss to large-scale face verification and identification datasets, including LFW, MegaFace, and IJB-C 1:1 Verification. Our results show that training deep neural networks with the AdaCos loss is stable and able to achieve high face recognition accuracy. Our method outperforms state-of-the-art softmax losses on all the three datasets.", "field": ["Output Functions"], "task": ["Face Recognition", "Face Verification"], "method": ["Softmax"], "dataset": ["MegaFace", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations"} {"abstract": "Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. 
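A hedged PyTorch sketch of the consistency-training objective described above: a supervised cross-entropy term on labeled data plus a KL term that makes the prediction on an augmented unlabeled example match the (fixed) prediction on its original version. The loss weight and the stop-gradient on the target distribution are common choices assumed here.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_lab, y_lab, x_unlab, x_unlab_aug, lam=1.0):
    # Supervised term on the labeled batch.
    sup = F.cross_entropy(model(x_lab), y_lab)
    # Target distribution on the original unlabeled example is held fixed.
    with torch.no_grad():
        p_orig = F.softmax(model(x_unlab), dim=1)
    log_p_aug = F.log_softmax(model(x_unlab_aug), dim=1)
    # Consistency term: prediction on the augmented view should match the original.
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")
    return sup + lam * consistency
```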
On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in the high-data regime, such as ImageNet, both when there is only 10% labeled data and when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda.", "field": ["Initialization", "Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Convolutional Neural Networks", "Normalization", "Subword Segmentation", "Language Models", "Convolutions", "Feedforward Networks", "Pooling Operations", "Attention Mechanisms", "Skip Connections", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Augmentation", "Image Classification", "Semi-Supervised Image Classification", "Text Classification", "Transfer Learning"], "method": ["Weight Decay", "Average Pooling", "Adam", "1x1 Convolution", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "ResNet", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Layer Normalization", "Batch Normalization", "Residual Network", "GELU", "Kaiming Initialization", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Bottleneck Residual Block", "Dropout", "BERT", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Yelp Fine-grained classification", "Amazon Review Polarity", "Yelp Binary classification", "Yelp-2", "Amazon-5", "DBpedia", "Yelp-5", "ImageNet - 10% labeled data", "Amazon Review Full", "IMDb", "SVHN, 1000 labels", "Amazon-2", "CIFAR-10, 4000 Labels", "ImageNet"], "metric": ["Accuracy (2 classes)", "Top 1 Accuracy", "Error", "Accuracy", "Top 5 Accuracy", "Accuracy (10 classes)"], "title": "Unsupervised Data Augmentation for Consistency Training"} {"abstract": "Conventional physically-based methods for relighting portrait images need to solve an inverse rendering problem, estimating face geometry, reflectance and lighting. However, the inaccurate estimation of face components can cause strong artifacts in relighting, leading to unsatisfactory results. In this work, we apply a physically-based portrait relighting method to generate a large scale, high quality, \"in the wild\" portrait relighting dataset (DPR). A deep Convolutional Neural Network (CNN) is then trained using this dataset to generate a relit portrait image by using a source image and a target lighting as input. The training procedure regularizes the generated results, removing the artifacts caused by physically-based relighting methods. A GAN loss is further applied to improve the quality of the relit portrait image. Our trained network can relight portrait images with resolutions as high as 1024 x 1024. We evaluate the proposed method on the proposed DPR dataset, Flickr portrait dataset and Multi-PIE dataset both qualitatively and quantitatively. Our experiments demonstrate that the proposed method achieves state-of-the-art results. 
Please refer to https://zhhoper.github.io/dpr.html for dataset and code.\r", "field": ["Generative Models", "Convolutions"], "task": ["Single-Image Portrait Relighting"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Multi-PIE"], "metric": ["Si-MSE", "Si-L2"], "title": "Deep Single-Image Portrait Relighting"} {"abstract": "Depth (disparity) estimation from 4D Light Field (LF) images has been a research topic for the last couple of years. Most studies have focused on depth estimation from static 4D LF images while not considering temporal information, i.e., LF videos. This paper proposes an end-to-end neural network architecture for depth estimation from 4D LF videos. This study also constructs a medium-scale synthetic 4D LF video dataset that can be used for training deep learning-based methods. Experimental results using synthetic and real-world 4D LF videos show that temporal information contributes to the improvement of depth estimation accuracy in noisy regions. Dataset and code is available at: https://mediaeng-lfv.github.io/LFV_Disparity_Estimation", "field": ["Recurrent Neural Networks", "Convolutions"], "task": ["Depth Estimation", "Disparity Estimation"], "method": ["3D Convolution", "Convolution", "ConvLSTM"], "dataset": ["Sintel 4D LFV - shaman2", "Sintel 4D LFV - bamboo3", "Sintel 4D LFV - ambushfight5", "Sintel 4D LFV - thebigfight2"], "metric": ["BadPix(0.05)", "BadPix(0.03)", "BadPix(0.07)", "MSE*100", "BadPix(0.01)"], "title": "Depth estimation from 4D light field videos"} {"abstract": "The Transformer model is widely successful on many natural language processing tasks. However, the quadratic complexity of self-attention limit its application on long text. In this paper, adopting a fine-to-coarse attention mechanism on multi-scale spans via binary partitioning (BP), we propose BP-Transformer (BPT for short). BPT yields $O(k\\cdot n\\log (n/k))$ connections where $k$ is a hyperparameter to control the density of attention. BPT has a good balance between computation complexity and model capacity. A series of experiments on text classification, machine translation and language modeling shows BPT has a superior performance for long text than previous self-attention models. Our code, hyperparameters and CUDA kernels for sparse attention are available in PyTorch.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Machine Translation", "Sentiment Analysis", "Text Classification"], "method": ["Graph Self-Attention", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "BP-Transformer", "Multi-Head Attention", "Transformer", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["enwik8", "Text8", "IMDb", "SST-5 Fine-grained classification", "IWSLT2015 Chinese-English"], "metric": ["Bit per Character (BPC)", "BLEU", "Accuracy", "Number of params"], "title": "BP-Transformer: Modelling Long-Range Context via Binary Partitioning"} {"abstract": "In this paper, we address video-based person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. 
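A small worked comparison for the attention cost quoted in the BP-Transformer record above, O(k·n·log(n/k)) connections versus the O(n²) of dense self-attention; the sequence length, the value of k, and the base-2 logarithm are illustrative choices.

```python
import math

n, k = 8192, 4
full = n * n                                   # dense self-attention connections
bpt = k * n * math.log2(n / k)                 # fine-to-coarse connections via binary partitioning
print(f"full: {full:,}  bpt: {bpt:,.0f}  ratio: {full / bpt:.1f}x")
# full: 67,108,864  bpt: 360,448  ratio: 186.2x
```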
Our approach divides long person sequences into multiple short video snippets and aggregates the top-ranked snippet similarities for sequence-similarity estimation. With this strategy, the intra-person visual variation of each sample could be minimized for similarity estimation, while the diverse appearance and temporal information are maintained. The snippet similarities are estimated by a deep neural network with a novel temporal co-attention for snippet embedding. The attention weights are obtained based on a query feature, which is learned from the whole probe snippet by an LSTM network, making the resulting embeddings less affected by noisy frames. The gallery snippet shares the same query feature with the probe snippet. Thus the embedding of gallery snippet can present more relevant features to compare with the probe snippet, yielding more accurate snippet similarity. Extensive ablation studies verify the effectiveness of competitive snippet-similarity aggregation as well as the temporal co-attentive embedding. Our method significantly outperforms the current state-of-the-art approaches on multiple datasets.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Person Re-Identification", "Video-Based Person Re-Identification"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["PRID2011"], "metric": ["Rank-1", "Rank-20", "Rank-5"], "title": "Video Person Re-Identification With Competitive Snippet-Similarity Aggregation and Co-Attentive Snippet Embedding"} {"abstract": "Estimating the head pose of a person is a crucial problem that has a large\namount of applications such as aiding in gaze estimation, modeling attention,\nfitting 3D models to video and performing face alignment. Traditionally head\npose is computed by estimating some keypoints from the target face and solving\nthe 2D to 3D correspondence problem with a mean human head model. We argue that\nthis is a fragile method because it relies entirely on landmark detection\nperformance, the extraneous head model and an ad-hoc fitting step. We present\nan elegant and robust way to determine pose by training a multi-loss\nconvolutional neural network on 300W-LP, a large synthetically expanded\ndataset, to predict intrinsic Euler angles (yaw, pitch and roll) directly from\nimage intensities through joint binned pose classification and regression. We\npresent empirical tests on common in-the-wild pose benchmark datasets which\nshow state-of-the-art results. Additionally we test our method on a dataset\nusually used for pose estimation using depth and start to close the gap with\nstate-of-the-art depth pose methods. 
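A hedged PyTorch sketch of the joint binned classification and regression described above, for a single Euler angle: a cross-entropy loss over angle bins plus a regression loss on the expectation of the bin values under the softmax. The bin width, angle range, and loss weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def angle_loss(logits, angle_deg, bin_width=3.0, num_bins=66, alpha=0.5):
    """logits: (B, num_bins) for one Euler angle; angle_deg: (B,) ground truth in degrees."""
    bin_centers = torch.arange(num_bins, dtype=logits.dtype, device=logits.device) * bin_width - 99.0
    target_bin = ((angle_deg + 99.0) / bin_width).long().clamp(0, num_bins - 1)
    cls = F.cross_entropy(logits, target_bin)                        # binned classification
    expected = (F.softmax(logits, dim=1) * bin_centers).sum(dim=1)   # expected angle over bins
    reg = F.mse_loss(expected, angle_deg)                            # fine-grained regression
    return cls + alpha * reg
```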
We open-source our training and testing\ncode as well as release our pre-trained models.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Face Alignment", "Gaze Estimation", "Head Pose Estimation", "Pose Estimation", "Regression"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["AFLW2000", "AFLW", "BIWI"], "metric": ["MAE", "MAE (trained with BIWI data)"], "title": "Fine-Grained Head Pose Estimation Without Keypoints"} {"abstract": "Contextualized word embeddings (CWE) such as provided by ELMo (Peters et al., 2018), Flair NLP (Akbik et al., 2018), or BERT (Devlin et al., 2019) are a major recent innovation in NLP. CWEs provide semantic vector representations of words depending on their respective context. Their advantage over static word embeddings has been shown for a number of tasks, such as text classification, sequence tagging, or machine translation. Since vectors of the same word type can vary depending on the respective context, they implicitly provide a model for word sense disambiguation (WSD). We introduce a simple but effective approach to WSD using a nearest neighbor classification on CWEs. We compare the performance of different CWE models for the task and can report improvements above the current state of the art for two standard WSD benchmark datasets. We further show that the pre-trained BERT model is able to place polysemic words into distinct 'sense' regions of the embedding space, while ELMo and Flair NLP do not seem to possess this ability.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Output Functions", "Subword Segmentation", "Word Embeddings", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Word Sense Disambiguation"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Bidirectional LSTM", "Residual Connection", "Dense Connections", "ELMo", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["SensEval 2 Lexical Sample", "SensEval 3 Lexical Sample", "SemEval 2007 Task 17", "SemEval 2007 Task 7"], "metric": ["F1"], "title": "Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings"} {"abstract": "Unsupervised embedding learning aims at extracting low-dimensional visually meaningful representations from large-scale unlabeled images, which can then be directly used for similarity-based search. This task faces two major challenges: 1) mining positive supervision from highly similar fine-grained classes and 2) generating to unseen testing categories. 
To tackle these issues, this paper proposes a probabilistic structural latent representation (PSLR), which incorporates an adaptable softmax embedding to approximate the positive concentrated and negative instance separated properties in the graph latent space. It improves the discriminability by enlarging the positive/negative difference without introducing any additional computational cost while maintaining high learning efficiency. To address the limited supervision using data augmentation, a smooth variational reconstruction loss is introduced by modeling the intra-instance variance, which improves the robustness. Extensive experiments demonstrate the superiority of PSLR over state-of-the-art unsupervised methods on both seen and unseen categories with cosine similarity. Code is available at https://github.com/mangye16/PSLR\r", "field": ["Output Functions"], "task": ["Data Augmentation", "Image Classification"], "method": ["Softmax"], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Probabilistic Structural Latent Representation for Unsupervised Embedding"} {"abstract": "Intent recognition is one of the most crucial tasks in NLUsystems, which are nowadays especially important for design-ing intelligent conversation. We propose a novel approach to intent recognition which involves combining transformer architecture with capsule networks. Our results show that such architecture performs better than original capsule-NLU net-work implementations and achieves state-of-the-art results on datasets such as ATIS, AskUbuntu , and WebApp.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Intent Detection"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["ATIS"], "metric": ["Accuracy"], "title": "Transformer-Capsule Model for Intent Detection"} {"abstract": " Standard convolution is inherently limited for semantic segmentation of point cloud due to its isotropy about features. It neglects the structure of an object, results in poor object delineation and small spurious regions in the segmentation result. This paper proposes a novel graph attention convolution (GAC), whose kernels can be dynamically carved into specific shapes to adapt to the structure of an object. Specifically, by assigning proper attentional weights to different neighboring points, GAC is designed to selectively focus on the most relevant part of them according to their dynamically learned features. The shape of the convolution kernel is then determined by the learned distribution of the attentional weights. Though simple, GAC can capture the structured features of point clouds for fine-grained segmentation and avoid feature contamination between objects. Theoretically, we provided a thorough analysis on the expressive capabilities of GAC to show how it can learn about the features of point clouds. 
Empirically, we evaluated the proposed GAC on challenging indoor and outdoor datasets and achieved the state-of-the-art results in both scenarios.\r", "field": ["Convolutions"], "task": ["Semantic Segmentation"], "method": ["Convolution"], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Graph Attention Convolution for Point Cloud Semantic Segmentation"} {"abstract": "We present in this paper ByteCover, which is a new feature learning method for cover song identification (CSI). ByteCover is built based on the classical ResNet model, and two major improvements are designed to further enhance the capability of the model for CSI. In the first improvement, we introduce the integration of instance normalization (IN) and batch normalization (BN) to build IBN blocks, which are major components of our ResNet-IBN model. With the help of the IBN blocks, our CSI model can learn features that are invariant to the changes of musical attributes such as key, tempo, timbre and genre, while preserving the version information. In the second improvement, we employ the BNNeck method to allow a multi-loss training and encourage our method to jointly optimize a classification loss and a triplet loss, and by this means, the inter-class discrimination and intra-class compactness of cover songs, can be ensured at the same time. A set of experiments demonstrated the effectiveness and efficiency of ByteCover on multiple datasets, and in the Da-TACOS dataset, ByteCover outperformed the best competitive system by 20.9\\%.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Cover song identification"], "method": ["ResNet", "Instance Normalization", "Average Pooling", "Residual Block", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Covers80", "YouTube350", "Da-TACOS", "SHS100K-TEST"], "metric": ["mAP", "MAP"], "title": "ByteCover: Cover Song Identification via Multi-Loss Training"} {"abstract": "Few prior works study deep learning on point sets. PointNet by Qi et al. is a\npioneer in this direction. However, by design PointNet does not capture local\nstructures induced by the metric space points live in, limiting its ability to\nrecognize fine-grained patterns and generalizability to complex scenes. In this\nwork, we introduce a hierarchical neural network that applies PointNet\nrecursively on a nested partitioning of the input point set. By exploiting\nmetric space distances, our network is able to learn local features with\nincreasing contextual scales. With further observation that point sets are\nusually sampled with varying densities, which results in greatly decreased\nperformance for networks trained on uniform densities, we propose novel set\nlearning layers to adaptively combine features from multiple scales.\nExperiments show that our network called PointNet++ is able to learn deep point\nset features efficiently and robustly. 
In particular, results significantly\nbetter than state-of-the-art have been obtained on challenging benchmarks of 3D\npoint clouds.", "field": ["3D Representations"], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "3D Semantic Segmentation", "Person Re-Identification", "Semantic Segmentation"], "method": ["PointNet"], "dataset": ["SemanticKITTI", "ShapeNet", "ShapeNet-Part", "ModelNet40", "ScanNet", "DukeMTMC-reID"], "metric": ["Overall Accuracy", "3DIoU", "Mean IoU", "Class Average IoU", "Instance Average IoU", "mIoU", "MAP", "Rank-1"], "title": "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space"} {"abstract": "We propose Dual Attention Networks (DANs) which jointly leverage visual and\ntextual attention mechanisms to capture fine-grained interplay between vision\nand language. DANs attend to specific regions in images and words in text\nthrough multiple steps and gather essential information from both modalities.\nBased on this framework, we introduce two types of DANs for multimodal\nreasoning and matching, respectively. The reasoning model allows visual and\ntextual attentions to steer each other during collaborative inference, which is\nuseful for tasks such as Visual Question Answering (VQA). In addition, the\nmatching model exploits the two attention mechanisms to estimate the similarity\nbetween images and sentences by focusing on their shared semantics. Our\nextensive experiments validate the effectiveness of DANs in combining vision\nand language, achieving the state-of-the-art performance on public benchmarks\nfor VQA and image-text matching.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Question Answering", "Text Matching", "Visual Question Answering"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Flickr30K 1K test", "VQA v1 test-dev"], "metric": ["R@10", "R@1", "R@5", "Accuracy"], "title": "Dual Attention Networks for Multimodal Reasoning and Matching"} {"abstract": "We propose a novel, projection based way to incorporate the conditional\ninformation into the discriminator of GANs that respects the role of the\nconditional information in the underlining probabilistic model. This approach\nis in contrast with most frameworks of conditional GANs used in application\ntoday, which use the conditional information by concatenating the (embedded)\nconditional vector to the feature vectors. With this modification, we were able\nto significantly improve the quality of the class conditional image generation\non ILSVRC2012 (ImageNet) 1000-class image dataset from the current\nstate-of-the-art result, and we achieved this with a single pair of a\ndiscriminator and a generator. We were also able to extend the application to\nsuper-resolution and succeeded in producing highly discriminative\nsuper-resolution images. 
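A hedged PyTorch sketch of the projection-based conditioning described above, as opposed to concatenating an embedded label to the features: the discriminator output adds an inner product between a class embedding and the image feature to an unconditional linear term. Layer names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(feat_dim, 1)               # unconditional term psi(phi(x))
        self.embed = nn.Embedding(num_classes, feat_dim)   # class embedding v_y

    def forward(self, phi_x, y):                           # phi_x: (B, feat_dim); y: (B,) class ids
        out = self.linear(phi_x).squeeze(1)
        out = out + (self.embed(y) * phi_x).sum(dim=1)     # projection term <v_y, phi(x)>
        return out
```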
This new structure also enabled high quality category\ntransformation based on parametric functional transformation of conditional\nbatch normalization layers in the generator.", "field": ["Discriminators", "Initialization", "Convolutional Neural Networks", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Conditional Image Generation", "Image Generation", "Super-Resolution"], "method": ["ResNet", "Average Pooling", "Spectral Normalization", "Adam", "Projection Discriminator", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet 128x128", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "cGANs with Projection Discriminator"} {"abstract": "Most convolutional network (CNN)-based inpainting methods adopt standard convolution to indistinguishably treat valid pixels and holes, making them limited in handling irregular holes and more likely to generate inpainting results with color discrepancy and blurriness. Partial convolution has been suggested to address this issue, but it adopts handcrafted feature re-normalization, and only considers forward mask-updating. In this paper, we present a learnable attention map module for learning feature renormalization and mask-updating in an end-to-end manner, which is effective in adapting to irregular holes and propagation of convolution layers. Furthermore, learnable reverse attention maps are introduced to allow the decoder of U-Net to concentrate on filling in irregular holes instead of reconstructing both holes and known regions, resulting in our learnable bidirectional attention maps. Qualitative and quantitative experiments show that our method performs favorably against state-of-the-arts in generating sharper, more coherent and visually plausible inpainting results. The source code and pre-trained models will be available.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Image Inpainting"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["Paris StreetView"], "metric": ["40-50% Mask PSNR", "20-30% Mask PSNR", "30-40% Mask PSNR", "10-20% Mask PSNR"], "title": "Image Inpainting with Learnable Bidirectional Attention Maps"} {"abstract": "Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution through the construction of a clustered space. With such representations as guidance, we use an adversarial learning scheme to push the feature representations of target patches in the clustered space closer to the distributions of source patches. 
In addition, we show that our framework is complementary to existing domain adaptation techniques and achieves consistent improvements on semantic segmentation. Extensive ablations and results are demonstrated on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Domain Adaptation for Structured Output via Discriminative Patch Representations"} {"abstract": "The celebrated Sequence to Sequence learning (Seq2Seq) technique and its\nnumerous variants achieve excellent performance on many tasks. However, many\nmachine learning tasks have inputs naturally represented as graphs; existing\nSeq2Seq models face a significant challenge in achieving accurate conversion\nfrom graph form to the appropriate sequence. To address this challenge, we\nintroduce a novel general end-to-end graph-to-sequence neural encoder-decoder\nmodel that maps an input graph to a sequence of vectors and uses an\nattention-based LSTM method to decode the target sequence from these vectors.\nOur method first generates the node and graph embeddings using an improved\ngraph-based neural network with a novel aggregation strategy to incorporate\nedge direction information in the node embeddings. We further introduce an\nattention mechanism that aligns node embeddings and the decoding sequence to\nbetter cope with large graphs. Experimental results on bAbI, Shortest Path, and\nNatural Language Generation tasks demonstrate that our model achieves\nstate-of-the-art performance and significantly outperforms existing graph\nneural networks, Seq2Seq, and Tree2Seq models; using the proposed\nbi-directional node embedding aggregation strategy, the model can converge\nrapidly to the optimal performance.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Graph-to-Sequence", "SQL-to-Text", "Text Generation"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["WikiSQL"], "metric": ["BLEU-4"], "title": "Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks"} {"abstract": "Recurrent neural networks (RNNs) have been widely used for processing\nsequential data. However, RNNs are commonly difficult to train due to the\nwell-known gradient vanishing and exploding problems and hard to learn\nlong-term patterns. Long short-term memory (LSTM) and gated recurrent unit\n(GRU) were developed to address these problems, but the use of hyperbolic\ntangent and the sigmoid action functions results in gradient decay over layers.\nConsequently, construction of an efficiently trainable deep network is\nchallenging. In addition, all the neurons in an RNN layer are entangled\ntogether and their behaviour is hard to interpret. 
To address these problems, a\nnew type of RNN, referred to as independently recurrent neural network\n(IndRNN), is proposed in this paper, where neurons in the same layer are\nindependent of each other and they are connected across layers. We have shown\nthat an IndRNN can be easily regulated to prevent the gradient exploding and\nvanishing problems while allowing the network to learn long-term dependencies.\nMoreover, an IndRNN can work with non-saturated activation functions such as\nrelu (rectified linear unit) and be still trained robustly. Multiple IndRNNs\ncan be stacked to construct a network that is deeper than the existing RNNs.\nExperimental results have shown that the proposed IndRNN is able to process\nvery long sequences (over 5000 time steps), can be used to construct very deep\nnetworks (21 layers used in the experiment) and still be trained robustly.\nBetter performances have been achieved on various tasks by using IndRNNs\ncompared with the traditional RNN and LSTM. The code is available at\nhttps://github.com/Sunnydreamrain/IndRNN_Theano_Lasagne.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Language Modelling", "Sequential Image Classification", "Skeleton Based Action Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NTU RGB+D", "Penn Treebank (Character Level)", "Sequential MNIST"], "metric": ["Accuracy (CS)", "Unpermuted Accuracy", "Bit per Character (BPC)", "Permuted Accuracy", "Accuracy (CV)"], "title": "Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN"} {"abstract": "Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph related tasks. However, existing GNN models mainly focus on designing graph convolution operations. The graph pooling (or downsampling) operations, that play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of graph's topological information, we further introduce a structure learning mechanism to learn a refined graph structure for the pooled graph at each layer. By combining HGP-SL operator with graph neural networks, we perform graph level representation learning with focus on graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.", "field": ["Convolutions"], "task": ["Graph Classification", "Representation Learning"], "method": ["Convolution"], "dataset": ["NCI109", "ENZYMES", "PROTEINS", "D&D", "NCI1", "Mutagenicity"], "metric": ["Accuracy"], "title": "Hierarchical Graph Pooling with Structure Learning"} {"abstract": "In this paper, we explore the space-time video super-resolution task, which aims to generate a high-resolution (HR) slow-motion video from a low frame rate (LFR), low-resolution (LR) video. 
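A minimal NumPy sketch of the element-wise recurrence from the IndRNN record above: each neuron keeps a single recurrent weight, so neurons within a layer are independent of each other, and a non-saturated activation such as ReLU can be used. Shapes are illustrative.

```python
import numpy as np

def indrnn_layer(x_seq, W, u, b):
    """x_seq: (T, D_in); W: (D_in, D); u, b: (D,) per-neuron recurrent weight and bias."""
    h = np.zeros(W.shape[1])
    outputs = []
    for x_t in x_seq:
        # Element-wise (Hadamard) recurrence instead of a full hidden-to-hidden matrix,
        # so each neuron only sees its own previous state; ReLU keeps it non-saturated.
        h = np.maximum(0.0, x_t @ W + u * h + b)
        outputs.append(h)
    return np.stack(outputs)
```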
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial super-resolution are intra-related in this task. Two-stage methods cannot fully take advantage of the natural property. In addition, state-of-the-art VFI or VSR networks require a large frame-synthesis or reconstruction module for predicting high-quality video frames, which makes the two-stage methods have large model sizes and thus be time-consuming. To overcome the problems, we propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video. Rather than synthesizing missing LR video frames as VFI networks do, we firstly temporally interpolate LR frame features in missing LR video frames capturing local temporal contexts by the proposed feature temporal interpolation network. Then, we propose a deformable ConvLSTM to align and aggregate temporal information simultaneously for better leveraging global temporal contexts. Finally, a deep reconstruction network is adopted to predict HR slow-motion video frames. Extensive experiments on benchmark datasets demonstrate that the proposed method not only achieves better quantitative and qualitative performance but also is more than three times faster than recent two-stage state-of-the-art methods, e.g., DAIN+EDVR and DAIN+RBPN.", "field": ["Convolutions", "Activation Functions", "Recurrent Neural Networks"], "task": ["Super-Resolution", "Video Frame Interpolation", "Video Super-Resolution"], "method": ["Tanh Activation", "ConvLSTM", "Sigmoid Activation", "Convolution"], "dataset": ["Vid4 - 4x upscaling"], "metric": ["runtime (s)", "SSIM", "Parameters", "PSNR"], "title": "Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution"} {"abstract": "The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the \"Squeeze-and-Excitation\" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. 
Models and code are available at https://github.com/hujie-frank/SENet.", "field": ["Image Data Augmentation", "Initialization", "Output Functions", "Convolutional Neural Networks", "Stochastic Optimization", "Learning Rate Schedules", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks"], "task": ["Image Classification"], "method": ["Squeeze-and-Excitation Block", "SGD with Momentum", "Average Pooling", "Softmax", "Random Horizontal Flip", "Random Resized Crop", "Sigmoid Activation", "Step Decay", "Convolution", "Rectified Linear Units", "ReLU", "SENet", "Kaiming Initialization", "Global Average Pooling", "Dense Connections", "Max Pooling"], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Squeeze-and-Excitation Networks"} {"abstract": "Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.", "field": ["Generative Models", "Convolutions", "Normalization"], "task": ["Image Generation"], "method": ["Weight Normalization", "Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Adversarial Lipschitz Regularization"} {"abstract": "Semi-structured text generation is a non-trivial problem. Although last years have brought lots of improvements in natural language generation, thanks to the development of neural models trained on large scale datasets, these approaches still struggle with producing structured, context- and commonsense-aware texts. Moreover, it is not clear how to evaluate the quality of generated texts. To address these problems, we introduce RecipeNLG - a novel dataset of cooking recipes. We discuss the data collection process and the relation between the semi-structured texts and cooking recipes. We use the dataset to approach the problem of generating recipes. 
Finally, we make use of multiple metrics to evaluate the generated recipes.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Attention Mechanisms", "Feedforward Networks", "Transformers", "Fine-Tuning", "Skip Connections"], "task": ["Named Entity Recognition", "Recipe Generation", "Text Generation"], "method": ["Weight Decay", "Cosine Annealing", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Discriminative Fine-Tuning", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GPT-2", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["RecipeNLG"], "metric": ["BLEU", "Word Error Rate (WER)", "GLEU"], "title": "RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation"} {"abstract": "Flow-based generative models are composed of invertible transformations between two random variables of the same dimension. Therefore, flow-based models cannot be adequately trained if the dimension of the data distribution does not match that of the underlying target distribution. In this paper, we propose SoftFlow, a probabilistic framework for training normalizing flows on manifolds. To sidestep the dimension mismatch problem, SoftFlow estimates a conditional distribution of the perturbed input data instead of learning the data distribution directly. We experimentally show that SoftFlow can capture the innate structure of the manifold data and generate high-quality samples unlike the conventional flow-based models. Furthermore, we apply the proposed framework to 3D point clouds to alleviate the difficulty of forming thin structures for flow-based models. The proposed model for 3D point clouds, namely SoftPointFlow, can estimate the distribution of various shapes more accurately and achieves state-of-the-art performance in point cloud generation.", "field": ["Distribution Approximation"], "task": ["Point Cloud Generation"], "method": ["Normalizing Flows"], "dataset": ["ShapeNet Chair", "ShapeNet Car", "ShapeNet Airplane"], "metric": ["1-NNA-CD"], "title": "SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds"} {"abstract": "We present a new, embarrassingly simple approach to instance segmentation in images. Compared to many other dense prediction tasks, e.g., semantic segmentation, it is the arbitrary number of instances that have made instance segmentation much more challenging. In order to predict a mask for each instance, mainstream approaches either follow the 'detect-thensegment' strategy as used by Mask R-CNN, or predict category masks first then use clustering techniques to group pixels into individual instances. We view the task of instance segmentation from a completely new perspective by introducing the notion of \"instance categories\", which assigns categories to each pixel within an instance according to the instance's location and size, thus nicely converting instance mask segmentation into a classification-solvable problem. Now instance segmentation is decomposed into two classification tasks. We demonstrate a much simpler and flexible instance segmentation framework with strong performance, achieving on par accuracy with Mask R-CNN and outperforming recent singleshot instance segmenters in accuracy. 
We hope that this very simple and strong framework can serve as a baseline for many instance-level recognition tasks besides instance segmentation.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "RoIAlign", "Mask R-CNN", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "SOLO: Segmenting Objects by Locations"} {"abstract": "Deep learning with 3D data has progressed significantly since the introduction of convolutional neural networks that can handle point order ambiguity in point cloud data. While being able to achieve good accuracies in various scene understanding tasks, previous methods often have low training speed and complex network architecture. In this paper, we address these problems by proposing an efficient end-to-end permutation invariant convolution for point cloud deep learning. Our simple yet effective convolution operator named ShellConv uses statistics from concentric spherical shells to define representative features and resolve the point order ambiguity, allowing traditional convolution to perform on such features. Based on ShellConv we further build an efficient neural network named ShellNet to directly consume the point clouds with larger receptive fields while maintaining less layers. We demonstrate the efficacy of ShellNet by producing state-of-the-art results on object classification, object part segmentation, and semantic scene segmentation while keeping the network very fast to train.", "field": ["Convolutions"], "task": ["Semantic Segmentation"], "method": ["Convolution"], "dataset": ["Semantic3D", "S3DIS"], "metric": ["Mean IoU", "oAcc", "mIoU"], "title": "ShellNet: Efficient Point Cloud Convolutional Neural Networks using Concentric Shells Statistics"} {"abstract": "Recurrent neural networks (RNNs) sequentially process data by updating their\nstate with each new data point, and have long been the de facto choice for\nsequence modeling tasks. However, their inherently sequential computation makes\nthem slow to train. Feed-forward and convolutional architectures have recently\nbeen shown to achieve superior results on some sequence modeling tasks such as\nmachine translation, with the added advantage that they concurrently process\nall inputs in the sequence, leading to easy parallelization and faster training\ntimes. Despite these successes, however, popular feed-forward sequence models\nlike the Transformer fail to generalize in many simple tasks that recurrent\nmodels handle with ease, e.g. copying strings or even simple logical inference\nwhen the string or formula lengths exceed those observed at training time. We\npropose the Universal Transformer (UT), a parallel-in-time self-attentive\nrecurrent sequence model which can be cast as a generalization of the\nTransformer model and which addresses these issues. 
UTs combine the\nparallelizability and global receptive field of feed-forward sequence models\nlike the Transformer with the recurrent inductive bias of RNNs. We also add a\ndynamic per-position halting mechanism and find that it improves accuracy on\nseveral tasks. In contrast to the standard Transformer, under certain\nassumptions, UTs can be shown to be Turing-complete. Our experiments show that\nUTs outperform standard Transformers on a wide range of algorithmic and\nlanguage understanding tasks, including the challenging LAMBADA language\nmodeling task where UTs achieve a new state of the art, and machine translation\nwhere UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De\ndataset.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Learning to Execute", "Machine Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Attention Dropout", "Rectified Linear Units", "ReLU", "Residual Connection", "Universal Transformer", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["LAMBADA", "WMT2014 English-German"], "metric": ["BLEU score", "Accuracy"], "title": "Universal Transformers"} {"abstract": "State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. 
We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Object Detection", "Real-Time Object Detection", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Dense Block", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Concatenated Skip Connection", "Bottleneck Residual Block", "Dropout", "DenseNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO", "Cityscapes test"], "metric": ["Time (ms)", "FPS", "MAP", "mIoU", "inference time (ms)", "Frame (fps)"], "title": "HarDNet: A Low Memory Traffic Network"} {"abstract": "We focus on the task of amodal 3D object detection in RGB-D images, which\naims to produce a 3D bounding box of an object in metric form at its full\nextent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a\n3D volumetric scene from a RGB-D image as input and outputs 3D object bounding\nboxes. In our approach, we propose the first 3D Region Proposal Network (RPN)\nto learn objectness from geometric shapes and the first joint Object\nRecognition Network (ORN) to extract geometric features in 3D and color\nfeatures in 2D. In particular, we handle objects of various sizes by training\nan amodal RPN at two different scales and an ORN to regress 3D bounding boxes.\nExperiments show that our algorithm outperforms the state-of-the-art by 13.8 in\nmAP and is 200x faster than the original Sliding Shapes. All source code and\npre-trained models will be available at GitHub.", "field": ["Region Proposal"], "task": ["3D Object Detection", "Object Detection", "Object Recognition", "Region Proposal"], "method": ["Region Proposal Network", "RPN"], "dataset": ["SUN-RGBD val"], "metric": ["MAP"], "title": "Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images"} {"abstract": "Neural network models for many NLP tasks have grown increasingly complex in recent years, making training and deployment more difficult. A number of recent papers have questioned the necessity of such architectures and found that well-executed, simpler models are quite effective. We show that this is also the case for document classification: in a large-scale reproducibility study of several recent neural models, we find that a simple BiLSTM architecture with appropriate regularization yields accuracy and F1 that are either competitive or exceed the state of the art on four standard benchmark datasets. Surprisingly, our simple model is able to achieve these results without attention mechanisms. While these regularization techniques, borrowed from language modeling, are not novel, to our knowledge we are the first to apply them in this context. 
Our work provides an open-source platform and the foundation for future work in document classification.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Document Classification", "Language Modelling"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["IMDb-M", "Yelp-5", "Reuters-21578"], "metric": ["F1", "Accuracy"], "title": "Rethinking Complex Neural Network Architectures for Document Classification"} {"abstract": "We present a method for simultaneously estimating 3D human pose and body\nshape from a sparse set of wide-baseline camera views. We train a symmetric\nconvolutional autoencoder with a dual loss that enforces learning of a latent\nrepresentation that encodes skeletal joint positions, and at the same time\nlearns a deep representation of volumetric body shape. We harness the latter to\nup-scale input volumetric data by a factor of $4 \\times$, whilst recovering a\n3D estimate of joint positions with equal or greater accuracy than the state of\nthe art. Inference runs in real-time (25 fps) and has the potential for passive\nhuman behaviour monitoring where there is a requirement for high fidelity\nestimation of human body shape and pose.", "field": ["Generative Models"], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": ["AutoEncoder"], "dataset": ["Total Capture"], "metric": ["Average MPJPE (mm)"], "title": "Deep Autoencoder for Combined Human Pose Estimation and body Model Upscaling"} {"abstract": "Graph Convolutional Networks (GCNs) have been widely studied for graph data\nrepresentation and learning tasks. Existing GCNs generally use a fixed single\ngraph which may lead to weak suboptimal for data representation/learning and\nare also hard to deal with multiple graphs. To address these issues, we propose\na novel Graph Optimized Convolutional Network (GOCN) for graph data\nrepresentation and learning. Our GOCN is motivated based on our\nre-interpretation of graph convolution from a regularization/optimization\nframework. The core idea of GOCN is to formulate graph optimization and graph\nconvolutional representation into a unified framework and thus conducts both of\nthem cooperatively to boost their respective performance in GCN learning\nscheme. Moreover, based on the proposed unified graph optimization-convolution\nframework, we propose a novel Multiple Graph Optimized Convolutional Network\n(M-GOCN) to naturally address the data with multiple graphs. Experimental\nresults demonstrate the effectiveness and benefit of the proposed GOCN and\nM-GOCN.", "field": ["Convolutions", "Graph Models"], "task": ["Node Classification", "Representation Learning"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Graph Optimized Convolutional Networks"} {"abstract": "Efficient exploration in complex environments remains a major challenge for\nreinforcement learning. We propose bootstrapped DQN, a simple algorithm that\nexplores in a computationally and statistically efficient manner through use of\nrandomized value functions. Unlike dithering strategies such as epsilon-greedy\nexploration, bootstrapped DQN carries out temporally-extended (or deep)\nexploration; this can lead to exponentially faster learning. We demonstrate\nthese benefits in complex stochastic MDPs and in the large-scale Arcade\nLearning Environment. 
Bootstrapped DQN substantially improves learning times\nand performance across most Atari games.", "field": ["Q-Learning Networks", "Convolutions", "Feedforward Networks", "Off-Policy TD Control"], "task": ["Atari Games", "Efficient Exploration"], "method": ["Q-Learning", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Deep Exploration via Bootstrapped DQN"} {"abstract": "Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use VideoBERT in numerous tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. 
Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Action Classification", "Action Classification ", "Language Modelling", "Quantization", "Representation Learning", "Self-Supervised Learning", "Speech Recognition", "Video Captioning"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "VideoBERT", "Gaussian Linear Error Units"], "dataset": ["YouCook2"], "metric": ["METEOR", "Object Top-1 Accuracy", "Object Top 5 Accuracy", "CIDEr", "BLEU-3", "Verb Top-5 Accuracy", "Verb Top-1 Accuracy", "ROUGE-L", "BLEU-4"], "title": "VideoBERT: A Joint Model for Video and Language Representation Learning"} {"abstract": "The recently proposed deep clustering framework represents a significant step towards solving the cocktail party problem. This study proposes and compares a variety of alternative objective functions for training deep clustering networks. In addition, whereas the original deep clustering work relied on k-means clustering for test-time inference, here we investigate inference methods that are matched to the training objective. Furthermore, we explore the use of an improved chimera network architecture for speech separation, which combines deep clustering with mask-inference networks in a multiobjective training scheme. The deep clustering loss acts as a regularizer while training the end-to-end mask inference network for best separation. With further iterative phase reconstruction, our best proposed method achieves a state-of-the-art 11.5 dB signal-to-distortion ratio (SDR) result on the publicly available wsj0-2mix dataset, with a much simpler architecture than the previous best approach.", "field": ["Clustering"], "task": ["Deep Clustering", "Speech Separation"], "method": ["k-Means Clustering"], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Alternative Objective Functions for Deep Clustering"} {"abstract": "Boundary and edge cues are highly beneficial in improving a wide variety of\nvision tasks such as semantic segmentation, object recognition, stereo, and\nobject proposal generation. Recently, the problem of edge detection has been\nrevisited and significant progress has been made with deep learning. While\nclassical edge detection is a challenging binary problem in itself, the\ncategory-aware semantic edge detection by nature is an even more challenging\nmulti-label problem. We model the problem such that each edge pixel can be\nassociated with more than one class as they appear in contours or junctions\nbelonging to two or more semantic classes. To this end, we propose a novel\nend-to-end deep semantic edge learning architecture based on ResNet and a new\nskip-layer architecture where category-wise edge activations at the top\nconvolution layer share and are fused with the same set of bottom layer\nfeatures. We then propose a multi-label loss function to supervise the fused\nactivations. 
We show that our proposed architecture benefits this problem with\nbetter performance, and we outperform the current state-of-the-art semantic\nedge detection methods by a large margin on standard data sets such as SBD and\nCityscapes.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Edge Detection", "Object Proposal Generation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SBD", "Cityscapes test"], "metric": ["Maximum F-measure", "AP"], "title": "CASENet: Deep Category-Aware Semantic Edge Detection"} {"abstract": "Learning representation on graph plays a crucial role in numerous tasks of\npattern recognition. Different from grid-shaped images/videos, on which local\nconvolution kernels can be lattices, however, graphs are fully coordinate-free\non vertices and edges. In this work, we propose a Gaussian-induced convolution\n(GIC) framework to conduct local convolution filtering on irregular graphs.\nSpecifically, an edge-induced Gaussian mixture model is designed to encode\nvariations of subgraph region by integrating edge information into weighted\nGaussian models, each of which implicitly characterizes one component of\nsubgraph variations. In order to coarsen a graph, we derive a vertex-induced\nGaussian mixture model to cluster vertices dynamically according to the\nconnection of edges, which is approximately equivalent to the weighted graph\ncut. We conduct our multi-layer graph convolution network on several public\ndatasets of graph classification. The extensive experiments demonstrate that\nour GIC is effective and can achieve the state-of-the-art results.", "field": ["Convolutions"], "task": ["Graph Classification", "Learning Representation On Graph"], "method": ["Convolution"], "dataset": ["NCI109", "ENZYMES", "PROTEINS", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Gaussian-Induced Convolution for Graphs"} {"abstract": "Optimization methods (optimizers) get special attention for the efficient training of neural networks in the field of deep learning. In literature there are many papers that compare neural models trained with the use of different optimizers. Each paper demonstrates that for a particular problem an optimizer is better than the others but as the problem changes this type of result is no longer valid and we have to start from scratch. In our paper we propose to use the combination of two very different optimizers but when used simultaneously they can overcome the performances of the single optimizers in very different problems. We propose a new optimizer called MAS (Mixing ADAM and SGD) that integrates SGD and ADAM simultaneously by weighing the contributions of both through the assignment of constant weights. Rather than trying to improve SGD or ADAM we exploit both at the same time by taking the best of both. We have conducted several experiments on images and text document classification, using various CNNs, and we demonstrated by experiments that the proposed MAS optimizer produces better performance than the single SGD or ADAM optimizers. 
The source code and all the results of the experiments are available online at the following link https://gitlab.com/nicolalandro/multi\\_optimizer", "field": ["Stochastic Optimization"], "task": ["Document Classification", "Stochastic Optimization"], "method": ["Stochastic Gradient Descent", "Mixing Adam and SGD", "Adam", "MAS", "SGD"], "dataset": ["CoLA", "AG News", "CIFAR-100", "CIFAR-10"], "metric": ["Accuracy (max)", "Accuracy (mean)"], "title": "Mixing ADAM and SGD: a Combined Optimization Method"} {"abstract": "ICD coding from electronic clinical records is a manual, time-consuming and expensive process. Code assignment is, however, an important task for billing purposes and database organization. While many works have studied the problem of automated ICD coding from free text using machine learning techniques, most use records in the English language, especially from the MIMIC-III public dataset. This work presents results for a dataset with Brazilian Portuguese clinical notes. We develop and optimize a Logistic Regression model, a Convolutional Neural Network (CNN), a Gated Recurrent Unit Neural Network and a CNN with Attention (CNN-Att) for prediction of diagnosis ICD codes. We also report our results for the MIMIC-III dataset, which outperform previous work among models of the same families, as well as the state of the art. Compared to MIMIC-III, the Brazilian Portuguese dataset contains far fewer words per document, when only discharge summaries are used. We experiment concatenating additional documents available in this dataset, achieving a great boost in performance. The CNN-Att model achieves the best results on both datasets, with micro-averaged F1 score of 0.537 on MIMIC-III and 0.485 on our dataset with additional documents.", "field": ["Generalized Linear Models"], "task": ["Multi-Label Classification Of Biomedical Texts", "Regression"], "method": ["Logistic Regression"], "dataset": ["MIMIC-III"], "metric": ["Micro F1"], "title": "Predicting Multiple ICD-10 Codes from Brazilian-Portuguese Clinical Notes"} {"abstract": "Accurate computer-aided polyp detection and segmentation during colonoscopy examinations can help endoscopists resect abnormal tissue and thereby decrease chances of polyps growing into cancer. Towards developing a fully automated model for pixel-wise polyp segmentation, we propose ResUNet++, which is an improved ResUNet architecture for colonoscopic image segmentation. Our experimental evaluations show that the suggested architecture produces good segmentation results on publicly available datasets. 
Furthermore, ResUNet++ significantly outperforms U-Net and ResUNet, two key state-of-the-art deep learning architectures, by achieving high evaluation scores with a dice coefficient of 81.33%, and a mean Intersection over Union (mIoU) of 79.27% for the Kvasir-SEG dataset and a dice coefficient of 79.55%, and a mIoU of 79.62% with CVC-612 dataset.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Medical Image Segmentation", "Semantic Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["ETIS-LARIBPOLYPDB", "Kvasir-SEG", "CVC-VideoClinicDB", "CVC-ColonDB", "CVC-ClinicDB", "ASU-Mayo Clinic dataset "], "metric": ["Recall", "mean Dice", "precision", "Precision", "mIoU", "Dice", "DSC"], "title": "ResUNet++: An Advanced Architecture for Medical Image Segmentation"} {"abstract": "We consider learning representations of entities and relations in KBs using\nthe neural-embedding approach. We show that most existing models, including NTN\n(Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized\nunder a unified learning framework, where entities are low-dimensional vectors\nlearned from a neural network and relations are bilinear and/or linear mapping\nfunctions. Under this framework, we compare a variety of embedding models on\nthe link prediction task. We show that a simple bilinear formulation achieves\nnew state-of-the-art results for the task (achieving a top-10 accuracy of 73.2%\nvs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach\nthat utilizes the learned relation embeddings to mine logical rules such as\n\"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that\nembeddings learned from the bilinear objective are particularly good at\ncapturing relational semantics and that the composition of relations is\ncharacterized by matrix multiplication. More interestingly, we demonstrate that\nour embedding-based rule extraction approach successfully outperforms a\nstate-of-the-art confidence-based rule mining approach in mining Horn rules\nthat involve compositional reasoning.", "field": ["Graph Embeddings"], "task": ["Link Prediction"], "method": ["TransE"], "dataset": ["WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Embedding Entities and Relations for Learning and Inference in Knowledge Bases"} {"abstract": "Sparsity in Deep Neural Networks (DNNs) is studied extensively with the focus of maximizing prediction accuracy given an overall parameter budget. Existing methods rely on uniform or heuristic non-uniform sparsity budgets which have sub-optimal layer-wise parameter allocation resulting in a) lower prediction accuracy or b) higher inference cost (FLOPs). This work proposes Soft Threshold Reparameterization (STR), a novel use of the soft-threshold operator on DNN weights. STR smoothly induces sparsity while learning pruning thresholds thereby obtaining a non-uniform sparsity budget. Our method achieves state-of-the-art accuracy for unstructured sparsity in CNNs (ResNet50 and MobileNetV1 on ImageNet-1K), and, additionally, learns non-uniform budgets that empirically reduce the FLOPs by up to 50%. Notably, STR boosts the accuracy over existing results by up to 10% in the ultra sparse (99%) regime and can also be used to induce low-rank (structured sparsity) in RNNs. 
In short, STR is a simple mechanism which learns effective sparsity budgets that contrast with popular heuristics. Code, pretrained models and sparsity budgets are at https://github.com/RAIVNLab/STR.", "field": ["Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Network Pruning"], "method": ["Depthwise Convolution", "MobileNetV1", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Depthwise Separable Convolution", "Pointwise Convolution", "Dense Connections", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet - ResNet 50 - 90% sparsity"], "metric": ["Top-1 Accuracy"], "title": "Soft Threshold Weight Reparameterization for Learnable Sparsity"} {"abstract": "Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code of this paper can be obtained from https://github.com/thunlp/ERNIE.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Entity Linking", "Entity Typing", "Knowledge Graphs", "Linguistic Acceptability", "Natural Language Inference", "Paraphrase Identification", "Relation Extraction", "Semantic Textual Similarity", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "FewRel", "TACRED", "SST-2 Binary classification", "RTE", "MRPC", "STS Benchmark", "FIGER", "CoLA", "QNLI", " Open Entity", "Quora Question Pairs"], "metric": ["Pearson Correlation", "Macro F1", "Recall", "Precision", "Matched", "F1", "Accuracy", "Mismatched", "Micro F1"], "title": "ERNIE: Enhanced Language Representation with Informative Entities"} {"abstract": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. 
This paper proposes a principled multi-task learning approach based on the Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, our algorithm achieves an impressive accuracy of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.", "field": ["Non-Parametric Classification"], "task": ["Face Recognition", "Face Verification", "Multi-Task Learning"], "method": ["Gaussian Process"], "dataset": ["Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "Surpassing Human-Level Face Verification Performance on LFW with GaussianFace"} {"abstract": "We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data. Our BERTweet is available at https://github.com/VinAIResearch/BERTweet", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Named Entity Recognition", "Part-Of-Speech Tagging", "Text Classification"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "RoBERTa", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Ritter", "Tweebank"], "metric": ["Acc"], "title": "BERTweet: A pre-trained language model for English Tweets"} {"abstract": "This work presents an analysis of the discriminators used in Generative Adversarial Networks (GANs) for Video. We show that unconstrained video discriminator architectures induce a loss surface with high curvature which makes optimisation difficult. We also show that this curvature becomes more extreme as the maximal kernel dimension of video discriminators increases. With these observations in hand, we propose a family of efficient Lower-Dimensional Video Discriminators for GANs (LDVD GANs). The proposed family of discriminators improve the performance of video GAN models they are applied to and demonstrate good performance on complex and diverse datasets such as UCF-101. 
In particular, we show that they can double the performance of Temporal-GANs and provide for state-of-the-art performance on a single GPU.", "field": ["Generative Models", "Convolutions"], "task": ["Video Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["UCF-101 16 frames, 64x64, Unconditional", "UCF-101 16 frames, Unconditional, Single GPU", "UCF-101 16 frames, 128x128, Unconditional"], "metric": ["Inception Score"], "title": "Lower Dimensional Kernels for Video Discriminators"} {"abstract": "We present state-of-the-art automatic speech recognition (ASR) systems employing a standard hybrid DNN/HMM architecture compared to an attention-based encoder-decoder design for the LibriSpeech task. Detailed descriptions of the system development, including model design, pretraining schemes, training schedules, and optimization approaches are provided for both system architectures. Both hybrid DNN/HMM and attention-based systems employ bi-directional LSTMs for acoustic modeling/encoding. For language modeling, we employ both LSTM and Transformer based architectures. All our systems are built using RWTHs open-source toolkits RASR and RETURNN. To the best knowledge of the authors, the results obtained when training on the full LibriSpeech training set, are the best published currently, both for the hybrid DNN/HMM and the attention-based systems. Our single hybrid system even outperforms previous results obtained from combining eight single systems. Our comparison shows that on the LibriSpeech 960h task, the hybrid DNN/HMM system outperforms the attention-based system by 15% relative on the clean and 40% relative on the other test sets in terms of word error rate. Moreover, experiments on a reduced 100h-subset of the LibriSpeech training corpus even show a more pronounced margin between the hybrid DNN/HMM and attention-based architectures.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Recurrent Neural Networks", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Long Short-Term Memory", "Multi-Head Attention", "Transformer", "Tanh Activation", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "LSTM", "Dense Connections", "Sigmoid Activation"], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation"} {"abstract": "Knowledge graph embedding aims to embed entities and relations of knowledge\ngraphs into low-dimensional vector spaces. Translating embedding methods regard\nrelations as the translation from head entities to tail entities, which achieve\nthe state-of-the-art results among knowledge graph embedding methods. However,\na major limitation of these methods is the time consuming training process,\nwhich may take several days or even weeks for large knowledge graphs, and\nresult in great difficulty in practical applications. 
In this paper, we propose\nan efficient parallel framework for translating embedding methods, called\nParTrans-X, which enables the methods to be paralleled without locks by\nutilizing the distinguished structures of knowledge graphs. Experiments on two\ndatasets with three typical translating embedding methods, i.e., TransE [3],\nTransH [17], and a more efficient variant TransE- AdaGrad [10] validate that\nParTrans-X can speed up the training process by more than an order of\nmagnitude.", "field": ["Graph Embeddings", "Stochastic Optimization"], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction"], "method": ["AdaGrad", "TransE"], "dataset": ["FB15k", "FB15k (filtered)", "WN18", "WN18 (filtered)"], "metric": ["Hits@10", "MR"], "title": "Efficient Parallel Translating Embedding For Knowledge Graphs"} {"abstract": "Accurate prediction of molecular properties is important for new compound design, which is a crucial step in drug discovery. In this paper, molecular graph data is utilized for property prediction based on graph convolution neural networks. In addition, a convolution spatial graph embedding layer (C-SGEL) is introduced to retain the spatial connection information on molecules. And, multiple C-SGELs are stacked to construct a convolution spatial graph embedding network (C-SGEN) for end-to-end representation learning. In order to enhance the robustness of the network, molecular fingerprints are also combined with C-SGEN to build a composite model for predicting molecular properties. Our comparative experiments have shown that our method is accurate and achieves the best results on some open benchmark datasets.", "field": ["Convolutions"], "task": ["Drug Discovery", "Graph Embedding", "Graph Regression", "Representation Learning"], "method": ["Convolution"], "dataset": ["Lipophilicity "], "metric": ["RMSE"], "title": "Molecule Property Prediction Based on Spatial Graph Embedding"} {"abstract": "LSTMs and other RNN variants have shown strong performance on character-level\nlanguage modeling. These models are typically trained using truncated\nbackpropagation through time, and it is common to assume that their success\nstems from their ability to remember long-term contexts. In this paper, we show\nthat a deep (64-layer) transformer model with fixed context outperforms RNN\nvariants by a large margin, achieving state of the art on two popular\nbenchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. 
To get good\nresults at this depth, we show that it is important to add auxiliary losses,\nboth at intermediate network layers and intermediate sequence positions.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["Text8", "enwik8", "Hutter Prize"], "metric": ["Number of params", "Bit per Character (BPC)"], "title": "Character-Level Language Modeling with Deeper Self-Attention"} {"abstract": "Compared with model architectures, the training process, which is also\ncrucial to the success of detectors, has received relatively less attention in\nobject detection. In this work, we carefully revisit the standard training\npractice of detectors, and find that the detection performance is often limited\nby the imbalance during the training process, which generally consists in three\nlevels - sample level, feature level, and objective level. To mitigate the\nadverse effects caused thereby, we propose Libra R-CNN, a simple but effective\nframework towards balanced learning for object detection. It integrates three\nnovel components: IoU-balanced sampling, balanced feature pyramid, and balanced\nL1 loss, respectively for reducing the imbalance at sample, feature, and\nobjective level. Benefitted from the overall balanced design, Libra R-CNN\nsignificantly improves the detection performance. Without bells and whistles,\nit achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN\nFaster R-CNN and RetinaNet respectively on MSCOCO.", "field": ["Feature Pyramid Blocks", "Object Detection Models", "Initialization", "Convolutional Neural Networks", "Learning Rate Schedules", "Feature Extractors", "Activation Functions", "Skip Connections", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Affinity Functions", "Prioritized Sampling", "Image Feature Extractors", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["Average Pooling", "IoU-Balanced Sampling", "Balanced Feature Pyramid", "Embedded Gaussian Affinity", "1x1 Convolution", "Libra R-CNN", "Convolution", "ReLU", "Balanced L1 Loss", "FPN", "Residual Connection", "Grouped Convolution", "Focal Loss", "Non-Local Operation", "Batch Normalization", "Non-Local Block", "Kaiming Initialization", "Step Decay", "ResNeXt Block", "ResNeXt", "Feature Pyramid Network", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Libra R-CNN: Towards Balanced Learning for Object Detection"} {"abstract": "In late fusion, each modality is processed in a separate unimodal Convolutional Neural Network (CNN) stream and the scores of each modality are fused at the end. Due to its simplicity late fusion is still the predominant approach in many state-of-the-art multimodal applications. 
In this paper, we present a simple neural network module for leveraging the knowledge from multiple modalities in convolutional neural networks. The proposed unit, named Multimodal Transfer Module (MMTM), can be added at different levels of the feature hierarchy, enabling slow modality fusion. Using squeeze and excitation operations, MMTM utilizes the knowledge of multiple modalities to recalibrate the channel-wise features in each CNN stream. Unlike other intermediate fusion methods, the proposed module could be used for feature modality fusion in convolution layers with different spatial dimensions. Another advantage of the proposed method is that it could be added among unimodal branches with minimum changes in their network architectures, allowing each branch to be initialized with existing pretrained weights. Experimental results show that our framework improves the recognition accuracy of well-known multimodal networks. We demonstrate state-of-the-art or competitive performance on four datasets that span the task domains of dynamic hand gesture recognition, speech enhancement, and action recognition with RGB and body joints.", "field": ["Convolutions"], "task": ["Action Recognition", "Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition", "Speech Enhancement"], "method": ["Convolution"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)"], "title": "MMTM: Multimodal Transfer Module for CNN Fusion"} {"abstract": "Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. 
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.", "field": ["Normalization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Attention Modules", "Regularization", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation", "Image Generation"], "method": ["Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Early Stopping", "RandAugment", "GAN Hinge Loss", "1x1 Convolution", "Path Length Regularization", "StyleGAN2", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Leaky ReLU", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Weight Demodulation", "Non-Local Block", "Softmax", "BigGAN", "R1 Regularization", "Residual Block", "Rectified Linear Units"], "dataset": ["FFHQ 1024 x 1024", "FFHQ 256 x 256", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Training Generative Adversarial Networks with Limited Data"} {"abstract": "Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets, large computing systems, and better neural network models, natural language processing (NLP) technology has made significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test set. 
The SqueezeBERT code will be released.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Linguistic Acceptability", "Natural Language Inference", "Sentiment Analysis", "Transfer Learning"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "SST-2 Binary classification", "CoLA", "RTE"], "metric": ["Mismatched", "Matched", "Accuracy"], "title": "SqueezeBERT: What can computer vision teach NLP about efficient neural networks?"} {"abstract": "AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks, yet it is sensitive to the operator space as well as hyper-parameters, and an improper setting may degenerate network optimization. This paper delves deep into the working mechanism, and reveals that AutoAugment may remove part of discriminative information from the training image and so insisting on the ground-truth label is no longer the best option. To relieve the inaccuracy of supervision, we make use of knowledge distillation that refers to the output of a teacher model to guide network training. Experiments are performed in standard image classification benchmarks, and demonstrate the effectiveness of our approach in suppressing noise of data augmentation and stabilizing training. Upon the cooperation of knowledge distillation and AutoAugment, we claim the new state-of-the-art on ImageNet classification with a top-1 accuracy of 85.8%.", "field": ["Image Data Augmentation", "Image Scaling Strategies", "Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Data Augmentation", "Image Classification", "Knowledge Distillation"], "method": ["Depthwise Convolution", "Weight Decay", "Average Pooling", "EfficientNet", "RMSProp", "Long Short-Term Memory", "RandAugment", "Tanh Activation", "1x1 Convolution", "AutoAugment", "Convolution", "ReLU", "ShakeDrop", "Dense Connections", "Swish", "FixRes", "Batch Normalization", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Sigmoid Activation", "Inverted Residual Block", "LSTM", "Dropout", "Depthwise Separable Convolution", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy"], "title": "Circumventing Outliers of AutoAugment with Knowledge Distillation"} {"abstract": "We propose a new network architecture, Gated Attention Networks (GaAN), for\nlearning on graphs. Unlike the traditional multi-head attention mechanism,\nwhich equally consumes all attention heads, GaAN uses a convolutional\nsub-network to control each attention head's importance. We demonstrate the\neffectiveness of GaAN on the inductive node classification problem. Moreover,\nwith GaAN as a building block, we construct the Graph Gated Recurrent Unit\n(GGRU) to address the traffic speed forecasting problem. 
Extensive experiments\non three real-world datasets show that our GaAN framework achieves\nstate-of-the-art results on both tasks.", "field": ["Attention Modules"], "task": ["Node Classification"], "method": ["Multi-Head Attention"], "dataset": ["PPI"], "metric": ["F1"], "title": "GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs"} {"abstract": "We propose Efficient Neural Architecture Search (ENAS), a fast and\ninexpensive approach for automatic model design. In ENAS, a controller learns\nto discover neural network architectures by searching for an optimal subgraph\nwithin a large computational graph. The controller is trained with policy\ngradient to select a subgraph that maximizes the expected reward on the\nvalidation set. Meanwhile, the model corresponding to the selected subgraph is\ntrained to minimize a canonical cross entropy loss. Thanks to parameter sharing\nbetween child models, ENAS is fast: it delivers strong empirical performances\nusing far fewer GPU-hours than all existing automatic model design approaches,\nand notably, 1000x less expensive than standard Neural Architecture Search. On\nthe Penn Treebank dataset, ENAS discovers a novel architecture that achieves a\ntest perplexity of 55.8, establishing a new state-of-the-art among all methods\nwithout post-training processing. On the CIFAR-10 dataset, ENAS designs novel\narchitectures that achieve a test error of 2.89%, which is on par with NASNet\n(Zoph et al., 2018), whose test error is 2.65%.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Language Modelling", "Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Penn Treebank (Word Level)", "CIFAR-10 Image Classification"], "metric": ["Percentage error", "Validation perplexity", "Test perplexity", "Params"], "title": "Efficient Neural Architecture Search via Parameter Sharing"} {"abstract": "Feature learning on point clouds has shown great promise, with the\nintroduction of effective and generalizable deep learning frameworks such as\npointnet++. Thus far, however, point features have been abstracted in an\nindependent and isolated manner, ignoring the relative layout of neighboring\npoints as well as their features. In the present article, we propose to\novercome this limitation by using spectral graph convolution on a local graph,\ncombined with a novel graph pooling strategy. In our approach, graph\nconvolution is carried out on a nearest neighbor graph constructed from a\npoint's neighborhood, such that features are jointly learned. We replace the\nstandard max pooling step with a recursive clustering and pooling strategy,\ndevised to aggregate information from within clusters of nodes that are close\nto one another in their spectral coordinates, leading to richer overall feature\ndescriptors. Through extensive experiments on diverse datasets, we show a\nconsistent demonstrable advantage for the tasks of both point set\nclassification and segmentation.", "field": ["Convolutions", "Pooling Operations"], "task": ["3D Point Cloud Classification"], "method": ["Max Pooling", "Convolution"], "dataset": ["ModelNet40"], "metric": ["Overall Accuracy"], "title": "Local Spectral Graph Convolution for Point Set Feature Learning"} {"abstract": "Many previous studies on relation extraction have focused on finding only one relation between two entities in a single sentence. 
However, a single sentence often contains multiple entities, and these entities form multiple relations. To resolve this problem, we propose a relation extraction model based on a dual pointer network with a multi-head attention mechanism. The proposed model finds n-to-1 subject-object relations by using a forward decoder called an object decoder. Then, it finds 1-to-n subject-object relations by using a backward decoder called a subject decoder. In the experiments with the ACE-05 dataset and the NYT dataset, the proposed model achieved state-of-the-art performance (F1-score of 80.5% on the ACE-05 dataset and 78.3% on the NYT dataset).", "field": ["Output Functions", "Attention Modules", "Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models", "Attention Mechanisms"], "task": ["Relation Extraction"], "method": ["Softmax", "Additive Attention", "Long Short-Term Memory", "Pointer Network", "Multi-Head Attention", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["ACE 2005"], "metric": ["Relation classification F1"], "title": "Relation Extraction among Multiple Entities Using a Dual Pointer Network with a Multi-Head Attention Mechanism"} {"abstract": "Differentiable Architecture Search (DARTS) is now a widely disseminated weight-sharing neural architecture search method. However, it suffers from well-known performance collapse due to an inevitable aggregation of skip connections. In this paper, we first disclose that its root cause lies in an unfair advantage in exclusive competition. Through experiments, we show that if either of two conditions is broken, the collapse disappears. Thereby, we present a novel approach called Fair DARTS where the exclusive competition is relaxed to be collaborative. Specifically, we let each operation's architectural weight be independent of others. Yet there is still an important issue of discretization discrepancy. We then propose a zero-one loss to push architectural weights towards zero or one, which approximates an expected multi-hot solution. Our experiments are performed on two mainstream search spaces, and we derive new state-of-the-art results on CIFAR-10 and ImageNet. Our code is available on https://github.com/xiaomi-automl/fairdarts .", "field": ["Recurrent Neural Networks", "Activation Functions", "Neural Architecture Search", "Output Functions"], "task": ["AutoML", "Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "Differentiable Architecture Search", "LSTM", "DARTS", "Sigmoid Activation"], "dataset": ["CIFAR-10"], "metric": ["Top-1 Error Rate", "Parameters", "Search Time (GPU days)", "FLOPS"], "title": "Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search"} {"abstract": "Recurrent neural networks (RNNs), such as long short-term memory networks\n(LSTMs), serve as a fundamental building block for many sequence learning\ntasks, including machine translation, language modeling, and question\nanswering. In this paper, we consider the specific problem of word-level\nlanguage modeling and investigate strategies for regularizing and optimizing\nLSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on\nhidden-to-hidden weights as a form of recurrent regularization. Further, we\nintroduce NT-ASGD, a variant of the averaged stochastic gradient method,\nwherein the averaging trigger is determined using a non-monotonic condition as\nopposed to being tuned by the user. 
Using these and other regularization\nstrategies, we achieve state-of-the-art word level perplexities on two data\nsets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the\neffectiveness of a neural cache in conjunction with our proposed model, we\nachieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and\n52.0 on WikiText-2.", "field": ["Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Language Model Components", "Parameter Sharing"], "task": ["Language Modelling"], "method": ["Activation Regularization", "AWD-LSTM", "Variational Dropout", "Non-monotonically Triggered ASGD", "ASGD Weight-Dropped LSTM", "Temporal Activation Regularization", "Long Short-Term Memory", "Neural Cache", "Tanh Activation", "NT-ASGD", "DropConnect", "Weight Tying", "LSTM", "Dropout", "Embedding Dropout", "Sigmoid Activation"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params"], "title": "Regularizing and Optimizing LSTM Language Models"} {"abstract": "We propose a novel method called deep convolutional decision jungle (CDJ) and\nits learning algorithm for image classification. The CDJ maintains the\nstructure of standard convolutional neural networks (CNNs), i.e. multiple\nlayers of multiple response maps fully connected. Each response map, or node, in\nboth the convolutional and fully-connected layers selectively responds to class\nlabels s.t. each data sample travels via a specific soft route of those\nactivated nodes. The proposed method CDJ automatically learns features, whereas\ndecision forests and jungles require pre-defined feature sets. Compared to\nCNNs, the method embeds the benefits of using data-dependent discriminative\nfunctions, which better handles multi-modal/heterogeneous data; further, the\nmethod offers more diverse sparse network responses, which in turn can be used\nfor cost-effective learning/classification. The network is learnt by combining\nconventional softmax and proposed entropy losses in each layer. The entropy\nloss, as used in decision tree growing, measures the purity of data activation\naccording to the class label distribution. The back-propagation rule for the\nproposed loss function is derived from stochastic gradient descent (SGD)\noptimization of CNNs. We show that our proposed method outperforms\nstate-of-the-art methods on three public image classification benchmarks and\none face verification dataset. We also demonstrate the use of auxiliary data\nlabels, when available, which helps our method to learn more discriminative\nrouting and representations and leads to improved classification.", "field": ["Output Functions"], "task": ["Face Verification", "Image Classification"], "method": ["Softmax"], "dataset": ["CIFAR-100"], "metric": ["Percentage correct"], "title": "Deep Convolutional Decision Jungle for Image Classification"} {"abstract": "We present the first deep learning model to successfully learn control\npolicies directly from high-dimensional sensory input using reinforcement\nlearning. The model is a convolutional neural network, trained with a variant\nof Q-learning, whose input is raw pixels and whose output is a value function\nestimating future rewards. We apply our method to seven Atari 2600 games from\nthe Arcade Learning Environment, with no adjustment of the architecture or\nlearning algorithm. 
We find that it outperforms all previous approaches on six\nof the games and surpasses a human expert on three of them.", "field": ["Q-Learning Networks", "Off-Policy TD Control", "Convolutions", "Feedforward Networks", "Behaviour Policies", "Replay Memory"], "task": ["Atari Games", "Q-Learning"], "method": ["Q-Learning", "Epsilon Greedy Exploration", "Convolution", "Experience Replay", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Beam Rider", "Atari 2600 Enduro", "Atari 2600 Seaquest", "Atari 2600 Breakout", "Atari 2600 Space Invaders", "Atari 2600 Pong", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Playing Atari with Deep Reinforcement Learning"} {"abstract": "This work is about recognizing human activities occurring in videos at\ndistinct semantic levels, including individual actions, interactions, and group\nactivities. The recognition is realized using a two-level hierarchy of Long\nShort-Term Memory (LSTM) networks, forming a feed-forward deep architecture,\nwhich can be trained end-to-end. In comparison with existing architectures of\nLSTMs, we make two key contributions giving the name to our approach as\nConfidence-Energy Recurrent Network -- CERN. First, instead of using the common\nsoftmax layer for prediction, we specify a novel energy layer (EL) for\nestimating the energy of our predictions. Second, rather than finding the\ncommon minimum-energy class assignment, which may be numerically unstable under\nuncertainty, we specify that the EL additionally computes the p-values of the\nsolutions, and in this way estimates the most confident energy minimum. The\nevaluation on the Collective Activity and Volleyball datasets demonstrates: (i)\nadvantages of our two contributions relative to the common softmax and\nenergy-minimization formulations and (ii) a superior performance relative to\nthe state-of-the-art approaches.", "field": ["Output Functions"], "task": ["Activity Recognition", "Group Activity Recognition"], "method": ["Softmax"], "dataset": ["Volleyball"], "metric": ["Accuracy"], "title": "CERN: Confidence-Energy Recurrent Network for Group Activity Recognition"} {"abstract": "While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. 
Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.", "field": ["Generative Models", "Distribution Approximation", "Normalization", "Bijective Transformation"], "task": ["Language Modelling"], "method": ["RealNVP", "Batch Normalization", "Normalizing Flows", "Affine Coupling"], "dataset": ["Text8", "Penn Treebank (Character Level)"], "metric": ["Bit per Character (BPC)"], "title": "Discrete Flows: Invertible Generative Models of Discrete Data"} {"abstract": "This paper presents Point Convolutional Neural Networks (PCNN): a novel\nframework for applying convolutional neural networks to point clouds. The\nframework consists of two operators: extension and restriction, mapping point\ncloud functions to volumetric functions and vice versa. A point cloud\nconvolution is defined by pull-back of the Euclidean volumetric convolution via\nan extension-restriction mechanism.\n The point cloud convolution is computationally efficient, invariant to the\norder of points in the point cloud, robust to different samplings and varying\ndensities, and translation invariant, that is, the same convolution kernel is\nused at all points. PCNN generalizes image CNNs and allows readily adapting\ntheir architectures to the point cloud setting.\n Evaluation of PCNN on three central point cloud learning benchmarks\nshows that it convincingly outperforms competing point cloud learning methods, and the vast\nmajority of methods working with more informative shape representations such as\nsurfaces and/or normals.", "field": ["Convolutions"], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Classify 3D Point Clouds"], "method": ["Convolution"], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Instance Average IoU"], "title": "Point Convolutional Neural Networks by Extension Operators"} {"abstract": "Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task, including the surprising superiority of Transformer in 13/15 ASR benchmarks in comparison with RNN. 
We are preparing to release Kaldi-style reproducible recipes using open source and publicly available datasets for all the ASR, ST, and TTS tasks for the community to succeed our exciting outcomes.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Speech Recognition"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["AISHELL-1", "LibriSpeech test-clean", "LibriSpeech test-other"], "metric": ["Word Error Rate (WER)"], "title": "A Comparative Study on Transformer vs RNN in Speech Applications"} {"abstract": "Incidental scene text spotting is considered one of the most difficult and\nvaluable challenges in the document analysis community. Most existing methods\ntreat text detection and recognition as separate tasks. In this work, we\npropose a unified end-to-end trainable Fast Oriented Text Spotting (FOTS)\nnetwork for simultaneous detection and recognition, sharing computation and\nvisual information among the two complementary tasks. Specially, RoIRotate is\nintroduced to share convolutional features between detection and recognition.\nBenefiting from convolution sharing strategy, our FOTS has little computation\noverhead compared to baseline text detection network, and the joint training\nmethod learns more generic features to make our method perform better than\nthese two-stage methods. Experiments on ICDAR 2015, ICDAR 2017 MLT, and ICDAR\n2013 datasets demonstrate that the proposed method outperforms state-of-the-art\nmethods significantly, which further allows us to develop the first real-time\noriented text spotting system which surpasses all previous state-of-the-art\nresults by more than 5% on ICDAR 2015 text spotting task while keeping 22.6\nfps.", "field": ["Convolutions"], "task": ["Scene Text", "Scene Text Detection", "Scene Text Recognition", "Text Spotting"], "method": ["Convolution"], "dataset": ["ICDAR 2017 MLT", "ICDAR 2015"], "metric": ["F-Measure", "Recall", "Precision"], "title": "FOTS: Fast Oriented Text Spotting with a Unified Network"} {"abstract": "Conventional neural architectures for sequential data present important limitations. Recurrent networks suffer from exploding and vanishing gradients, small effective memory horizons, and must be trained sequentially. Convolutional networks are unable to handle sequences of unknown size and their memory horizon must be defined a priori. In this work, we show that all these problems can be solved by formulating convolutional kernels in CNNs as continuous functions. The resulting Continuous Kernel Convolution (CKConv) allows us to model arbitrarily long sequences in a parallel manner, within a single operation, and without relying on any form of recurrence. We show that Continuous Kernel Convolutional Networks (CKCNNs) obtain state-of-the-art results in multiple datasets, e.g., permuted MNIST, and, thanks to their continuous nature, are able to handle non-uniformly sampled datasets and irregularly-sampled data natively. 
CKCNNs match or perform better than neural ODEs designed for these purposes in a much faster and simpler manner.", "field": ["Convolutions"], "task": ["Irregular Time Series", "Sequential Image Classification", "Time Series"], "method": ["Convolution"], "dataset": ["Sequential CIFAR-10", "Sequential MNIST", "Speech Commands"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy", "% Test Accuracy"], "title": "CKConv: Continuous Kernel Convolution For Sequential Data"} {"abstract": "We present a simple, highly modularized network architecture for image\nclassification. Our network is constructed by repeating a building block that\naggregates a set of transformations with the same topology. Our simple design\nresults in a homogeneous, multi-branch architecture that has only a few\nhyper-parameters to set. This strategy exposes a new dimension, which we call\n\"cardinality\" (the size of the set of transformations), as an essential factor\nin addition to the dimensions of depth and width. On the ImageNet-1K dataset,\nwe empirically show that even under the restricted condition of maintaining\ncomplexity, increasing cardinality is able to improve classification accuracy.\nMoreover, increasing cardinality is more effective than going deeper or wider\nwhen we increase the capacity. Our models, named ResNeXt, are the foundations\nof our entry to the ILSVRC 2016 classification task in which we secured 2nd\nplace. We further investigate ResNeXt on an ImageNet-5K set and the COCO\ndetection set, also showing better results than its ResNet counterpart. The\ncode and models are publicly available online.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Grouped Convolution", "Random Resized Crop", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Aggregated Residual Transformations for Deep Neural Networks"} {"abstract": "Very deep convolutional networks with hundreds of layers have led to\nsignificant reductions in error on competitive benchmarks. Although the\nunmatched expressiveness of the many layers can be highly desirable at test\ntime, training very deep networks comes with its own set of challenges. The\ngradients can vanish, the forward flow often diminishes, and the training time\ncan be painfully slow. To address these problems, we propose stochastic depth,\na training procedure that enables the seemingly contradictory setup to train\nshort networks and use deep networks at test time. We start with very deep\nnetworks but during training, for each mini-batch, randomly drop a subset of\nlayers and bypass them with the identity function. This simple approach\ncomplements the recent success of residual networks. It reduces training time\nsubstantially and improves the test error significantly on almost all data sets\nthat we used for evaluation. 
With stochastic depth we can increase the depth of\nresidual networks even beyond 1200 layers and still yield meaningful\nimprovements in test error (4.91% on CIFAR-10).", "field": ["Regularization"], "task": ["Image Classification"], "method": ["Stochastic Depth"], "dataset": ["SVHN", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Deep Networks with Stochastic Depth"} {"abstract": "For person re-identification, existing deep networks often focus on representation learning. However, without transfer learning, the learned model is fixed as is, which is not adaptable for handling various unseen scenarios. In this paper, beyond representation learning, we consider how to formulate person image matching directly in deep feature maps. We treat image matching as finding local correspondences in feature maps, and construct query-adaptive convolution kernels on the fly to achieve local matching. In this way, the matching process and results are interpretable, and this explicit matching is more generalizable than representation features to unseen scenarios, such as unknown misalignments, pose or viewpoint changes. To facilitate end-to-end training of this architecture, we further build a class memory module to cache feature maps of the most recent samples of each class, so as to compute image matching losses for metric learning. Through direct cross-dataset evaluation, the proposed Query-Adaptive Convolution (QAConv) method gains large improvements over popular learning methods (about 10%+ mAP), and achieves comparable results to many transfer learning methods. Besides, a model-free temporal cooccurrence based score weighting method called TLift is proposed, which improves the performance to a further extent, achieving state-of-the-art results in cross-dataset person re-identification. Code is available at https://github.com/ShengcaiLiao/QAConv.", "field": ["Convolutions"], "task": ["Domain Generalization", "Generalizable Person Re-identification", "Metric Learning", "Person Re-Identification"], "method": ["Convolution"], "dataset": ["MSMT17->DukeMTMC-reID", "Market-1501->MSMT17", "DukeMTMC-reID->MSMT17", "DukeMTMC-reID->Market-1501", "MSMT17->Market-1501", "Market-1501->DukeMTMC-reID"], "metric": ["Rank-1", "mAP"], "title": "Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting"} {"abstract": "This work proposes a novel Graph-based neural ArchiTecture Encoding Scheme, a.k.a. GATES, to improve the predictor-based neural architecture search. Specifically, different from existing graph-based schemes, GATES models the operations as the transformation of the propagating information, which mimics the actual data processing of neural architecture. GATES is a more reasonable modeling of the neural architectures, and can encode architectures from both the \"operation on node\" and \"operation on edge\" cell search spaces consistently. Experimental results on various search spaces confirm GATES's effectiveness in improving the performance predictor. Furthermore, equipped with the improved performance predictor, the sample efficiency of the predictor-based neural architecture search (NAS) flow is boosted. 
Codes are available at https://github.com/walkerning/aw_nas.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["CIFAR-10 Image Classification", "ImageNet"], "metric": ["Percentage error", "Top-1 Error Rate", "Accuracy", "Params"], "title": "A Generic Graph-based Neural Architecture Encoding Scheme for Predictor-based NAS"} {"abstract": "We propose a new shared task of semantic retrieval from legal texts, in which a so-called contract discovery is to be performed, where legal clauses are extracted from documents, given a few examples of similar clauses from other legal acts. The task differs substantially from conventional NLI and shared tasks on legal information extraction (e.g., one has to identify text span instead of a single document, page, or paragraph). The specification of the proposed task is followed by an evaluation of multiple solutions within the unified framework proposed for this branch of methods. It is shown that state-of-the-art pretrained encoders fail to provide satisfactory results on the task proposed. In contrast, Language Model-based solutions perform better, especially when unsupervised fine-tuning is applied. Besides the ablation studies, we addressed questions regarding detection accuracy for relevant text fragments depending on the number of examples available. In addition to the dataset and reference results, LMs specialized in the legal domain were made publicly available.", "field": ["Non-Parametric Classification"], "task": ["Few-Shot Learning", "Language Modelling", "Semantic Retrieval", "Semantic Similarity"], "method": ["k-Nearest Neighbors", "k-NN"], "dataset": ["Contract Discovery"], "metric": ["Soft-F1"], "title": "Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines"} {"abstract": "Finding out the computational redundant part of a trained Deep Neural Network (DNN) is the key question that pruning algorithms target on. Many algorithms try to predict model performance of the pruned sub-nets by introducing various evaluation methods. But they are either inaccurate or very complicated for general application. In this work, we present a pruning method called EagleEye, in which a simple yet efficient evaluation component based on adaptive batch normalization is applied to unveil a strong correlation between different pruned DNN structures and their final settled accuracy. This strong correlation allows us to fast spot the pruned candidates with highest potential accuracy without actually fine-tuning them. This module is also general to plug-in and improve some existing pruning algorithms. EagleEye achieves better pruning performance than all of the studied pruning algorithms in our experiments. Concretely, to prune MobileNet V1 and ResNet-50, EagleEye outperforms all compared methods by up to 3.8%. Even in the more challenging experiments of pruning the compact model of MobileNet V1, EagleEye achieves the highest accuracy of 70.9% with an overall 50% operations (FLOPs) pruned. All accuracy results are Top-1 ImageNet classification accuracy. 
Source code and models are accessible to open-source community https://github.com/anonymous47823493/EagleEye .", "field": ["Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections"], "task": ["Network Pruning"], "method": ["Depthwise Convolution", "MobileNetV1", "Average Pooling", "Softmax", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Rectified Linear Units", "Residual Connection", "Depthwise Separable Convolution", "Pointwise Convolution", "Global Average Pooling", "Dense Connections"], "dataset": ["ImageNet"], "metric": ["Accuracy"], "title": "EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning"} {"abstract": "Deep complex U-Net structure and convolutional recurrent network (CRN) structure achieve state-of-the-art performance for monaural speech enhancement. Both deep complex U-Net and CRN are encoder and decoder structures with skip connections, which heavily rely on the representation power of the complex-valued convolutional layers. In this paper, we propose a complex convolutional block attention module (CCBAM) to boost the representation power of the complex-valued convolutional layers by constructing more informative features. The CCBAM is a lightweight and general module which can be easily integrated into any complex-valued convolutional layers. We integrate CCBAM with the deep complex U-Net and CRN to enhance their performance for speech enhancement. We further propose a mixed loss function to jointly optimize the complex models in both time-frequency (TF) domain and time domain. By integrating CCBAM and the mixed loss, we form a new end-to-end (E2E) complex speech enhancement framework. Ablation experiments and objective evaluations show the superior performance of the proposed approaches.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Speech Enhancement"], "method": ["U-Net", "Average Pooling", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["DNS Challenge", "WSJ0 + DEMAND + RNNoise"], "metric": ["PESQ-NB"], "title": "Monaural Speech Enhancement with Complex Convolutional Block Attention Module and Joint Time Frequency Losses"} {"abstract": "Multimodal Machine Translation (MMT) aims to introduce information from other modality, generally static images, to improve the translation quality. Previous works propose various incorporation methods, but most of them do not consider the relative importance of multiple modalities. Equally treating all modalities may encode too much useless information from less important modalities. In this paper, we introduce the multimodal self-attention in Transformer to solve the issues above in MMT. The proposed method learns the representation of images based on the text, which avoids encoding irrelevant information in images. 
Experiments and visualization analysis demonstrate that our model benefits from visual information and substantially outperforms previous works and competitive baselines in terms of various metrics.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Word Embeddings", "Subword Segmentation", "Convolutions", "Feedforward Networks", "Pooling Operations", "Transformers", "Attention Mechanisms", "Skip Connections", "Skip Connection Blocks"], "task": ["Machine Translation", "Multimodal Machine Translation"], "method": ["Average Pooling", "Adam", "1x1 Convolution", "GloVe Embeddings", "Scaled Dot-Product Attention", "ResNet", "Transformer", "Convolution", "ReLU", "Residual Connection", "GloVe", "Dense Connections", "Layer Normalization", "Batch Normalization", "Residual Network", "Label Smoothing", "Kaiming Initialization", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Multi30K"], "metric": ["Meteor (EN-DE)", "BLEU (EN-DE)"], "title": "Multimodal Transformer for Multimodal Machine Translation"} {"abstract": "Over the long history of machine learning, which dates back several decades, recurrent neural networks (RNNs) have been used mainly for sequential data and time series and generally with 1D information. Even in some rare studies on 2D images, these networks are used merely to learn and generate data sequentially rather than for image recognition tasks. In this study, we propose integrating an RNN as an additional layer when designing image recognition models. We also develop end-to-end multimodel ensembles that produce expert predictions using several models. In addition, we extend the training strategy so that our model performs comparably to leading models and can even match the state-of-the-art models on several challenging datasets (e.g., SVHN (0.99), Cifar-100 (0.9027) and Cifar-10 (0.9852)). Moreover, our model sets a new record on the Surrey dataset (0.949). The source code of the methods provided in this article is available at https://github.com/leonlha/e2e-3m and http://nguyenhuuphong.me.", "field": ["Output Functions"], "task": ["Image Classification", "Time Series"], "method": ["Softmax"], "dataset": ["Surrey ASL", "Fashion-MNIST", "iCassava'19", "CIFAR-10"], "metric": ["Percentage error", "Accuracy (%)", "Percentage correct", "Top-1 Accuracy"], "title": "Rethinking Recurrent Neural Networks and Other Improvements for Image Classification"} {"abstract": "Neural networks for image recognition have evolved through extensive manual\ndesign from simple chain-like models to structures with multiple wiring paths.\nThe success of ResNets and DenseNets is due in large part to their innovative\nwiring plans. Now, neural architecture search (NAS) studies are exploring the\njoint optimization of wiring and operation types, however, the space of\npossible wirings is constrained and still driven by manual design despite being\nsearched. In this paper, we explore a more diverse set of connectivity patterns\nthrough the lens of randomly wired neural networks. To do this, we first define\nthe concept of a stochastic network generator that encapsulates the entire\nnetwork generation process. Encapsulation provides a unified view of NAS and\nrandomly wired networks. 
Then, we use three classical random graph models to\ngenerate randomly wired graphs for networks. The results are surprising:\nseveral variants of these random generators yield network instances that have\ncompetitive accuracy on the ImageNet benchmark. These results suggest that new\nefforts focusing on designing better network generators may lead to new\nbreakthroughs by exploring less constrained search spaces with more room for\nnovel design.", "field": ["Image Data Augmentation", "Output Functions", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Weight Decay", "SGD with Momentum", "Average Pooling", "Cosine Annealing", "Softmax", "Random Horizontal Flip", "Random Resized Crop", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "RandWire", "Label Smoothing", "Dense Connections", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Exploring Randomly Wired Neural Networks for Image Recognition"} {"abstract": "CRF has been used as a powerful model for statistical sequence labeling. For neural sequence labeling, however, BiLSTM-CRF does not always lead to better results compared with BiLSTM-softmax local classification. This can be because the simple Markov label transition model of CRF does not give much information gain over strong neural encoding. For better representing label sequences, we investigate a hierarchically-refined label attention network, which explicitly leverages label embeddings and captures potential long-term label dependency by giving each word incrementally refined label distributions with hierarchical attention. Results on POS tagging, NER and CCG supertagging show that the proposed model not only improves the overall tagging accuracy with similar number of parameters, but also significantly speeds up the training and testing compared to BiLSTM-CRF.", "field": ["Structured Prediction"], "task": ["CCG Supertagging", "Named Entity Recognition", "Part-Of-Speech Tagging"], "method": ["Conditional Random Field", "CRF"], "dataset": ["CCGBank", "Ontonotes v5 (English)", "Penn Treebank", "UD"], "metric": ["Avg accuracy", "F1", "Accuracy"], "title": "Hierarchically-Refined Label Attention Network for Sequence Labeling"} {"abstract": "Supervised deep learning has gained significant attention for speech enhancement recently. The state-of-the-art deep learning methods perform the task by learning a ratio/binary mask that is applied to the mixture in the time-frequency domain to produce the clean speech. Despite the great performance in the single-channel setting, these frameworks lag in performance in the multichannel setting as the majority of these methods a) fail to exploit the available spatial information fully, and b) still treat the deep architecture as a black box which may not be well-suited for multichannel audio processing. This paper addresses these drawbacks, a) by utilizing complex ratio masking instead of masking on the magnitude of the spectrogram, and more importantly, b) by introducing a channel-attention mechanism inside the deep architecture to mimic beamforming. 
We propose Channel-Attention Dense U-Net, in which we apply the channel-attention unit recursively on feature maps at every layer of the network, enabling the network to perform non-linear beamforming. We demonstrate the superior performance of the network against the state-of-the-art approaches on the CHiME-3 dataset.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Speech Enhancement"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["CHiME-3"], "metric": ["\u0394PESQ", "SDR", "PESQ"], "title": "Channel-Attention Dense U-Net for Multichannel Speech Enhancement"} {"abstract": "Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.", "field": ["Convolutions"], "task": ["Motion Estimation"], "method": ["Convolution"], "dataset": ["ShapeNet-Part", "S3DIS Area5"], "metric": ["Instance Average IoU", "mAcc", "Class Average IoU", "mIoU"], "title": "Deep Parametric Continuous Convolutional Neural Networks"} {"abstract": "In the time-series analysis, the time series motifs and the order patterns in time series can reveal general temporal patterns and dynamic features. Triadic Motif Field (TMF) is a simple and effective time-series image encoding method based on triadic time series motifs. Electrocardiography (ECG) signals are time-series data widely used to diagnose various cardiac anomalies. The TMF images contain the features characterizing the normal and Atrial Fibrillation (AF) ECG signals. Considering the quasi-periodic characteristics of ECG signals, the dynamic features can be extracted from the TMF images with the transfer learning pre-trained convolutional neural network (CNN) models. With the extracted features, the simple classifiers, such as the Multi-Layer Perceptron (MLP), the logistic regression, and the random forest, can be applied for accurate anomaly detection. With the test dataset of the PhysioNet Challenge 2017 database, the TMF classification model with the VGG16 transfer learning model and MLP classifier demonstrates the best performance with the 95.50% ROC-AUC and 88.43% F1 score in the AF classification. Besides, the TMF classification model can identify AF patients in the test dataset with high precision. The feature vectors extracted from the TMF images show clear patient-wise clustering with the t-distributed Stochastic Neighbor Embedding technique. Above all, the TMF classification model has very good clinical interpretability. 
The patterns revealed by symmetrized Gradient-weighted Class Activation Mapping have a clear clinical interpretation at the beat and rhythm levels.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Models"], "task": ["Anomaly Detection", "Atrial Fibrillation Detection", "ECG Classification", "Electrocardiography (ECG)", "Interpretable Machine Learning", "Regression", "Time Series", "Time Series Analysis", "Time Series Classification", "Transfer Learning"], "method": ["Average Pooling", "VGG", "Softmax", "Convolution", "ReLU", "Interpretability", "Dropout", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PhysioNet Challenge 2017"], "metric": ["ROC-AUC", "PR-AUC", "F1"], "title": "Anomaly Detection in Time Series with Triadic Motif Fields and Application in Atrial Fibrillation ECG Classification"} {"abstract": "We consider the problem of learning to repair programs from diagnostic feedback (e.g., compiler error messages). Program repair is challenging for two reasons: First, it requires reasoning and tracking symbols across source code and diagnostic feedback. Second, labeled datasets available for program repair are relatively small. In this work, we propose novel solutions to these two challenges. First, we introduce a program-feedback graph, which connects symbols relevant to program repair in source code and diagnostic feedback, and then apply a graph neural network on top to model the reasoning process. Second, we present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online to create a large amount of extra program repair examples, which we use to pre-train our models. We evaluate our proposed approach on two applications: correcting introductory programming assignments (DeepFix dataset) and correcting the outputs of program synthesis (SPoC dataset). Our final system, DrRepair, significantly outperforms prior work, achieving 68.2% full repair rate on DeepFix (+22.9% over the prior best), and 48.4% synthesis success rate on SPoC (+3.7% over the prior best).", "field": ["Transformers", "Attention Modules", "Graph Models"], "task": ["Code Generation", "Graph Learning", "Program Repair", "Program Synthesis", "Self-Supervised Learning"], "method": ["Multi-Head Attention", "Graph Attention Network", "GAT", "Transformer"], "dataset": ["SPoC TestP", "DeepFix", "SPoC TestW"], "metric": ["Success rate @budget 100", "Average Success Rate"], "title": "Graph-based, Self-Supervised Program Repair from Diagnostic Feedback"} {"abstract": "Semantic image segmentation is the process of labeling each pixel of an image with its corresponding class. An encoder-decoder based approach, like U-Net and its variants, is a popular strategy for solving medical image segmentation tasks. To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which is a combination of two U-Net architectures stacked on top of each other. The first U-Net uses a pre-trained VGG-19 as the encoder, which has already learned features from ImageNet and can be transferred to another task easily. To capture more semantic information efficiently, we added another U-Net at the bottom. We also adopt Atrous Spatial Pyramid Pooling (ASPP) to capture contextual information within the network. 
We have evaluated DoubleU-Net using four medical segmentation datasets, covering various imaging modalities such as colonoscopy, dermoscopy, and microscopy. Experiments on the MICCAI 2015 segmentation challenge, the CVC-ClinicDB, the 2018 Data Science Bowl challenge, and the Lesion boundary segmentation datasets demonstrate that the DoubleU-Net outperforms U-Net and the baseline models. Moreover, DoubleU-Net produces more accurate segmentation masks, especially in the case of the CVC-ClinicDB and MICCAI 2015 segmentation challenge datasets, which have challenging images such as smaller and flat polyps. These results show the improvement over the existing U-Net model. The encouraging results, produced on various medical image segmentation datasets, show that DoubleU-Net can be used as a strong baseline for both medical image segmentation and cross-dataset evaluation testing to measure the generalizability of Deep Learning (DL) models.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Lesion Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["2018 Data Science Bowl", "2015 MICCAI Polyp Detection", "ISIC 2018", "Kvasir-Instrument", "CVC-ClinicDB"], "metric": ["mean Dice", "mIoU", "Dice Score", "Dice", "DSC"], "title": "DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation"} {"abstract": "Recent state-of-the-art performance on human-body pose estimation has been\nachieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet\narchitectures include pooling and sub-sampling layers which reduce\ncomputational requirements, introduce invariance and prevent over-training.\nThese benefits of pooling come at the cost of reduced localization accuracy. We\nintroduce a novel architecture which includes an efficient `position\nrefinement' model that is trained to estimate the joint offset location within\na small region of the image. This refinement model is jointly trained in\ncascade with a state-of-the-art ConvNet model to achieve improved accuracy in\nhuman joint location estimation. We show that the variance of our detector\napproaches the variance of human annotations on the FLIC dataset and\noutperforms all existing approaches on the MPII-human-pose dataset.", "field": ["Regularization"], "task": ["Object Localization", "Pose Estimation"], "method": ["SpatialDropout"], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Efficient Object Localization Using Convolutional Networks"} {"abstract": "The horizon line is an important geometric feature for many image processing and scene understanding tasks in computer vision. For instance, in navigation of autonomous vehicles or driver assistance, it can be used to improve 3D reconstruction as well as for semantic interpretation of dynamic environments. While both algorithms and datasets exist for single images, the problem of horizon line estimation from video sequences has not gained attention. In this paper, we show how convolutional neural networks are able to utilise the temporal consistency imposed by video sequences in order to increase the accuracy and reduce the variance of horizon line estimates. A novel CNN architecture with an improved residual convolutional LSTM is presented for temporally consistent horizon line estimation. 
We propose an adaptive loss function that ensures stable training as well as accurate results. Furthermore, we introduce an extension of the KITTI dataset which contains precise horizon line labels for 43699 images across 72 video sequences. A comprehensive evaluation shows that the proposed approach consistently achieves superior performance compared with existing methods.", "field": ["Recurrent Neural Networks", "Activation Functions", "Graph Embeddings"], "task": ["3D Reconstruction", "Autonomous Vehicles", "Horizon Line Estimation", "Scene Understanding"], "method": ["LINE", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Large-scale Information Network Embedding", "Sigmoid Activation"], "dataset": ["KITTI Horizon"], "metric": ["ATV", "MSE", "AUC"], "title": "Temporally Consistent Horizon Lines"} {"abstract": "The goal of this paper is to generate a visually appealing video that responds to music with a neural network so that each frame of the video reflects the musical characteristics of the corresponding audio clip. To achieve the goal, we propose a neural music visualizer directly mapping deep music embeddings to style embeddings of StyleGAN, named Tr\\\"aumerAI, which consists of a music auto-tagging model using short-chunk CNN and StyleGAN2 pre-trained on the WikiArt dataset. Rather than establishing an objective metric between musical and visual semantics, we manually labeled the pairs in a subjective manner. An annotator listened to 100 music clips, each 10 seconds long, and selected an image that suits the music among the 200 StyleGAN-generated examples. Based on the collected data, we trained a simple transfer function that converts an audio embedding to a style embedding. The generated examples show that the mapping between audio and video achieves a certain level of intra-segment similarity and inter-segment dissimilarity.", "field": ["Regularization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Music Auto-Tagging"], "method": ["Feedforward Network", "Convolution", "Weight Demodulation", "Adaptive Instance Normalization", "Leaky ReLU", "R1 Regularization", "StyleGAN", "Path Length Regularization", "Dense Connections", "StyleGAN2"], "dataset": ["TimeTravel"], "metric": ["0..5sec"], "title": "Tr\u00e4umerAI: Dreaming Music with StyleGAN"} {"abstract": "In this work, we propose to employ information-geometric tools to optimize a graph neural network architecture such as graph convolutional networks. More specifically, we develop optimization algorithms for graph-based semi-supervised learning by employing the natural gradient information in the optimization process. This allows us to efficiently exploit the geometry of the underlying statistical model or parameter space for optimization and inference. To the best of our knowledge, this is the first work that has utilized the natural gradient for the optimization of graph neural networks that can be extended to other semi-supervised problems. 
Efficient computational algorithms are developed and extensive numerical studies are conducted to demonstrate the superior performance of our algorithms over existing algorithms such as ADAM and SGD.", "field": ["Stochastic Optimization"], "task": ["Node Classification"], "method": ["Stochastic Gradient Descent", "Adam", "SGD with Momentum", "SGD"], "dataset": ["Cora", "Citeseer", "Cora with Public Split: fixed 20 nodes per class", "Pubmed", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Optimization of Graph Neural Networks with Natural Gradient Descent"} {"abstract": "We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models. Unlike RNN and LSTM, which contain inductive bias by loading the input tokens sequentially, non-recurrent models are less sensitive to position. The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivalent; this problem justifies why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input. However, this solution has clear limitations: the sinusoidal encoding is not flexible enough as it is manually designed and does not contain any learnable parameters, whereas the position embedding restricts the maximum length of input sequences. It is thus desirable to design a new position layer that contains learnable parameters to adjust to different datasets and different architectures. At the same time, we would also like the encodings to extrapolate in accordance with the variable length of inputs. In our proposed solution, we borrow from the recent Neural ODE approach, which may be viewed as a versatile continuous version of a ResNet. This model is capable of modeling many kinds of dynamical systems. We model the evolution of encoded results along position index by such a dynamical system, thereby overcoming the above limitations of existing methods. We evaluate our new position layers on a variety of neural machine translation and language understanding tasks; the experimental results show consistent improvements over the baselines.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Linguistic Acceptability", "Machine Translation", "Semantic Textual Similarity", "Sentiment Analysis"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Convolution", "1x1 Convolution", "ReLU", "Rectified Linear Units", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["SST-2 Binary classification", "WMT2014 English-German", "MRPC", "WMT2014 English-French", "CoLA"], "metric": ["BLEU score", "Accuracy"], "title": "Learning to Encode Position for Transformer with Continuous Dynamical Model"} {"abstract": "Recent deep learning based salient object detection methods achieve gratifying performance built upon Fully Convolutional Neural Networks (FCNs). However, most of them have suffered from the boundary challenge. 
The state-of-the-art methods employ feature aggregation techniques and can precisely locate the salient object, but they often fail to segment out the entire object with fine boundaries, especially for objects with raised narrow stripes. So there is still large room for improvement over FCN-based models. In this paper, we design the Attentive Feedback Modules (AFMs) to better explore the structure of objects. A Boundary-Enhanced Loss (BEL) is further employed for learning exquisite boundaries. Our proposed deep model produces satisfying results on the object boundaries and achieves state-of-the-art performance on five widely tested salient object detection benchmarks. The network is fully convolutional, running at a speed of 26 FPS, and does not need any post-processing.", "field": ["Convolutions", "Pooling Operations", "Semantic Segmentation Models"], "task": ["Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": ["Fully Convolutional Network", "FCN", "Max Pooling", "Convolution"], "dataset": ["SOC"], "metric": ["Average MAE", "mean E-Measure", "S-Measure"], "title": "Attentive Feedback Network for Boundary-Aware Salient Object Detection"} {"abstract": "In sequence to sequence learning, the self-attention mechanism proves to be highly effective, and achieves significant improvements in many tasks. However, the self-attention mechanism is not without its own flaws. Although self-attention can model extremely long dependencies, the attention in deep layers tends to overconcentrate on a single token, leading to insufficient use of local information and difficulty in representing long sequences. In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures. To this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple. MUSE-simple contains the basic idea of parallel multi-scale sequence representation learning, and it encodes the sequence in parallel, in terms of different scales, with the help of self-attention and pointwise transformation. MUSE builds on MUSE-simple and explores combining convolution and self-attention for learning sequence representations from a wider range of scales. We focus on machine translation, and the proposed approach achieves substantial performance improvements over Transformer, especially on long sequences. More importantly, we find that although conceptually simple, its success in practice requires intricate considerations, and the multi-scale attention must build on a unified semantic space. Under the common setting, the proposed model achieves substantial performance gains and outperforms all previous models on three main machine translation tasks. In addition, MUSE has potential for accelerating inference due to its parallelism. 
Code will be available at https://github.com/lancopku/MUSE", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Representation Learning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Convolution", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["WMT2014 English-French", "WMT2014 English-German", "IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning"} {"abstract": "Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fr\\'echet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Attention Modules", "Regularization", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Pooling Operations", "Image Feature Extractors", "Stochastic Optimization", "Recurrent Neural Networks", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Video Generation", "Video Prediction"], "method": ["Gated Recurrent Unit", "Truncation Trick", "TTUR", "DVD-GAN", "Off-Diagonal Orthogonal Regularization", "Average Pooling", "Spectral Normalization", "Self-Attention GAN", "Adam", "Orthogonal Regularization", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "CGRU", "3D Convolution", "SAGAN Self-Attention Module", "Convolution", "DVD-GAN GBlock", "ReLU", "Residual Connection", "Linear Layer", "Leaky ReLU", "Two Time-scale Update Rule", "Dense Connections", "Convolutional GRU", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "GRU", "Non-Local Block", "Sigmoid Activation", "DVD-GAN DBlock", "Softmax", "BigGAN", "Residual Block", "Rectified Linear Units"], "dataset": ["Kinetics-600 48 frames, 64x64", "Kinetics-600 12 frames, 128x128", "Kinetics-600 12 frames, 64x64"], "metric": ["Inception Score", "FID"], "title": "Adversarial Video Generation on Complex Datasets"} {"abstract": "Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. 
ConvNet architecture optimality depends on factors such as input resolution and target devices. However, existing approaches are too expensive for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. FBNets, a family of models discovered by DNAS surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, 2.4x smaller and 1.5x faster than MobileNetV2-1.3 with similar accuracy. Despite higher accuracy and lower latency than MnasNet, we estimate FBNet-B's search cost is 420x smaller than MnasNet's, at only 216 GPU-hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy and 2.9 ms latency (345 frames per second) on a Samsung S8. Over a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4x speedup on an iPhone X.", "field": ["Neural Architecture Search", "Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Distributions", "Skip Connections", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification", "Neural Architecture Search"], "method": ["Depthwise Convolution", "Weight Decay", "FBNet", "Cosine Annealing", "Average Pooling", "Adam", "1x1 Convolution", "Gumbel Softmax", "Differentiable Neural Architecture Search", "MobileNetV2", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "DNAS", "Grouped Convolution", "Random Resized Crop", "Batch Normalization", "Pointwise Convolution", "Kaiming Initialization", "SGD with Momentum", "Inverted Residual Block", "FBNet Block", "Dropout", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy"], "title": "FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search"} {"abstract": "We investigate conditional adversarial networks as a general-purpose solution\nto image-to-image translation problems. These networks not only learn the\nmapping from input image to output image, but also learn a loss function to\ntrain this mapping. This makes it possible to apply the same generic approach\nto problems that traditionally would require very different loss formulations.\nWe demonstrate that this approach is effective at synthesizing photos from\nlabel maps, reconstructing objects from edge maps, and colorizing images, among\nother tasks. Indeed, since the release of the pix2pix software associated with\nthis paper, a large number of internet users (many of them artists) have posted\ntheir own experiments with our system, further demonstrating its wide\napplicability and ease of adoption without the need for parameter tweaking. 
As\na community, we no longer hand-engineer our mapping functions, and this work\nsuggests we can achieve reasonable results without hand-engineering our loss\nfunctions either.", "field": ["Discriminators", "Image Data Augmentation", "Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Generative Models", "Skip Connections"], "task": ["Cross-View Image-to-Image Translation", "Fundus to Angiography Generation", "Image-to-Image Translation", "Nuclear Segmentation"], "method": ["Color Jitter", "PatchGAN", "Adam", "Random Resized Crop", "Concatenated Skip Connection", "Convolution", "Batch Normalization", "ReLU", "Pix2Pix", "Dropout", "Leaky ReLU", "ColorJitter", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["Edge-to-Shoes", "Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients", "Cityscapes Photo-to-Labels", "cvusa", "Edge-to-Handbags", "Dayton (256\u00d7256) - ground-to-aerial", "Dayton (64x64) - ground-to-aerial", "Dayton (64\u00d764) - aerial-to-ground", "Dayton (256\u00d7256) - aerial-to-ground", "Ego2Top", "Aerial-to-Map", "Cityscapes Labels-to-Photo", "Cell17"], "metric": ["Hausdorff", "FID", "Per-pixel Accuracy", "Class IOU", "Dice", "LPIPS", "Per-class Accuracy", "F1-score", "SSIM"], "title": "Image-to-Image Translation with Conditional Adversarial Networks"} {"abstract": "Inductive transfer learning has taken the entire NLP field by storm, with models such as BERT and BART setting new state of the art on countless NLU tasks. However, most of the available models and research have been conducted for English. In this work, we introduce BARThez, the first large-scale pretrained seq2seq model for French. Being based on BART, BARThez is particularly well-suited for generative tasks. We evaluate BARThez on five discriminative tasks from the FLUE benchmark and two generative tasks from a novel summarization dataset, OrangeSum, that we created for this research. We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT. We also continue the pretraining of a multilingual BART on BARThez' corpus, and show our resulting model, mBARThez, to significantly boost BARThez' generative performance. Code, data and models are publicly available.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Regularization", "Recurrent Neural Networks", "Activation Functions", "Learning Rate Schedules", "Subword Segmentation", "Sequence To Sequence Models", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Natural Language Understanding", "Self-Supervised Learning", "Text Summarization", "Transfer Learning"], "method": ["Weight Decay", "Long Short-Term Memory", "Adam", "Tanh Activation", "BERT", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Residual Connection", "Seq2Seq", "Dense Connections", "Layer Normalization", "Sequence to Sequence", "GELU", "Sigmoid Activation", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BART"], "dataset": ["OrangeSum"], "metric": ["ROUGE-1"], "title": "BARThez: a Skilled Pretrained French Sequence-to-Sequence Model"} {"abstract": "We introduce Trankit, a light-weight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). 
It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit. A demo website for our toolkit is also available at: http://nlp.uoregon.edu/trankit. Finally, we create a demo video for Trankit at: https://youtu.be/q0KGP3zGjGc.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Dependency Parsing", "Language Modelling", "Lemmatization", "Morphological Tagging", "Named Entity Recognition", "Part-Of-Speech Tagging", "Sentence segmentation", "Sequential sentence segmentation", "Tokenization"], "method": ["Adapter", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["UD2.5 test"], "metric": ["Macro-averaged F1"], "title": "Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing"} {"abstract": "We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. 
The primary contribution of our paper is improved baselines that can provide the foundation for future work.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Document Classification", "Sentiment Analysis", "Text Classification"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["AAPD", "IMDb", "Reuters-21578", "Yelp-14"], "metric": ["Accuracy (2 classes)", "Accuracy (10 classes)", "F1", "Accuracy"], "title": "DocBERT: BERT for Document Classification"} {"abstract": "Recent research on image denoising has progressed with the development of deep learning architectures, especially convolutional neural networks. However, real-world image denoising is still very challenging because it is not possible to obtain ideal pairs of ground-truth images and real-world noisy images. Owing to the recent release of benchmark datasets, the interest of the image denoising community is now moving toward the real-world denoising problem. In this paper, we propose a grouped residual dense network (GRDN), which is an extended and generalized architecture of the state-of-the-art residual dense network (RDN). The core part of RDN is defined as grouped residual dense block (GRDB) and used as a building module of GRDN. We experimentally show that the image denoising performance can be significantly improved by cascading GRDBs. In addition to the network architecture design, we also develop a new generative adversarial network-based real-world noise modeling method. We demonstrate the superiority of the proposed methods by achieving the highest score in terms of both the peak signal-to-noise ratio and the structural similarity in the NTIRE2019 Real Image Denoising Challenge - Track 2:sRGB.", "field": ["Activation Functions", "Normalization", "Convolutions", "Skip Connections", "Image Model Blocks"], "task": ["Denoising", "Image Denoising"], "method": ["Dense Block", "Concatenated Skip Connection", "Batch Normalization", "Convolution", "ReLU", "Rectified Linear Units"], "dataset": ["NTIRE 2019 Real Image Denoising Challenge (sRGB)"], "metric": ["SSIM", "PSNR"], "title": "GRDN:Grouped Residual Dense Network for Real Image Denoising and GAN-based Real-world Noise Modeling"} {"abstract": "Recently, very deep convolutional neural networks (CNNs) have shown\noutstanding performance in object recognition and have also been the first\nchoice for dense classification problems such as semantic segmentation.\nHowever, repeated subsampling operations like pooling or convolution striding\nin deep CNNs lead to a significant decrease in the initial image resolution.\nHere, we present RefineNet, a generic multi-path refinement network that\nexplicitly exploits all the information available along the down-sampling\nprocess to enable high-resolution prediction using long-range residual\nconnections. In this way, the deeper layers that capture high-level semantic\nfeatures can be directly refined using fine-grained features from earlier\nconvolutions. 
The individual components of RefineNet employ residual\nconnections following the identity mapping mindset, which allows for effective\nend-to-end training. Further, we introduce chained residual pooling, which\ncaptures rich background context in an efficient manner. We carry out\ncomprehensive experiments and set new state-of-the-art results on seven public\ndatasets. In particular, we achieve an intersection-over-union score of 83.4 on\nthe challenging PASCAL VOC 2012 dataset, which is the best reported result to\ndate.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["3D Absolute Human Pose Estimation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012 test", "ADE20K", "ADE20K val", "COCO-Stuff test", "NYU Depth v2", "PASCAL Context", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)", "Validation mIoU", "mIoU"], "title": "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation"} {"abstract": "There has been significant progress on pose estimation and increasing\ninterest in pose tracking in recent years. At the same time, the overall\nalgorithm and system complexity increases as well, making the algorithm\nanalysis and comparison more difficult. This work provides simple and effective\nbaseline methods. They are helpful for inspiring and evaluating new ideas for\nthe field. State-of-the-art results are achieved on challenging benchmarks. The\ncode will be available at https://github.com/leoxiaobin/pose.pytorch.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Keypoint Detection", "Pose Estimation", "Pose Tracking"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO", "PoseTrack2018", "COCO test-challenge", "PoseTrack2017", "COCO test-dev"], "metric": ["ARM", "MOTA", "Validation AP", "APM", "AR75", "AR50", "ARL", "AP75", "AP", "APL", "AP50", "AR"], "title": "Simple Baselines for Human Pose Estimation and Tracking"} {"abstract": "In this work, we combine 3D convolution with late temporal modeling for action recognition. For this aim, we replace the conventional Temporal Global Average Pooling (TGAP) layer at the end of the 3D convolutional architecture with the Bidirectional Encoder Representations from Transformers (BERT) layer in order to better utilize the temporal information with BERT's attention mechanism. We show that this replacement improves the performances of many popular 3D convolution architectures for action recognition, including ResNeXt, I3D, SlowFast and R(2+1)D. Moreover, we provide state-of-the-art results on both HMDB51 and UCF101 datasets with 85.10% and 98.69% top-1 accuracy, respectively. 
The code is publicly available.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition"], "method": ["R(2+1)D", "ResNeXt Block", "Average Pooling", "Grouped Convolution", "Global Average Pooling", "ResNeXt", "3D Convolution", "Convolution", "Batch Normalization", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Residual Connection", "Kaiming Initialization", "(2+1)D Convolution", "Dense Connections"], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Late Temporal Modeling in 3D CNN Architectures with BERT for Action Recognition"} {"abstract": "Recent work has shown that convolutional networks can be substantially\ndeeper, more accurate, and efficient to train if they contain shorter\nconnections between layers close to the input and those close to the output. In\nthis paper, we embrace this observation and introduce the Dense Convolutional\nNetwork (DenseNet), which connects each layer to every other layer in a\nfeed-forward fashion. Whereas traditional convolutional networks with L layers\nhave L connections - one between each layer and its subsequent layer - our\nnetwork has L(L+1)/2 direct connections. For each layer, the feature-maps of\nall preceding layers are used as inputs, and its own feature-maps are used as\ninputs into all subsequent layers. DenseNets have several compelling\nadvantages: they alleviate the vanishing-gradient problem, strengthen feature\npropagation, encourage feature reuse, and substantially reduce the number of\nparameters. We evaluate our proposed architecture on four highly competitive\nobject recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet).\nDenseNets obtain significant improvements over the state-of-the-art on most of\nthem, whilst requiring less computation to achieve high performance. Code and\npre-trained models are available at https://github.com/liuzhuang13/DenseNet .", "field": ["Initialization", "Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Breast Tumour Classification", "Crowd Counting", "Image Classification", "Object Recognition", "Person Re-Identification"], "method": ["Weight Decay", "Dense Block", "Average Pooling", "Softmax", "Concatenated Skip Connection", "Step Decay", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Dropout", "DenseNet", "Nesterov Accelerated Gradient", "Kaiming Initialization", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PCam", "MSMT17", "UCF-QNRF", "CIFAR-100", "CIFAR-10", "SVHN", "ImageNet"], "metric": ["mAP", "Top 1 Accuracy", "Percentage error", "Percentage correct", "MAE", "AUC", "Top 5 Accuracy"], "title": "Densely Connected Convolutional Networks"} {"abstract": "Generative adversarial networks (GANs) are a powerful approach to unsupervised learning. They have achieved state-of-the-art performance in the image domain. However, GANs are limited in two ways. 
They often learn distributions with low support---a phenomenon known as mode collapse---and they do not guarantee the existence of a probability density, which makes evaluating generalization using predictive log-likelihood impossible. In this paper, we develop the prescribed GAN (PresGAN) to address these shortcomings. PresGANs add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the entropy regularization term; PresGANs sidestep this intractability using unbiased stochastic estimates. We evaluate PresGANs on several datasets and found they mitigate mode collapse and generate samples with high perceptual quality. We further found that PresGANs reduce the gap in performance in terms of predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).", "field": ["Regularization", "Activation Functions", "Normalization", "Convolutions", "Generative Models"], "task": ["Image Generation"], "method": ["Prescribed Generative Adversarial Network", "Generative Adversarial Network", "Entropy Regularization", "GAN", "Batch Normalization", "PresGAN", "Convolution", "ReLU", "DCGAN", "Deep Convolutional GAN", "Leaky ReLU", "Rectified Linear Units"], "dataset": ["MNIST", "Stacked MNIST", "CelebA 128 x 128", "CIFAR-10"], "metric": ["FID"], "title": "Prescribed Generative Adversarial Networks"} {"abstract": "Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.", "field": ["Regularization"], "task": ["Language Modelling", "Machine Translation", "Text Generation"], "method": ["Dropout"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2", "WMT2014 English-German"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params", "BLEU score"], "title": "Deep Residual Output Layers for Neural Language Generation"} {"abstract": "Changes in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often require re-thinking the choice of hyperparameters (e.g., learning rate, warmup schedule, and momentum coefficients) to maintain stability of the optimizer. 
This optimizer instability is often the result of poor parameter initialization, and can be avoided by architecture-specific initialization schemes. In this paper, we present GradInit, an automated and architecture agnostic method for initializing neural networks. GradInit is based on a simple heuristic; the variance of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each parameter block, and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and test performance of many convolutional architectures, both with or without skip connections, and even without normalization layers. It also enables training the original Post-LN Transformer for machine translation without learning rate warmup under a wide range of learning rates and momentum coefficients. Code is available at https://github.com/zhuchen03/gradinit.", "field": ["Regularization", "Output Functions", "Attention Modules", "Stochastic Optimization", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Image Classification", "Language Modelling", "Machine Translation"], "method": ["Stochastic Gradient Descent", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "SGD", "Residual Connection", "Label Smoothing", "Scaled Dot-Product Attention", "Dropout", "Dense Connections"], "dataset": ["CIFAR-10"], "metric": ["PARAMS", "Percentage correct"], "title": "GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training"} {"abstract": "Dynamic Scene deblurring is a challenging low-level vision task where spatially variant blur is caused by many factors, e.g., camera shake and object motion. Recent study has made significant progress. Compared with the parameter independence scheme [19] and parameter sharing scheme [33], we develop the general principle for constraining the deblurring network structure by proposing the generic and effective selective sharing scheme. Inside the subnetwork of each scale, we propose a nested skip connection structure for the nonlinear transformation modules to replace stacked convolution layers or residual blocks. Besides, we build a new large dataset of blurred/sharp image pairs towards better restoration quality. Comprehensive experimental results show that our parameter selective sharing scheme, nested skip connection structure, and the new dataset are all significant to set a new state-of-the-art in dynamic scene deblurring.\r", "field": ["Convolutions"], "task": ["Deblurring"], "method": ["Convolution"], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Dynamic Scene Deblurring With Parameter Selective Sharing and Nested Skip Connections"} {"abstract": "In this paper, we aim to address the problem of human interaction recognition\nin videos by exploring the long-term inter-related dynamics among multiple\npersons. Recently, Long Short-Term Memory (LSTM) has become a popular choice to\nmodel individual dynamic for single-person action recognition due to its\nability of capturing the temporal motion information in a range. 
However,\nexisting RNN models focus only on capturing the dynamics of human interaction\nby simply combining all dynamics of individuals or modeling them as a whole.\nSuch models neglect the inter-related dynamics of how human interactions change\nover time. To this end, we propose a novel Hierarchical Long Short-Term\nConcurrent Memory (H-LSTCM) to model the long-term inter-related dynamics among\na group of persons for recognizing the human interactions. Specifically, we\nfirst feed each person's static features into a Single-Person LSTM to learn the\nsingle-person dynamic. Subsequently, the outputs of all Single-Person LSTM\nunits are fed into a novel Concurrent LSTM (Co-LSTM) unit, which mainly\nconsists of multiple sub-memory units, a new cell gate and a new co-memory\ncell. In a Co-LSTM unit, each sub-memory unit stores individual motion\ninformation, while this Co-LSTM unit selectively integrates and stores\ninter-related motion information between multiple interacting persons from\nmultiple sub-memory units via the cell gate and co-memory cell, respectively.\nExtensive experiments on four public datasets validate the effectiveness of the\nproposed H-LSTCM by comparing against baseline and state-of-the-art methods.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Recognition", "Human Interaction Recognition", "Temporal Action Localization"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Collective Activity", "UT", "BIT", "Volleyball"], "metric": ["Accuracy"], "title": "Hierarchical Long Short-Term Concurrent Memory for Human Interaction Recognition"} {"abstract": "Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model the irregular data, e.g., skeletal data in human action recognition, providing an exciting new way to fuse rich structural information for nodes residing in different parts of a graph. In human action recognition, current works introduce a dynamic graph generation mechanism to better capture the underlying semantic skeleton connections and thus improves the performance. In this paper, we provide an orthogonal way to explore the underlying connections. Instead of introducing an expensive dynamic graph generation paradigm, we build a more efficient GCN on a Riemann manifold, which we think is a more suitable space to model the graph data, to make the extracted representations fit the embedding matrix. Specifically, we present a novel spatial-temporal GCN (ST-GCN) architecture which is defined via the Poincar\\'e geometry such that it is able to better model the latent anatomy of the structure data. To further explore the optimal projection dimension in the Riemann space, we mix different dimensions on the manifold and provide an efficient way to explore the dimension for each ST-GCN layer. With the final resulted architecture, we evaluate our method on two current largest scale 3D datasets, i.e., NTU RGB+D and NTU RGB+D 120. 
The comparison results show that the model could achieve a superior performance under any given evaluation metrics with only 40\\% model size when compared with the previous best GCN method, which proves the effectiveness of our model.", "field": ["Graph Models"], "task": ["Action Recognition", "Graph Generation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Mix Dimension in Poincar\u00e9 Geometry for 3D Skeleton-based Action Recognition"} {"abstract": "We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal domain. We release a new dataset of 57k legislative documents from EURLEX, annotated with ~4.3k EUROVOC labels, which is suitable for LMTC, few- and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with label-wise attention perform better than other current state of the art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings further improve performance. We also find that considering only particular zones of the documents is sufficient. This allows us to bypass BERT's maximum text length limit and fine-tune BERT, obtaining the best results in all but zero-shot learning cases.", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Learning Rate Schedules", "Recurrent Neural Networks", "Activation Functions", "Output Functions", "Normalization", "Subword Segmentation", "Language Models", "Word Embeddings", "Feedforward Networks", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Multi-Label Text Classification", "Text Classification", "Zero-Shot Learning"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Bidirectional LSTM", "Residual Connection", "Dense Connections", "ELMo", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["EUR-Lex"], "metric": ["RP@5", "P@5", "nDCG@5", "Micro F1"], "title": "Large-Scale Multi-Label Text Classification on EU Legislation"} {"abstract": "Deep Neural Networks (DNNs) are powerful models that have achieved excellent\nperformance on difficult learning tasks. Although DNNs work well whenever large\nlabeled training sets are available, they cannot be used to map sequences to\nsequences. In this paper, we present a general end-to-end approach to sequence\nlearning that makes minimal assumptions on the sequence structure. Our method\nuses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to\na vector of a fixed dimensionality, and then another deep LSTM to decode the\ntarget sequence from the vector. Our main result is that on an English to\nFrench translation task from the WMT'14 dataset, the translations produced by\nthe LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's\nBLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did\nnot have difficulty on long sentences. For comparison, a phrase-based SMT\nsystem achieves a BLEU score of 33.3 on the same dataset. 
When we used the LSTM\nto rerank the 1000 hypotheses produced by the aforementioned SMT system, its\nBLEU score increases to 36.5, which is close to the previous best result on\nthis task. The LSTM also learned sensible phrase and sentence representations\nthat are sensitive to word order and are relatively invariant to the active and\nthe passive voice. Finally, we found that reversing the order of the words in\nall source sentences (but not target sentences) improved the LSTM's performance\nmarkedly, because doing so introduced many short term dependencies between the\nsource and the target sentence which made the optimization problem easier.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["Machine Translation", "Traffic Prediction"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["PeMS-M", "WMT2014 English-French"], "metric": ["BLEU score", "MAE (60 min)"], "title": "Sequence to Sequence Learning with Neural Networks"} {"abstract": "Beyond depth estimation from a single image, the monocular cue is useful in a broader range of depth inference applications and settings---such as when one can leverage other available depth cues for improved accuracy. Currently, different applications, with different inference tasks and combinations of depth cues, are solved via different specialized networks---trained separately for each application. Instead, we propose a versatile task-agnostic monocular model that outputs a probability distribution over scene depth given an input color image, as a sample approximation of outputs from a patch-wise conditional VAE. We show that this distributional output can be used to enable a variety of inference tasks in different settings, without needing to retrain for each application. Across a diverse set of applications (depth completion, user guided estimation, etc.), our common model yields results with high accuracy---comparable to or surpassing that of state-of-the-art methods dependent on application-specific networks.", "field": ["Generative Models"], "task": ["Depth Completion", "Depth Estimation", "Monocular Depth Estimation"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Generating and Exploiting Probabilistic Monocular Depth Estimates"} {"abstract": "Machine learning algorithms frequently require careful tuning of model\nhyperparameters, regularization terms, and optimization parameters.\nUnfortunately, this tuning is often a \"black art\" that requires expert\nexperience, unwritten rules of thumb, or sometimes brute-force search. Much\nmore appealing is the idea of developing automatic approaches which can\noptimize the performance of a given learning algorithm to the task at hand. In\nthis work, we consider the automatic tuning problem within the framework of\nBayesian optimization, in which a learning algorithm's generalization\nperformance is modeled as a sample from a Gaussian process (GP). The tractable\nposterior distribution induced by the GP leads to efficient use of the\ninformation gathered by previous experiments, enabling optimal choices about\nwhat parameters to try next. Here we show how the effects of the Gaussian\nprocess prior and the associated inference procedure can have a large impact on\nthe success or failure of Bayesian optimization. 
We show that thoughtful\nchoices can lead to results that exceed expert-level performance in tuning\nmachine learning algorithms. We also describe new algorithms that take into\naccount the variable cost (duration) of learning experiments and that can\nleverage the presence of multiple cores for parallel experimentation. We show\nthat these proposed algorithms improve on previous automatic procedures and can\nreach or surpass human expert-level optimization on a diverse set of\ncontemporary algorithms including latent Dirichlet allocation, structured SVMs\nand convolutional neural networks.", "field": ["Non-Parametric Classification"], "task": ["Hyperparameter Optimization"], "method": ["Gaussian Process"], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "Practical Bayesian Optimization of Machine Learning Algorithms"} {"abstract": "Messages in human conversations inherently convey emotions. The task of detecting emotions in textual conversations leads to a wide range of applications such as opinion mining in social networks. However, enabling machines to analyze emotions in conversations is challenging, partly because humans often rely on the context and commonsense knowledge to express emotions. In this paper, we address these challenges by proposing a Knowledge-Enriched Transformer (KET), where contextual utterances are interpreted using hierarchical self-attention and external commonsense knowledge is dynamically leveraged using a context-aware affective graph attention mechanism. Experiments on multiple textual conversation datasets demonstrate that both context and commonsense knowledge are consistently beneficial to the emotion detection performance. In addition, the experimental results show that our KET model outperforms the state-of-the-art models on most of the tested datasets in F1 score.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Recognition in Conversation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["EC", "DailyDialog", "IEMOCAP", "MELD", "EmoryNLP"], "metric": ["Weighted Macro-F1", "F1", "Micro-F1"], "title": "Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations"} {"abstract": "This paper explores how to harvest precise object segmentation masks while minimizing the human interaction cost. To achieve this, we propose an Inside-Outside Guidance (IOG) approach in this work. Concretely, we leverage an inside point that is clicked near the object center and two outside points at the symmetrical corner locations (top-left and bottom-right or top-right and bottom-left) of a tight bounding box that encloses the target object. This results in a total of one foreground click and four background clicks for segmentation. 
The advantages of our IOG are four-fold: 1) the two outside points can help to remove distractions from other objects or background; 2) the inside point can help to eliminate the unrelated regions inside the bounding box; 3) the inside and outside points are easily identified, reducing the confusion raised by the state-of-the-art DEXTR in labeling some extreme samples; 4) our approach naturally supports additional click annotations for further correction. Despite its simplicity, our IOG not only achieves state-of-the-art performance on several popular benchmarks, but also demonstrates strong generalization capability across different domains such as street scenes, aerial imagery and medical images, without fine-tuning. In addition, we also propose a simple two-stage solution that enables our IOG to produce high quality instance segmentation masks from existing datasets with off-the-shelf bounding boxes such as ImageNet and Open Images, demonstrating the superiority of our IOG as an annotation tool.", "field": ["Initialization", "Semantic Segmentation Modules", "Convolutional Neural Networks", "Image Segmentation Models", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Interactive Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Dilated Convolution", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "Residual Network", "ReLU", "Residual Connection", "Bottleneck Residual Block", "DEXTR", "Kaiming Initialization", "Residual Block", "Deep Extreme Cut", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Pyramid Pooling Module"], "dataset": ["Cityscapes val", "COCO", "PASCAL2COCO(Unseen)", "Rooftop", "ssTEM"], "metric": ["Instance Average IoU"], "title": "Interactive Object Segmentation With Inside-Outside Guidance"} {"abstract": "Recent studies in image classification have demonstrated a variety of techniques for improving the performance of Convolutional Neural Networks (CNNs). However, attempts to combine existing techniques to create a practical model are still uncommon. In this study, we carry out extensive experiments to validate that carefully assembling these techniques and applying them to basic CNN models (e.g. ResNet and MobileNet) can improve the accuracy and robustness of the models while minimizing the loss of throughput. Our proposed assembled ResNet-50 shows improvements in top-1 accuracy from 76.3\\% to 82.78\\%, mCE from 76.0\\% to 48.9\\% and mFR from 57.7\\% to 32.3\\% on the ILSVRC2012 validation set. With these improvements, inference throughput only decreases from 536 to 312. To verify the performance improvement in transfer learning, fine-grained classification and image retrieval tasks were tested on several public datasets and showed that the improvement to backbone network performance boosted transfer learning performance significantly. 
Our approach achieved 1st place in the iFood Competition Fine-Grained Visual Recognition at CVPR 2019, and the source code and trained models are available at https://github.com/clovaai/assembled-cnn", "field": ["Image Model Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "Convolutional Neural Networks", "Learning Rate Schedules", "Regularization", "Recurrent Neural Networks", "Activation Functions", "Stochastic Optimization", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Attention Mechanisms", "Skip Connections", "Downsampling", "Skip Connection Blocks"], "task": ["Fine-Grained Image Classification", "Fine-Grained Visual Recognition", "Image Classification", "Image Retrieval", "Transfer Learning"], "method": ["Depthwise Convolution", "Weight Decay", "Dilated Convolution", "Selective Kernel Convolution", "Cosine Annealing", "Average Pooling", "Cutout", "Long Short-Term Memory", "Mixup", "Tanh Activation", "1x1 Convolution", "ResNet-D", "Big-Little Module", "Channel-wise Soft Attention", "Random Horizontal Flip", "AutoAugment", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Dense Connections", "MobileNetV1", "Random Resized Crop", "Xavier Initialization", "Assemble-ResNet", "Batch Normalization", "Label Smoothing", "ColorJitter", "Pointwise Convolution", "Kaiming Initialization", "Selective Kernel", "Sigmoid Activation", "DropBlock", "Color Jitter", "SGD with Momentum", "Softmax", "Anti-Alias Downsampling", "Linear Warmup With Cosine Annealing", "LSTM", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["SOP", "FGVC Aircraft", "Oxford 102 Flowers", "Oxford-IIIT Pets", "ImageNet ReaL", "Food-101", "Stanford Cars", "ImageNet"], "metric": ["Top-1 Error Rate", "Accuracy", "Recall@1", "Top 1 Accuracy"], "title": "Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network"} {"abstract": "We introduce the Action Transformer model for recognizing and localizing human actions in video clips. We repurpose a Transformer-style architecture to aggregate features from the spatiotemporal context around the person whose actions we are trying to classify. We show that by using high-resolution, person-specific, class-agnostic queries, the model spontaneously learns to track individual people and to pick up on semantic context from the actions of others. Additionally its attention mechanism learns to emphasize hands and faces, which are often crucial to discriminate an action - all without explicit supervision other than boxes and class labels. 
We train and test our Action Transformer network on the Atomic Visual Actions (AVA) dataset, outperforming the state-of-the-art by a significant margin using only raw RGB frames as input.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Action Recognition", "Recognizing And Localizing Human Actions"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["AVA v2.1"], "metric": ["GFlops", "Params (M)", "mAP (Val)"], "title": "Video Action Transformer Network"} {"abstract": "Due to their shallow structure, classic graph neural networks (GNNs) fail to model high-order graph structures. Such high-order structures capture critical insights for downstream tasks. Concretely, in recommender systems, disregarding these insights leads to inadequate distillation of collaborative signals. In this paper, we employ collaborative subgraphs (CSGs) and metapaths to explicitly capture these high-order graph structures. We propose meta\\textbf{P}ath and \\textbf{E}ntity-\\textbf{A}ware \\textbf{G}raph \\textbf{N}eural \\textbf{N}etwork (PEAGNN). We extract an enclosing CSG for each user-item pair within its $h$-hop neighbours. Multiple metapath-aware subgraphs are then extracted from the CSG. PEAGNN trains multilayer GNNs to perform information aggregation on such subgraphs. This aggregated information from different metapaths is fused using an attention mechanism. Finally, PEAGNN gives us the representations for node and subgraph, which can be used to train an MLP for predicting scores for target user-item pairs. To leverage the local structure of CSGs, we present entity-awareness that acts as a contrastive regularizer of node embedding. Moreover, PEAGNN can be combined with prominent layers such as GAT, GCN and GraphSage. Our empirical evaluation shows that our proposed technique outperforms competitive baselines on several datasets for the recommendation task. Our analysis demonstrates that PEAGNN also learns meaningful metapath combinations from a given set of metapaths.", "field": ["Graph Models"], "task": ["Link Prediction", "Recommendation Systems"], "method": ["Graph Convolutional Network", "GAT", "Graph Attention Network", "GCN"], "dataset": ["MovieLens 25M", "Yelp"], "metric": ["Hits@10", "nDCG@10", "HR@10"], "title": "Metapath- and Entity-aware Graph Neural Network for Recommendation"} {"abstract": "How do humans recognize an object in a piece of video? Due to the deteriorated quality of a single frame, it may be hard for people to identify an occluded object in this frame by just utilizing information within one image. We argue that there are two important cues for humans to recognize objects in videos: the global semantic information and the local localization information. Recently, plenty of methods adopt self-attention mechanisms to enhance the features in the key frame with either global semantic information or local localization information. In this paper we introduce the memory enhanced global-local aggregation (MEGA) network, which is among the first trials that take full consideration of both global and local information. 
Furthermore, empowered by a novel and carefully-designed Long Range Memory (LRM) module, our proposed MEGA could enable the key frame to get access to much more content than any previous methods. Enhanced by these two sources of information, our method achieves state-of-the-art performance on ImageNet VID dataset. Code is available at \\url{https://github.com/Scalsol/mega.pytorch}.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Object Detection", "Video Object Detection"], "method": ["ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["ImageNet VID"], "metric": ["MAP"], "title": "Memory Enhanced Global-Local Aggregation for Video Object Detection"} {"abstract": "We tackle image question answering (ImageQA) problem by learning a\nconvolutional neural network (CNN) with a dynamic parameter layer whose weights\nare determined adaptively based on questions. For the adaptive parameter\nprediction, we employ a separate parameter prediction network, which consists\nof gated recurrent unit (GRU) taking a question as its input and a\nfully-connected layer generating a set of candidate weights as its output.\nHowever, it is challenging to construct a parameter prediction network for a\nlarge number of parameters in the fully-connected dynamic parameter layer of\nthe CNN. We reduce the complexity of this problem by incorporating a hashing\ntechnique, where the candidate weights given by the parameter prediction\nnetwork are selected using a predefined hash function to determine individual\nweights in the dynamic parameter layer. The proposed network---joint network\nwith the CNN for ImageQA and the parameter prediction network---is trained\nend-to-end through back-propagation, where its weights are initialized using a\npre-trained CNN and GRU. The proposed algorithm illustrates the\nstate-of-the-art performance on all available public ImageQA benchmarks.", "field": ["Recurrent Neural Networks"], "task": ["Image Retrieval with Multi-Modal Query", "Question Answering", "Visual Question Answering"], "method": ["Gated Recurrent Unit", "GRU"], "dataset": ["Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@10"], "title": "Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction"} {"abstract": "Targeted sentiment classification predicts the sentiment polarity on given target mentions in input texts. Dominant methods employ neural networks for encoding the input sentence and extracting relations between target mentions and their contexts. Recently, graph neural network has been investigated for integrating dependency syntax for the task, achieving the state-of-the-art results. However, existing methods do not consider dependency label information, which can be intuitively useful. To solve the problem, we investigate a novel relational graph attention network that integrates typed syntactic dependency information. Results on standard benchmarks show that our method can effectively leverage label information for improving targeted sentiment classification performances. 
Our final model significantly outperforms state-of-the-art syntax-based approaches.", "field": ["Attention Modules", "Output Functions", "Attention Mechanisms"], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": ["Softmax", "Scaled Dot-Product Attention", "Graph Self-Attention"], "dataset": ["MAMS", "SemEval 2014 Task 4 Sub Task 2"], "metric": ["Acc", "Macro-F1", "Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network"} {"abstract": "We examine two fundamental tasks associated with graph representation\nlearning: link prediction and semi-supervised node classification. We present a\nnovel autoencoder architecture capable of learning a joint representation of\nboth local graph structure and available node features for the multi-task\nlearning of link prediction and node classification. Our autoencoder\narchitecture is efficiently trained end-to-end in a single learning stage to\nsimultaneously perform link prediction and node classification, whereas\nprevious related methods require multiple training steps that are difficult to\noptimize. We provide a comprehensive empirical evaluation of our models on nine\nbenchmark graph-structured datasets and demonstrate significant improvement\nover related methods for graph representation learning. Reference code and data\nare available at https://github.com/vuptran/graph-representation-learning", "field": ["Generative Models"], "task": ["Graph Representation Learning", "Link Prediction", "Multi-Task Learning", "Node Classification", "Representation Learning"], "method": ["AutoEncoder"], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Learning to Make Predictions on Graphs with Autoencoders"} {"abstract": "This work make the first attempt to generate articulated human motion\nsequence from a single image. On the one hand, we utilize paired inputs\nincluding human skeleton information as motion embedding and a single human\nimage as appearance reference, to generate novel motion frames, based on the\nconditional GAN infrastructure. On the other hand, a triplet loss is employed\nto pursue appearance-smoothness between consecutive frames. As the proposed\nframework is capable of jointly exploiting the image appearance space and\narticulated/kinematic motion space, it generates realistic articulated motion\nsequence, in contrast to most previous video generation methods which yield\nblurred motion effects. We test our model on two human action datasets\nincluding KTH and Human3.6M, and the proposed framework generates very\npromising results on both datasets.", "field": ["Generative Models", "Convolutions"], "task": ["Gesture-to-Gesture Translation", "Video Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Senz3D", "NTU Hand Digit"], "metric": ["PSNR", "AMT", "IS"], "title": "Skeleton-aided Articulated Motion Generation"} {"abstract": "State-of-the-art object detection networks depend on region proposal\nalgorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN\nhave reduced the running time of these detection networks, exposing region\nproposal computation as a bottleneck. In this work, we introduce a Region\nProposal Network (RPN) that shares full-image convolutional features with the\ndetection network, thus enabling nearly cost-free region proposals. 
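A minimal PyTorch-style sketch of such a shared-feature proposal head (hypothetical channel counts and anchor count, not the paper's exact configuration): a small convolution slides over the backbone feature map and emits, for each of k anchors at every position, one objectness score and four box offsets.

```python
import torch
import torch.nn as nn

class TinyRPNHead(nn.Module):
    """Sketch of an RPN-style head: reuses the backbone feature map and predicts,
    for k anchors per spatial position, an objectness score and 4 box offsets."""
    def __init__(self, in_channels=256, k=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 256, kernel_size=3, padding=1)
        self.objectness = nn.Conv2d(256, k, kernel_size=1)       # k scores per location
        self.box_deltas = nn.Conv2d(256, 4 * k, kernel_size=1)   # 4 offsets per anchor

    def forward(self, feature_map):
        h = torch.relu(self.conv(feature_map))
        return self.objectness(h), self.box_deltas(h)

# toy usage: a 1x256x38x50 backbone feature map
scores, deltas = TinyRPNHead()(torch.randn(1, 256, 38, 50))
print(scores.shape, deltas.shape)  # (1, 9, 38, 50) and (1, 36, 38, 50)
```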
An RPN is a\nfully convolutional network that simultaneously predicts object bounds and\nobjectness scores at each position. The RPN is trained end-to-end to generate\nhigh-quality region proposals, which are used by Fast R-CNN for detection. We\nfurther merge RPN and Fast R-CNN into a single network by sharing their\nconvolutional features---using the recently popular terminology of neural\nnetworks with 'attention' mechanisms, the RPN component tells the unified\nnetwork where to look. For the very deep VGG-16 model, our detection system has\na frame rate of 5fps (including all steps) on a GPU, while achieving\nstate-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS\nCOCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015\ncompetitions, Faster R-CNN and RPN are the foundations of the 1st-place winning\nentries in several tracks. Code has been made publicly available.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Object Detection", "Real-Time Object Detection", "Region Proposal"], "method": ["RPN", "Fast R-CNN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["PASCAL VOC 2007", "CARPK", "SKU-110K"], "metric": ["RMSE", "FPS", "MAP", "MAE", "AP75", "AP"], "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks"} {"abstract": "We propose a conceptually simple and lightweight framework for deep\nreinforcement learning that uses asynchronous gradient descent for optimization\nof deep neural network controllers. We present asynchronous variants of four\nstandard reinforcement learning algorithms and show that parallel\nactor-learners have a stabilizing effect on training allowing all four methods\nto successfully train neural network controllers. The best performing method,\nan asynchronous variant of actor-critic, surpasses the current state-of-the-art\non the Atari domain while training for half the time on a single multi-core CPU\ninstead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds\non a wide variety of continuous motor control problems as well as on a new task\nof navigating random 3D mazes using a visual input.", "field": ["Policy Gradient Methods", "Output Functions", "Regularization", "Recurrent Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks"], "task": ["Atari Games"], "method": ["A2C", "A3C", "Softmax", "Long Short-Term Memory", "Entropy Regularization", "Convolution", "Tanh Activation", "LSTM", "Dense Connections", "Sigmoid Activation"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Asynchronous Methods for Deep Reinforcement Learning"} {"abstract": "Software log analysis helps to maintain the health of software solutions and ensure compliance and security. Existing software systems consist of heterogeneous components emitting logs in various formats. A typical solution is to unify the logs using manually built parsers, which is laborious. Instead, we explore the possibility of automating the parsing task by employing machine translation (MT). We create a tool that generates synthetic Apache log records which we used to train recurrent-neural-network-based MT models. Models' evaluation on real-world logs shows that the models can learn Apache log format and parse individual log records. The median relative edit distance between an actual real-world log record and the MT prediction is less than or equal to 28%. Thus, we show that log parsing using an MT approach is promising.", "field": ["Recurrent Neural Networks"], "task": ["LOG PARSING", "Machine Translation"], "method": ["Gated Recurrent Unit", "GRU", "Long Short-Term Memory", "LSTM"], "dataset": ["V_C (trained on T_H)", "V_B (trained on T_H)", "V_A (trained on T_H)"], "metric": ["Median Relative Edit Distance"], "title": "On Automatic Parsing of Log Records"} {"abstract": "Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. 
We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models.", "field": ["Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Output Functions", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Fine-Grained Image Classification", "Image Classification", "Machine Translation"], "method": ["Average Pooling", "Adam", "Scaled Dot-Product Attention", "Spatially Separable Convolution", "Transformer", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "AmoebaNet", "Layer Normalization", "Label Smoothing", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Dropout", "Rectified Linear Units", "Max Pooling"], "dataset": ["CIFAR-100", "CIFAR-10", "Stanford Cars", "ImageNet", "Birdsnap"], "metric": ["Top 5 Accuracy", "Accuracy", "Percentage correct", "Top 1 Accuracy"], "title": "GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism"} {"abstract": "Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure, while the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e.
multi-label classification, link prediction and visualization.", "field": ["Graph Embeddings"], "task": ["Graph Classification", "Link Prediction", "Network Embedding"], "method": ["SDNE", "Structural Deep Network Embedding"], "dataset": ["BP-fMRI-97", "HIV-DTI-77", "HIV-fMRI-77 "], "metric": ["F1", "Accuracy"], "title": "Structural Deep Network Embedding"} {"abstract": "In recent years, a new interesting task, called emotion-cause pair extraction (ECPE), has emerged in the area of text emotion analysis. It aims at extracting the potential pairs of emotions and their corresponding causes in a document. To solve this task, the existing research employed a two-step framework, which first extracts the individual emotion set and cause set, and then pairs the corresponding emotions and causes. However, such a pipeline of two steps contains some inherent flaws: 1) the modeling does not aim at extracting the final emotion-cause pair directly; 2) the errors from the first step will affect the performance of the second step. To address these shortcomings, in this paper we propose a new end-to-end approach, called ECPE-Two-Dimensional (ECPE-2D), to represent the emotion-cause pairs by a 2D representation scheme. A 2D transformer module and two variants, window-constrained and cross-road 2D transformers, are further proposed to model the interactions of different emotion-cause pairs. The 2D representation, interaction, and prediction are integrated into a joint framework. In addition to the advantages of joint modeling, the experimental results on the benchmark emotion cause corpus show that our approach improves the F1 score of the state-of-the-art from 61.28% to 68.89%.", "field": ["Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Emotion-Cause Pair Extraction", "Emotion Recognition"], "method": ["Stochastic Gradient Descent", "Adam", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Rectified Linear Units", "Bidirectional LSTM", "ReLU", "LSTM", "Dropout", "Embedding Dropout", "SGD", "Sigmoid Activation"], "dataset": ["ECPE"], "metric": ["F1"], "title": "ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction"} {"abstract": "In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve comparable performance to two-stage methods while yielding fast inference speed. In particular, Panoptic-DeepLab adopts the dual-ASPP and dual-decoder structures specific to semantic and instance segmentation, respectively. The semantic segmentation branch is the same as the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression. As a result, our single Panoptic-DeepLab simultaneously ranks first at all three Cityscapes benchmarks, setting the new state-of-the-art of 84.2% mIoU, 39.0% AP, and 65.5% PQ on the test set. Additionally, equipped with MobileNetV3, Panoptic-DeepLab runs nearly in real-time with a single 1025x2049 image (15.8 frames per second), while achieving a competitive performance on Cityscapes (54.1 PQ% on the test set). On the Mapillary Vistas test set, our ensemble of six models attains 42.7% PQ, outperforming the challenge winner in 2018 by a healthy margin of 1.5%.
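The class-agnostic instance branch mentioned above can be illustrated with a toy grouping step in NumPy (hypothetical shapes and predictions, ignoring the thing/stuff masking and score thresholds of the released code): each pixel votes for an instance center by adding its predicted offset to its own coordinates and is then assigned to the nearest detected center.

```python
import numpy as np

# hypothetical predictions for a 4x4 crop: per-pixel offsets to the instance
# center (dy, dx) and a list of detected center coordinates
offsets = np.zeros((2, 4, 4))                      # zero offsets for simplicity
centers = np.array([[1.0, 1.0], [2.0, 3.0]])       # two detected instance centers

ys, xs = np.mgrid[0:4, 0:4].astype(float)
pred_centers = np.stack([ys + offsets[0], xs + offsets[1]], axis=-1)  # H x W x 2 voted centers

# assign every pixel to the nearest detected instance center
dists = np.linalg.norm(pred_centers[:, :, None, :] - centers[None, None, :, :], axis=-1)
instance_id = dists.argmin(axis=-1)                # H x W map of instance indices
print(instance_id)
```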
Finally, our Panoptic-DeepLab also performs on par with several top-down approaches on the challenging COCO dataset. For the first time, we demonstrate a bottom-up approach could deliver state-of-the-art results on panoptic segmentation.", "field": ["Regularization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Regression", "Semantic Segmentation"], "method": ["Depthwise Convolution", "ReLU6", "Squeeze-and-Excitation Block", "Average Pooling", "Inverted Residual Block", "Hard Swish", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Rectified Linear Units", "MobileNetV3", "Dropout", "Depthwise Separable Convolution", "Pointwise Convolution", "Global Average Pooling", "Dense Connections", "Sigmoid Activation"], "dataset": ["Mapillary val", "Cityscapes val", "COCO test-dev", "Cityscapes test"], "metric": ["PQst", "Average Precision", "mIoU", "PQth", "Mean IoU (class)", "PQ", "AP"], "title": "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation"} {"abstract": "Image motion blur usually results from moving objects or camera shakes. Such blur is generally directional and non-uniform. Previous research efforts attempt to solve non-uniform blur by using self-recurrent multi-scale or multi-patch architectures accompanying with self-attention. However, using self-recurrent frameworks typically leads to a longer inference time, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes blur-aware attention networks (BANet) that accomplish accurate and efficient deblurring via a single forward pass. Our BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different degrees and with cascaded parallel dilated convolution to aggregate multi-scale content features. Extensive experimental results on the GoPro and HIDE benchmarks demonstrate that the proposed BANet performs favorably against the state-of-the-art in blurred image restoration and can provide deblurred results in realtime.", "field": ["Convolutions"], "task": ["Deblurring", "Image Restoration"], "method": ["Dilated Convolution", "Convolution"], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring"} {"abstract": "Medical image segmentation is an essential prerequisite for developing healthcare systems, especially for disease diagnosis and treatment planning. On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard and achieved tremendous success. However, due to the intrinsic locality of convolution operations, U-Net generally demonstrates limitations in explicitly modeling long-range dependency. Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures with innate global self-attention mechanisms, but can result in limited localization abilities due to insufficient low-level details. In this paper, we propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation. On one hand, the Transformer encodes tokenized image patches from a convolution neural network (CNN) feature map as the input sequence for extracting global contexts. 
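A minimal sketch of this tokenization step (hypothetical tensor shapes and layer sizes, not the released TransUNet code): the CNN feature map is flattened into a sequence of per-position tokens, projected, and passed through a standard Transformer encoder for global self-attention.

```python
import torch
import torch.nn as nn

feature_map = torch.randn(1, 512, 14, 14)          # hypothetical CNN output: B x C x H x W
tokens = feature_map.flatten(2).transpose(1, 2)    # B x (H*W) x C sequence of patch tokens
tokens = nn.Linear(512, 256)(tokens)               # project tokens to the Transformer width

encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
context = encoder(tokens)                          # global self-attention over all positions
print(context.shape)                               # torch.Size([1, 196, 256])
```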
On the other hand, the decoder upsamples the encoded features which are then combined with the high-resolution CNN feature maps to enable precise localization. We argue that Transformers can serve as strong encoders for medical image segmentation tasks, with the combination of U-Net to enhance finer details by recovering localized spatial information. TransUNet achieves superior performances to various competing methods on different medical applications including multi-organ segmentation and cardiac segmentation. Code and models are available at https://github.com/Beckschen/TransUNet.", "field": ["Semantic Segmentation Models", "Output Functions", "Regularization", "Attention Modules", "Stochastic Optimization", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Pooling Operations", "Attention Mechanisms", "Skip Connections"], "task": ["Cardiac Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": ["U-Net", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Concatenated Skip Connection", "Convolution", "Rectified Linear Units", "ReLU", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "Label Smoothing", "Dense Connections", "Max Pooling"], "dataset": ["Synapse multi-organ CT", "Automatic Cardiac Diagnosis Challenge (ACDC)"], "metric": ["Avg HD", "Avg DSC"], "title": "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation"} {"abstract": "Many interesting problems in machine learning are being revisited with new\ndeep learning tools. For graph-based semisupervised learning, a recent\nimportant development is graph convolutional networks (GCNs), which nicely\nintegrate local vertex features and graph topology in the convolutional layers.\nAlthough the GCN model compares favorably with other state-of-the-art methods,\nits mechanisms are not clear and it still requires a considerable amount of\nlabeled data for validation and model selection. In this paper, we develop\ndeeper insights into the GCN model and address its fundamental limits. First,\nwe show that the graph convolution of the GCN model is actually a special form\nof Laplacian smoothing, which is the key reason why GCNs work, but it also\nbrings potential concerns of over-smoothing with many convolutional layers.\nSecond, to overcome the limits of the GCN model with shallow architectures, we\npropose both co-training and self-training approaches to train GCNs. Our\napproaches significantly improve GCNs in learning with very few labels, and\nexempt them from requiring additional labels for validation. Extensive\nexperiments on benchmarks have verified our theory and proposals.", "field": ["Convolutions", "Graph Models"], "task": ["Model Selection", "Node Classification"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["Facebook", "Brazil Air-Traffic", "Europe Air-Traffic", "Wiki-Vote", "USA Air-Traffic", "Flickr"], "metric": ["Accuracy"], "title": "Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning"} {"abstract": "We introduce H3DNet, which takes a colorless 3D point cloud as input and outputs a collection of oriented object bounding boxes (or BB) and their semantic labels. The critical idea of H3DNet is to predict a hybrid set of geometric primitives, i.e., BB centers, BB face centers, and BB edge centers. 
We show how to convert the predicted geometric primitives into object proposals by defining a distance function between an object and the geometric primitives. This distance function enables continuous optimization of object proposals, and its local minimums provide high-fidelity object proposals. H3DNet then utilizes a matching and refinement module to classify object proposals into detected objects and fine-tune the geometric parameters of the detected objects. The hybrid set of geometric primitives not only provides more accurate signals for object detection than using a single type of geometric primitives, but it also provides an overcomplete set of constraints on the resulting 3D layout. Therefore, H3DNet can tolerate outliers in predicted geometric primitives. Our model achieves state-of-the-art 3D detection results on two large datasets with real 3D scans, ScanNet and SUN RGB-D.", "field": ["Object Detection Models"], "task": ["3D Object Detection", "Object Detection"], "method": ["H3DNet"], "dataset": ["ScanNetV2", "SUN-RGBD val"], "metric": ["mAP@0.5", "mAP@0.25", "MAP"], "title": "H3DNet: 3D Object Detection Using Hybrid Geometric Primitives"} {"abstract": "This paper presents our pioneering effort for emotion recognition in conversation (ERC) with pre-trained language models. Unlike regular documents, conversational utterances appear alternately from different parties and are usually organized as hierarchical structures in previous work. Such structures are not conducive to the application of pre-trained language models such as XLNet. To address this issue, we propose an all-in-one XLNet model, namely DialogXL, with enhanced memory to store longer historical context and dialog-aware self-attention to deal with the multi-party structures. Specifically, we first modify the recurrence mechanism of XLNet from segment-level to utterance-level in order to better model the conversational data. Second, we introduce dialog-aware self-attention in replacement of the vanilla self-attention in XLNet to capture useful intra- and inter-speaker dependencies. Extensive experiments are conducted on four ERC benchmarks with mainstream models presented for comparison. The experimental results show that the proposed model outperforms the baselines on all the datasets. Several other experiments such as ablation study and error analysis are also conducted and the results confirm the role of the critical modules of DialogXL.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Tokenizers", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": ["XLNet", "Layer Normalization", "Byte Pair Encoding", "BPE", "GELU", "Adam", "Softmax", "Multi-Head Attention", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "SentencePiece", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["IEMOCAP", "MELD", "EmoryNLP", "DailyDialog"], "metric": ["Weighted Macro-F1", "F1", "Micro-F1"], "title": "DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition"} {"abstract": "Learning with noisy labels, which aims to reduce expensive labors on accurate\nannotations, has become imperative in the Big Data era. 
Previous noise\ntransition based methods have achieved promising results and presented a\ntheoretical guarantee on performance in the case of class-conditional noise.\nHowever, this type of approach critically depends on an accurate\npre-estimation of the noise transition, which is usually impractical.\nSubsequent improvement adapts the pre-estimation along with the training\nprogress via a Softmax layer. However, the parameters in the Softmax layer are\nhighly tweaked for the fragile performance due to the ill-posed stochastic\napproximation. To address these issues, we propose a Latent Class-Conditional\nNoise model (LCCN) that naturally embeds the noise transition under a Bayesian\nframework. By projecting the noise transition into a Dirichlet-distributed\nspace, the learning is constrained on a simplex based on the whole dataset,\ninstead of some ad-hoc parametric space. We then deduce a dynamic label\nregression method for LCCN to iteratively infer the latent labels, to\nstochastically train the classifier and to model the noise. Our approach\nsafeguards the bounded update of the noise transition, which avoids the previous\narbitrary tuning via a batch of samples. We further generalize LCCN for\nopen-set noisy labels and the semi-supervised setting. We perform extensive\nexperiments with the controllable noise data sets, CIFAR-10 and CIFAR-100, and\nthe agnostic noise data sets, Clothing1M and WebVision17. The experimental\nresults have demonstrated that the proposed model outperforms several\nstate-of-the-art methods.", "field": ["Output Functions"], "task": ["Image Classification", "Learning with noisy labels", "Regression"], "method": ["Softmax"], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Safeguarded Dynamic Label Regression for Generalized Noisy Supervision"} {"abstract": "We aim to localize objects in images using image-level supervision only.\nPrevious approaches to this problem mainly focus on discriminative object\nregions and often fail to locate precise object boundaries. We address this\nproblem by introducing two types of context-aware guidance models, additive and\ncontrastive models, that leverage their surrounding context regions to improve\nlocalization. The additive model encourages the predicted object region to be\nsupported by its surrounding context region. The contrastive model encourages\nthe predicted object region to be outstanding from its surrounding context\nregion. Our approach benefits from the recent success of convolutional neural\nnetworks for object recognition and extends Fast R-CNN to weakly supervised\nobject localization. Extensive experimental evaluation on the PASCAL VOC 2007\nand 2012 benchmarks shows that our context-aware approach significantly improves\nweakly supervised localization and detection.", "field": ["Convolutions", "RoI Feature Extractors", "Object Detection Models", "Output Functions"], "task": ["Object Localization", "Object Recognition", "Weakly Supervised Object Detection", "Weakly-Supervised Object Localization"], "method": ["Fast R-CNN", "Softmax", "RoIPool", "Convolution"], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test", "Charades"], "metric": ["MAP"], "title": "ContextLocNet: Context-Aware Deep Network Models for Weakly Supervised Localization"} {"abstract": "Multi-emotion sentiment classification is a natural language processing (NLP)\nproblem with valuable use cases on real-world data.
We demonstrate that\nlarge-scale unsupervised language modeling combined with finetuning offers a\npractical solution to this task on difficult datasets, including those with\nlabel class imbalance and domain-specific context. By training an\nattention-based Transformer network (Vaswani et al. 2017) on 40GB of text\n(Amazon reviews) (McAuley et al. 2015) and fine-tuning on the training set, our\nmodel achieves a 0.69 F1 score on the SemEval Task 1:E-c multi-dimensional\nemotion classification problem (Mohammad et al. 2018), based on the Plutchik\nwheel of emotions (Plutchik 1979). These results are competitive with state of\nthe art models, including strong F1 scores on difficult (emotion) categories\nsuch as Fear (0.73), Disgust (0.77) and Anger (0.78), as well as competitive\nresults on rare categories such as Anticipation (0.42) and Surprise (0.37).\nFurthermore, we demonstrate our application on a real world text classification\ntask. We create a narrowly collected text dataset of real tweets on several\ntopics, and show that our finetuned model outperforms general purpose\ncommercially available APIs for sentiment and multidimensional emotion\nclassification on this dataset by a significant margin. We also perform a\nvariety of additional studies, investigating properties of deep learning\narchitectures, datasets and algorithms for achieving practical multidimensional\nsentiment classification. Overall, we find that unsupervised language modeling\nand finetuning is a simple framework for achieving high quality results on\nreal-world sentiment classification.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Emotion Classification", "Language Modelling", "Sentiment Analysis", "Text Classification"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["SemEval 2018 Task 1E-c", "SST-2 Binary classification"], "metric": ["Macro-F1", "Accuracy"], "title": "Practical Text Classification With Large Pre-Trained Language Models"} {"abstract": "The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical frames, where spatial appearances and temporal variations are two crucial structures. This paper models these structures by presenting a predictive recurrent neural network (PredRNN). This architecture is enlightened by the idea that spatiotemporal predictive learning should memorize both spatial appearances and temporal variations in a unified memory pool. Concretely, memory states are no longer constrained inside each LSTM unit. Instead, they are allowed to zigzag in two directions: across stacked RNN layers vertically and through all RNN states horizontally. The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously. 
PredRNN achieves the state-of-the-art prediction performance on three video prediction datasets and is a more general framework, that can be easily extended to other predictive learning tasks by integrating with other architectures.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Video Prediction"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Human3.6M"], "metric": ["MAE", "SSIM", "MSE"], "title": "PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs"} {"abstract": "We introduce a deep memory network for aspect level sentiment classification.\nUnlike feature-based SVM and sequential neural models such as LSTM, this\napproach explicitly captures the importance of each context word when inferring\nthe sentiment polarity of an aspect. Such importance degree and text\nrepresentation are calculated with multiple computational layers, each of which\nis a neural attention model over an external memory. Experiments on laptop and\nrestaurant datasets demonstrate that our approach performs comparable to\nstate-of-art feature based SVM system, and substantially better than LSTM and\nattention-based LSTM architectures. On both datasets we show that multiple\ncomputational layers could improve the performance. Moreover, our approach is\nalso fast. The deep memory network with 9 layers is 15 times faster than LSTM\nwith a CPU implementation.", "field": ["Recurrent Neural Networks", "Activation Functions", "Non-Parametric Classification", "Working Memory Models"], "task": ["Aspect-Based Sentiment Analysis"], "method": ["Memory Network", "Support Vector Machine", "Long Short-Term Memory", "SVM", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Aspect Level Sentiment Classification with Deep Memory Network"} {"abstract": "We present an integrated framework for using Convolutional Networks for\nclassification, localization and detection. We show how a multiscale and\nsliding window approach can be efficiently implemented within a ConvNet. We\nalso introduce a novel deep learning approach to localization by learning to\npredict object boundaries. Bounding boxes are then accumulated rather than\nsuppressed in order to increase detection confidence. We show that different\ntasks can be learned simultaneously using a single shared network. This\nintegrated framework is the winner of the localization task of the ImageNet\nLarge Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very\ncompetitive results for the detection and classifications tasks. In\npost-competition work, we establish a new state of the art for the detection\ntask. 
Finally, we release a feature extractor from our best model called\nOverFeat.", "field": ["Image Data Augmentation", "Output Functions", "Regularization", "Stochastic Optimization", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Image Classification", "Object Detection", "Object Recognition"], "method": ["Weight Decay", "SGD with Momentum", "Softmax", "Random Horizontal Flip", "Random Resized Crop", "OverFeat", "Convolution", "Rectified Linear Units", "ReLU", "Dense Connections", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks"} {"abstract": "Real-time recognition of dynamic hand gestures from video streams is a challenging task since (i) there is no indication when a gesture starts and ends in the video, (ii) performed gestures should only be recognized once, and (iii) the entire architecture should be designed considering the memory and power budget. In this work, we address these challenges by proposing a hierarchical structure enabling offline-working convolutional neural network (CNN) architectures to operate online efficiently by using sliding window approach. The proposed architecture consists of two models: (1) A detector which is a lightweight CNN architecture to detect gestures and (2) a classifier which is a deep CNN to classify the detected gestures. In order to evaluate the single-time activations of the detected gestures, we propose to use Levenshtein distance as an evaluation metric since it can measure misclassifications, multiple detections, and missing detections at the same time. We evaluate our architecture on two publicly available datasets - EgoGesture and NVIDIA Dynamic Hand Gesture Datasets - which require temporal detection and classification of the performed hand gestures. ResNeXt-101 model, which is used as a classifier, achieves the state-of-the-art offline classification accuracy of 94.04% and 83.82% for depth modality on EgoGesture and NVIDIA benchmarks, respectively. In real-time detection and classification, we obtain considerable early detections while achieving performances close to offline operation. The codes and pretrained models used in this work are publicly available.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition", "Hierarchical structure"], "method": ["ResNeXt Block", "Average Pooling", "Grouped Convolution", "ResNeXt", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units"], "dataset": ["EgoGesture", "NVGesture"], "metric": ["Accuracy"], "title": "Real-time Hand Gesture Detection and Classification Using Convolutional Neural Networks"} {"abstract": "Modeling the distribution of natural images is a landmark problem in\nunsupervised learning. This task requires an image model that is at once\nexpressive, tractable and scalable. We present a deep neural network that\nsequentially predicts the pixels in an image along the two spatial dimensions.\nOur method models the discrete probability of the raw pixel values and encodes\nthe complete set of dependencies in the image. 
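The sequential, fully dependent modelling described here corresponds to the standard autoregressive factorization of the image likelihood over pixels taken in raster order (for an n x n image):

```latex
p(\mathbf{x}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \ldots, x_{i-1})
```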
Architectural novelties include\nfast two-dimensional recurrent layers and an effective use of residual\nconnections in deep recurrent networks. We achieve log-likelihood scores on\nnatural images that are considerably better than the previous state of the art.\nOur main results also provide benchmarks on the diverse ImageNet dataset.\nSamples generated from the model appear crisp, varied and globally coherent.", "field": ["Generative Models", "Recurrent Neural Networks", "Activation Functions", "Convolutions"], "task": ["2D Object Detection", "Image Generation"], "method": ["Masked Convolution", "Long Short-Term Memory", "PixelRNN", "Tanh Activation", "Pixel Recurrent Neural Network", "LSTM", "Sigmoid Activation"], "dataset": ["Binarized MNIST", "ImageNet 32x32", "CIFAR-10"], "metric": ["nats", "bits/dimension", "bpd"], "title": "Pixel Recurrent Neural Networks"} {"abstract": "This paper presents a novel approach for learning instance segmentation with image-level class labels as supervision. Our approach generates pseudo instance segmentation labels of training images, which are used to train a fully supervised model. For generating the pseudo labels, we first identify confident seed areas of object classes from attention maps of an image classification model, and propagate them to discover the entire instance areas with accurate boundaries. To this end, we propose IRNet, which estimates rough areas of individual instances and detects boundaries between different object classes. It thus enables to assign instance labels to the seeds and to propagate them within the boundaries so that the entire areas of instances can be estimated accurately. Furthermore, IRNet is trained with inter-pixel relations on the attention maps, thus no extra supervision is required. Our method with IRNet achieves an outstanding performance on the PASCAL VOC 2012 dataset, surpassing not only previous state-of-the-art trained with the same level of supervision, but also some of previous models relying on stronger supervision.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Instance Segmentation", "Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Weakly Supervised Learning of Instance Segmentation with Inter-pixel Relations"} {"abstract": "We introduce the \"exponential linear unit\" (ELU) which speeds up learning in\ndeep neural networks and leads to higher classification accuracies. Like\nrectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs\n(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for\npositive values. However, ELUs have improved learning characteristics compared\nto the units with other activation functions. In contrast to ReLUs, ELUs have\nnegative values which allows them to push mean unit activations closer to zero\nlike batch normalization but with lower computational complexity. 
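Concretely, the unit is the piecewise function below (a minimal NumPy sketch using the commonly chosen default alpha = 1):

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU: identity for positive inputs, alpha*(exp(x)-1) for negative inputs,
    so activations can take negative values and saturate towards -alpha."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-2.0, -0.5, 0.0, 1.5])))
```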
Mean shifts\ntoward zero speed up learning by bringing the normal gradient closer to the\nunit natural gradient because of a reduced bias shift effect. While LReLUs and\nPReLUs have negative values, too, they do not ensure a noise-robust\ndeactivation state. ELUs saturate to a negative value with smaller inputs and\nthereby decrease the forward propagated variation and information. Therefore,\nELUs code the degree of presence of particular phenomena in the input, while\nthey do not quantitatively model the degree of their absence. In experiments,\nELUs lead not only to faster learning, but also to significantly better\ngeneralization performance than ReLUs and LReLUs on networks with more than 5\nlayers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with\nbatch normalization while batch normalization does not improve ELU networks.\nELU networks are among the top 10 reported CIFAR-10 results and yield the best\npublished result on CIFAR-100, without resorting to multi-view evaluation or\nmodel averaging. On ImageNet, ELU networks considerably speed up learning\ncompared to a ReLU network with the same architecture, obtaining less than 10%\nclassification error for a single crop, single model network.", "field": ["Activation Functions", "Normalization"], "task": ["Image Classification"], "method": ["Exponential Linear Unit", "Batch Normalization", "ReLU", "ELU", "Rectified Linear Units"], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)"} {"abstract": "We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition.", "field": ["Regularization", "Attention Modules", "Learning Rate Schedules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Clustering", "Feedforward Networks", "Attention Mechanisms", "Distributions", "Skip Connections"], "task": ["Self-Supervised Learning", "Speech Recognition"], "method": ["Gumbel Softmax", "Weight Decay", "k-Means Clustering", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations"} {"abstract": "Deeper neural networks are more difficult to train. We present a residual\nlearning framework to ease the training of networks that are substantially\ndeeper than those used previously. We explicitly reformulate the layers as\nlearning residual functions with reference to the layer inputs, instead of\nlearning unreferenced functions. We provide comprehensive empirical evidence\nshowing that these residual networks are easier to optimize, and can gain\naccuracy from considerably increased depth. 
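A minimal sketch of the residual reformulation (hypothetical layer sizes, omitting the downsampling and bottleneck variants): the stacked layers learn a residual F(x) that is added back to the block input through an identity shortcut.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = relu(F(x) + x), where F is two 3x3 convolutions with batch norm."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(residual + x)   # identity shortcut

out = BasicResidualBlock()(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```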
On the ImageNet dataset we evaluate\nresidual nets with a depth of up to 152 layers---8x deeper than VGG nets but\nstill having lower complexity. An ensemble of these residual nets achieves\n3.57% error on the ImageNet test set. This result won the 1st place on the\nILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100\nand 1000 layers.\n The depth of representations is of central importance for many visual\nrecognition tasks. Solely due to our extremely deep representations, we obtain\na 28% relative improvement on the COCO object detection dataset. Deep residual\nnets are foundations of our submissions to ILSVRC & COCO 2015 competitions,\nwhere we also won the 1st places on the tasks of ImageNet detection, ImageNet\nlocalization, COCO detection, and COCO segmentation.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Stochastic Optimization", "Learning Rate Schedules", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Adaptation", "Domain Generalization", "Fine-Grained Image Classification", "Image Classification", "Image-to-Image Translation", "Object Detection", "Person Re-Identification", "Retinal OCT Disease Classification", "Semantic Segmentation"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "ResNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Random Resized Crop", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "SGD with Momentum", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["GTAV-to-Cityscapes Labels", "Cityscapes val", "Office-Home", "PCam", "Srinivasan2014", "UCF-QNRF", "Office-31", "Syn2Real-C", "OCT2017", "ImageNet ReaL", "ImageNet-R", "COCO test-dev", "ImageNet-A", "PASCAL VOC 2007", "Stanford Cars", "ImageNet"], "metric": ["Number of params", "Acc", "Top 1 Accuracy", "MAP", "mIoU", "box AP", "Top-1 Error Rate", "Sensitivity", "MAE", "AUC", "Accuracy", "Top 5 Accuracy", "Top-1 accuracy %", "Average Accuracy"], "title": "Deep Residual Learning for Image Recognition"} {"abstract": "Convolutional neural networks typically encode an input image into a series of intermediate features with decreasing resolutions. While this structure is suited to classification tasks, it does not perform well for tasks requiring simultaneous recognition and localization (e.g., object detection). The encoder-decoder architectures are proposed to resolve this by applying a decoder network onto a backbone model designed for classification tasks. In this paper, we argue that the encoder-decoder architecture is ineffective in generating strong multi-scale features because of the scale-decreased backbone. We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. Using similar building blocks, SpineNet models outperform ResNet-FPN models by ~3% AP at various scales while using 10-20% fewer FLOPs. In particular, SpineNet-190 achieves 52.5% AP with a Mask R-CNN detector and achieves 52.1% AP with a RetinaNet detector on COCO for a single model without test-time augmentation, significantly outperforming the prior art of detectors. SpineNet can transfer to classification tasks, achieving 5% top-1 accuracy improvement on a challenging iNaturalist fine-grained dataset. 
Code is at: https://github.com/tensorflow/tpu/tree/master/models/official/detection.", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Policy Gradient Methods", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Stochastic Optimization", "Recurrent Neural Networks", "Loss Functions", "Feedforward Networks", "Neural Architecture Search", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections"], "task": ["Image Classification", "Instance Segmentation", "Neural Architecture Search", "Object Detection", "Real-Time Object Detection"], "method": ["Weight Decay", "Cosine Annealing", "Average Pooling", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "RoIAlign", "Proximal Policy Optimization", "ResNet", "Random Horizontal Flip", "Entropy Regularization", "Convolution", "NAS-FPN", "ReLU", "Residual Connection", "FPN", "Dense Connections", "Swish", "Focal Loss", "Random Resized Crop", "Batch Normalization", "Residual Network", "PPO", "Kaiming Initialization", "Neural Architecture Search", "SpineNet", "Sigmoid Activation", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Linear Warmup With Cosine Annealing", "LSTM", "Bottleneck Residual Block", "Stochastic Depth", "Mask R-CNN", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["iNaturalist", "COCO", "COCO minival", "COCO test-dev", "ImageNet"], "metric": ["Number of params", "APM", "Top 1 Accuracy", "inference time (ms)", "MAP", "box AP", "AP75", "APS", "APL", "AP50", "Top 5 Accuracy", "mask AP"], "title": "SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization"} {"abstract": "Variational autoencoders (VAEs) defined over SMILES string and graph-based representations of molecules promise to improve the optimization of molecular properties, thereby revolutionizing the pharmaceuticals and materials industries. However, these VAEs are hindered by the non-unique nature of SMILES strings and the computational cost of graph convolutions. To efficiently pass messages along all paths through the molecular graph, we encode multiple SMILES strings of a single molecule using a set of stacked recurrent neural networks, pooling hidden representations of each atom between SMILES representations, and use attentional pooling to build a final fixed-length latent representation. By then decoding to a disjoint set of SMILES strings of the molecule, our All SMILES VAE learns an almost bijective mapping between molecules and latent representations near the high-probability-mass subspace of the prior. Our SMILES-derived but molecule-based latent representations significantly surpass the state-of-the-art in a variety of fully- and semi-supervised property regression and molecular property optimization tasks.", "field": ["Generative Models"], "task": ["Drug Discovery", "Regression"], "method": ["VAE", "Variational Autoencoder"], "dataset": ["Tox21"], "metric": ["AUC"], "title": "All SMILES Variational Autoencoder"} {"abstract": "Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. 
In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Word Embeddings", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Named Entity Recognition", "Part-Of-Speech Tagging"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "fastText", "Attention Dropout", "Multi-Head Attention", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["UD"], "metric": ["Avg accuracy"], "title": "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"} {"abstract": "This paper tackles the problem of motion deblurring of dynamic scenes. Although end-to-end fully convolutional designs have recently advanced the state-of-the-art in non-uniform motion deblurring, their performance-complexity trade-off is still sub-optimal. Existing approaches achieve a large receptive field by increasing the number of generic convolution layers and kernel-size, but this comes at the expense of an increase in model size and inference speed. In this work, we propose an efficient pixel adaptive and feature attentive design for handling large blur variations across different spatial locations and processing each test image adaptively. We also propose an effective content-aware global-local filtering module that significantly improves performance by considering not only global dependencies but also dynamically exploiting neighbouring pixel information. We use a patch-hierarchical attentive architecture composed of the above module that implicitly discovers the spatial variations in the blur present in the input image and in turn, performs local and global modulation of intermediate features. Extensive qualitative and quantitative comparisons with prior art on deblurring benchmarks demonstrate that our design offers significant improvements over the state-of-the-art in accuracy as well as speed.", "field": ["Convolutions"], "task": ["Deblurring"], "method": ["Convolution"], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Spatially-Attentive Patch-Hierarchical Network for Adaptive Motion Deblurring"} {"abstract": "We explore the use of Evolution Strategies (ES), a class of black box\noptimization algorithms, as an alternative to popular MDP-based RL techniques\nsuch as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show\nthat ES is a viable solution strategy that scales extremely well with the\nnumber of CPUs available: By using a novel communication strategy based on\ncommon random numbers, our ES implementation only needs to communicate scalars,\nmaking it possible to scale to over a thousand parallel workers. 
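A rough single-worker sketch of the ES gradient estimator (hypothetical objective and hyper-parameters; the paper's distributed version additionally shares random seeds so that workers only need to exchange scalar returns):

```python
import numpy as np

def es_step(theta, objective, sigma=0.1, alpha=0.02, population=50,
            rng=np.random.default_rng(0)):
    """One ES update: perturb parameters with Gaussian noise, evaluate returns,
    and move along the return-weighted average perturbation."""
    eps = rng.standard_normal((population, theta.size))
    returns = np.array([objective(theta + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize returns
    grad_estimate = eps.T @ returns / (population * sigma)
    return theta + alpha * grad_estimate

# toy objective: maximize -||theta - 3||^2
theta = np.zeros(5)
for _ in range(200):
    theta = es_step(theta, lambda t: -np.sum((t - 3.0) ** 2))
print(theta.round(2))  # should approach [3, 3, 3, 3, 3]
```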
This allows us\nto solve 3D humanoid walking in 10 minutes and obtain competitive results on\nmost Atari games after one hour of training. In addition, we highlight several\nadvantages of ES as a black box optimization technique: it is invariant to\naction frequency and delayed rewards, tolerant of extremely long horizons, and\ndoes not need temporal discounting or value function approximation.", "field": ["Off-Policy TD Control"], "task": ["Atari Games", "Q-Learning"], "method": ["Q-Learning"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede"], "metric": ["Score"], "title": "Evolution Strategies as a Scalable Alternative to Reinforcement Learning"} {"abstract": "We introduce a simple baseline for action localization on the AVA dataset.\nThe model builds upon the Faster R-CNN bounding box detection framework,\nadapted to operate on pure spatiotemporal features - in our case produced\nexclusively by an I3D model pretrained on Kinetics. This model obtains 21.9%\naverage AP on the validation set of AVA v2.1, up from 14.5% for the best RGB\nspatiotemporal model used in the original AVA paper (which was pretrained on\nKinetics and ImageNet), and up from 11.3 of the publicly available baseline\nusing a ResNet101 image feature extractor, that was pretrained on ImageNet. Our\nfinal model obtains 22.8%/21.9% mAP on the val/test sets and outperforms all\nsubmissions to the AVA challenge at CVPR 2018.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Action Localization", "Action Recognition"], "method": ["RPN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["AVA v2.1"], "metric": ["mAP (Val)"], "title": "A Better Baseline for AVA"} {"abstract": "We present a novel deep learning architecture to address the natural language\ninference (NLI) task. Existing approaches mostly rely on simple reading\nmechanisms for independent encoding of the premise and hypothesis. Instead, we\npropose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to\nefficiently model the relationship between a premise and a hypothesis during\nencoding and inference. We also introduce a sophisticated ensemble strategy to\ncombine our proposed models, which noticeably improves final predictions.\nFinally, we demonstrate how the results can be improved further with an\nadditional preprocessing step. 
Our evaluation shows that DR-BiLSTM obtains the\nbest single model and ensemble model results achieving the new state-of-the-art\nscores on the Stanford NLI dataset.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Natural Language Inference"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference"} {"abstract": "The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at AdapterHub.ml", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Cross-Lingual Transfer", "Named Entity Recognition", "Question Answering"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["XCOPA"], "metric": ["Accuracy"], "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer"} {"abstract": "Recognizing irregular text in natural scene images is challenging due to the\nlarge variance in text appearance, such as curvature, orientation and\ndistortion. Most existing approaches rely heavily on sophisticated model\ndesigns and/or extra fine-grained annotations, which, to some extent, increase\nthe difficulty in algorithm implementation and data collection. In this work,\nwe propose an easy-to-implement strong baseline for irregular scene text\nrecognition, using off-the-shelf neural network components and only word-level\nannotations. It is composed of a $31$-layer ResNet, an LSTM-based\nencoder-decoder framework and a 2-dimensional attention module. 
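The MAD-X entry above builds on small adapter modules inserted into a frozen pretrained transformer. A minimal PyTorch sketch of a generic residual bottleneck adapter is shown below; the hidden and bottleneck sizes, and the placement inside a host model, are assumptions, and the invertible adapters specific to that paper are not shown.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic residual bottleneck adapter: down-project, nonlinearity, up-project, add."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # project to a small bottleneck
        self.up = nn.Linear(bottleneck, hidden_size)     # project back to the hidden size
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen backbone's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage: wrap the output of a (frozen) transformer layer.
x = torch.randn(2, 16, 768)          # (batch, sequence, hidden)
adapter = BottleneckAdapter()
print(adapter(x).shape)              # torch.Size([2, 16, 768])
```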
Despite its\nsimplicity, the proposed method is robust and achieves state-of-the-art\nperformance on both regular and irregular scene text recognition benchmarks.\nCode is available at: https://tinyurl.com/ShowAttendRead", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Irregular Text Recognition", "Scene Text", "Scene Text Recognition"], "method": ["ResNet", "Average Pooling", "Residual Block", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ICDAR2013", "ICDAR2015", "SVT"], "metric": ["Accuracy"], "title": "Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition"} {"abstract": "Scale variance among different sizes of body parts and objects is a challenging problem for visual recognition tasks. Existing works usually design dedicated backbone or apply Neural architecture Search(NAS) for each task to tackle this challenge. However, existing works impose significant limitations on the design or search space. To solve these problems, we present ScaleNAS, a one-shot learning method for exploring scale-aware representations. ScaleNAS solves multiple tasks at a time by searching multi-scale feature aggregation. ScaleNAS adopts a flexible search space that allows an arbitrary number of blocks and cross-scale feature fusions. To cope with the high search cost incurred by the flexible space, ScaleNAS employs one-shot learning for multi-scale supernet driven by grouped sampling and evolutionary search. Without further retraining, ScaleNet can be directly deployed for different visual recognition tasks with superior performance. We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation. ScaleNet-P and ScaleNet-S outperform existing manually crafted and NAS-based methods in both tasks. When applying ScaleNet-P to bottom-up human pose estimation, it surpasses the state-of-the-art HigherHRNet. In particular, ScaleNet-P4 achieves 71.6% AP on COCO test-dev, achieving new state-of-the-art result.", "field": ["Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Multi-Person Pose Estimation", "Neural Architecture Search", "One-Shot Learning", "Pose Estimation", "Semantic Segmentation"], "method": ["Average Pooling", "Scale Aggregation Block", "ScaleNet", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Connection", "Bottleneck Residual Block", "Dense Connections", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CrowdPose", "COCO test-dev"], "metric": ["APM", "mAP @0.5:0.95", "AR50", "AP75", "AP", "APL", "AP50", "AR"], "title": "ScaleNAS: One-Shot Learning of Scale-Aware Representations for Visual Recognition"} {"abstract": "Temporal relational modeling in video is essential for human action understanding, such as action recognition and action segmentation. 
Although Graph Convolution Networks (GCNs) have shown promising advantages in relation reasoning on many tasks, it is still a challenge to apply graph convolution networks on long video sequences effectively. The main reason is that large number of nodes (i.e., video frames) makes GCNs hard to capture and model temporal relations in videos. To tackle this problem, in this paper, we introduce an effective GCN module, Dilated Temporal Graph Reasoning Module (DTGRM), designed to model temporal relations and dependencies between video frames at various time spans. In particular, we capture and model temporal relations via constructing multi-level dilated temporal graphs where the nodes represent frames from different moments in video. Moreover, to enhance temporal reasoning ability of the proposed model, an auxiliary self-supervised task is proposed to encourage the dilated temporal graph reasoning module to find and correct wrong temporal relations in videos. Our DTGRM model outperforms state-of-the-art action segmentation models on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset. The code is available at https://github.com/redwang/DTGRM.", "field": ["Convolutions", "Graph Models"], "task": ["Action Recognition", "Action Segmentation"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["50 Salads", "Breakfast"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "Temporal Relational Modeling with Self-Supervision for Action Segmentation"} {"abstract": "We propose an extension to neural network language models to adapt their\nprediction to the recent history. Our model is a simplified version of memory\naugmented networks, which stores past hidden activations as memory and accesses\nthem through a dot product with the current hidden activation. This mechanism\nis very efficient and scales to very large memory sizes. We also draw a link\nbetween the use of external memory in neural network and cache models used with\ncount based language models. 
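The continuous-cache entry above stores past hidden activations and scores them with a dot product against the current hidden state, then mixes the resulting cache distribution with the base language model. A minimal PyTorch sketch of that mixing step follows; the flat cache layout and the hyperparameters theta and lam are illustrative.

```python
import torch
import torch.nn.functional as F

def cache_augmented_probs(p_model, h_t, cache_h, cache_words, vocab_size,
                          theta=0.3, lam=0.1):
    """Mix the language-model distribution with a cache distribution.

    p_model     : (V,) next-word probabilities from the base LM
    h_t         : (d,) current hidden state
    cache_h     : (T, d) stored hidden states of the previous T positions
    cache_words : (T,) word ids that followed each stored hidden state
    """
    scores = theta * (cache_h @ h_t)                 # dot-product match with every cached state
    weights = F.softmax(scores, dim=0)               # attention over the cache
    p_cache = torch.zeros(vocab_size).scatter_add_(0, cache_words, weights)
    return (1.0 - lam) * p_model + lam * p_cache

# Toy usage with random tensors.
V, d, T = 1000, 64, 50
p = F.softmax(torch.randn(V), dim=0)
out = cache_augmented_probs(p, torch.randn(d), torch.randn(T, d),
                            torch.randint(0, V, (T,)), V)
print(out.sum())                                     # ~1.0
```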
We demonstrate on several language model datasets\nthat our approach performs significantly better than recent memory augmented\nnetworks.", "field": ["Regularization", "Stochastic Optimization", "Recurrent Neural Networks", "Activation Functions", "Optimization", "Language Model Components"], "task": ["Language Modelling"], "method": ["AdaGrad", "Long Short-Term Memory", "Neural Cache", "Tanh Activation", "LSTM", "Dropout", "Gradient Clipping", "Sigmoid Activation"], "dataset": ["WikiText-2", "WikiText-103"], "metric": ["Test perplexity"], "title": "Improving Neural Language Models with a Continuous Cache"} {"abstract": "FAIR's research platform for object detection research, implementing popular algorithms like Mask R-CNN and RetinaNet.", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Feature Extractors", "Activation Functions", "RoI Feature Extractors", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Object Detection", "Video Classification"], "method": ["Average Pooling", "1x1 Convolution", "RoIAlign", "ResNet", "Convolution", "ReLU", "Residual Connection", "FPN", "Focal Loss", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Feature Pyramid Network", "Group Normalization", "Bottleneck Residual Block", "Mask R-CNN", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival"], "metric": ["AP50", "box AP", "AP75"], "title": "Group Normalization"} {"abstract": "We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared BPE vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI dataset), cross-lingual document classification (MLDoc dataset) and parallel corpus mining (BUCC dataset) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. 
Our implementation, the pre-trained encoder and the multilingual test set are available at https://github.com/facebookresearch/LASER", "field": ["Recurrent Neural Networks", "Activation Functions", "Subword Segmentation", "Bidirectional Recurrent Neural Networks"], "task": ["Cross-Lingual Bitext Mining", "Cross-Lingual Document Classification", "Cross-Lingual Natural Language Inference", "Cross-Lingual Transfer", "Document Classification", "Joint Multilingual Sentence Representations", "Natural Language Inference", "Parallel Corpus Mining", "Sentence Embeddings"], "method": ["Byte Pair Encoding", "BPE", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["BUCC Chinese-to-English", "MLDoc Zero-Shot English-to-German", "MLDoc Zero-Shot English-to-French", "BUCC Russian-to-English", "MLDoc Zero-Shot English-to-Spanish", "BUCC German-to-English", "MLDoc Zero-Shot English-to-Chinese", "MLDoc Zero-Shot English-to-Japanese", "BUCC French-to-English", "MLDoc Zero-Shot English-to-Italian", "MLDoc Zero-Shot English-to-Russian"], "metric": ["F1 score", "Accuracy"], "title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond"} {"abstract": "Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT has been released with Whole Word Masking (WWM), which mitigate the drawbacks of masking partial WordPiece tokens in pre-training BERT. In this technical report, we adapt whole word masking in Chinese text, that masking the whole word instead of masking Chinese characters, which could bring another challenge in Masked Language Model (MLM) pre-training task. The proposed models are verified on various NLP tasks, across sentence-level to document-level, including machine reading comprehension (CMRC 2018, DRCD, CJRC), natural language inference (XNLI), sentiment classification (ChnSentiCorp), sentence pair matching (LCQMC, BQ Corpus), and document classification (THUCNews). Experimental results on these datasets show that the whole word masking could bring another significant gain. Moreover, we also examine the effectiveness of the Chinese pre-trained models: BERT, ERNIE, BERT-wwm, BERT-wwm-ext, RoBERTa-wwm-ext, and RoBERTa-wwm-ext-large. 
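To make the whole-word-masking idea in the Chinese BERT entry above concrete, here is a small language-agnostic Python sketch that groups WordPiece pieces (marked with '##') into words and masks whole words at a time; the masking rate and tokenisation convention are assumptions, and the Chinese word segmentation used by the paper is not reproduced.

```python
import random

def whole_word_mask(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Mask entire words: a sub-token starting with '##' continues the previous word."""
    rng = random.Random(seed)
    words = []                               # group token indices into words
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    n_to_mask = max(1, int(round(len(words) * mask_rate)))
    masked = list(tokens)
    for word in rng.sample(words, n_to_mask):
        for i in word:                       # mask every piece of the chosen word
            masked[i] = mask_token
    return masked

print(whole_word_mask(["the", "phil", "##har", "##monic", "played", "last", "night"]))
```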
We release all the pre-trained models: \\url{https://github.com/ymcui/Chinese-BERT-wwm", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Document Classification", "Language Modelling", "Machine Reading Comprehension", "Named Entity Recognition", "Natural Language Inference", "Reading Comprehension", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["ChnSentiCorp Dev", "ChnSentiCorp"], "metric": ["F1"], "title": "Pre-Training with Whole Word Masking for Chinese BERT"} {"abstract": "We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 2.33x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. 
We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.", "field": ["Image Data Augmentation", "Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Attention Mechanisms", "Instance Segmentation Models", "Skip Connections", "Image Model Blocks", "Image Models"], "task": ["Image Classification", "Instance Segmentation", "Object Detection"], "method": ["Weight Decay", "Cosine Annealing", "Average Pooling", "Bottleneck Transformer", "Sigmoid Linear Unit", "RandAugment", "1x1 Convolution", "RoIAlign", "ResNeSt", "Scaled Dot-Product Attention", "Channel-wise Soft Attention", "Convolution", "ReLU", "Residual Connection", "SiLU", "Dense Connections", "Random Resized Crop", "Bottleneck Transformer Block", "Batch Normalization", "Label Smoothing", "Squeeze-and-Excitation Block", "Pointwise Convolution", "Split Attention", "Sigmoid Activation", "SGD with Momentum", "Softmax", "Mask R-CNN", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "box AP", "AP75", "AP50", "Top 5 Accuracy", "mask AP"], "title": "Bottleneck Transformers for Visual Recognition"} {"abstract": "Object detection plays an important role in current solutions to vision and language tasks like image captioning and visual question answering. However, popular models like Faster R-CNN rely on a costly process of annotating ground-truths for both the bounding boxes and their corresponding semantic labels, making it less amenable as a primitive task for transfer learning. In this paper, we examine the effect of decoupling box proposal and featurization for down-stream tasks. The key insight is that this allows us to leverage a large amount of labeled annotations that were previously unavailable for standard object detection benchmarks. Empirically, we demonstrate that this leads to effective transfer learning and improved image captioning and visual question answering models, as measured on publicly available benchmarks.", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Image Captioning", "Object Detection", "Question Answering", "Transfer Learning", "Visual Question Answering"], "method": ["RPN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["VizWiz 2018"], "metric": ["number", "overall", "other", "unanswerable", "yes/no"], "title": "Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering"} {"abstract": "Deep neural networks require collecting and annotating large amounts of data to train successfully. In order to alleviate the annotation bottleneck, we propose a novel self-supervised representation learning approach for spatiotemporal features extracted from videos. We introduce Skip-Clip, a method that utilizes temporal coherence in videos, by training a deep model for future clip order ranking conditioned on a context clip as a surrogate objective for video future prediction. We show that features learned using our method are generalizable and transfer strongly to downstream tasks. 
For action recognition on the UCF101 dataset, we obtain 51.8% improvement over random initialization and outperform models initialized using inflated ImageNet parameters. Skip-Clip also achieves results competitive with state-of-the-art self-supervision methods.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Recognition", "Future prediction", "Representation Learning", "Self-Supervised Action Recognition"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["UCF101"], "metric": ["3-fold Accuracy", "Pre-Training Dataset"], "title": "Skip-Clip: Self-Supervised Spatiotemporal Representation Learning by Future Clip Order Ranking"} {"abstract": "Deep neural networks have become an indispensable technique for audio source separation (ASS). It was recently reported that a variant of CNN architecture called MMDenseNet was successfully employed to solve the ASS problem of estimating source amplitudes, and state-of-the-art results were obtained for DSD100 dataset. To further enhance MMDenseNet, here we propose a novel architecture that integrates long short-term memory (LSTM) in multiple scales with skip connections to efficiently model long-term structures within an audio context. The experimental results show that the proposed method outperforms MMDenseNet, LSTM and a blend of the two networks. The number of parameters and processing time of the proposed model are significantly less than those for simple blending. Furthermore, the proposed method yields better results than those obtained using ideal binary masks for a singing voice separation task.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Music Source Separation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "MMDenseLSTM: An efficient combination of convolutional and recurrent neural networks for audio source separation"} {"abstract": "Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. 
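The Transformer-XL entry above rests on segment-level recurrence: hidden states from earlier segments are cached with gradients stopped and reused as extra attention context for the current segment. The PyTorch sketch below is a loose illustration of that memory update only; it stubs the attention with nn.MultiheadAttention and omits the paper's relative positional encoding.

```python
import torch
import torch.nn as nn

class SegmentRecurrentLayer(nn.Module):
    """One self-attention layer with a detached memory of previous segments."""

    def __init__(self, d_model=128, n_heads=4, mem_len=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_len = mem_len

    def forward(self, x, memory=None):
        # x: (batch, seg_len, d_model); memory: (batch, <=mem_len, d_model) or None
        context = x if memory is None else torch.cat([memory, x], dim=1)
        out, _ = self.attn(query=x, key=context, value=context, need_weights=False)
        # Cache the newest hidden states for the next segment, without gradient flow.
        new_memory = context[:, -self.mem_len:].detach()
        return out, new_memory

layer = SegmentRecurrentLayer()
mem = None
for segment in torch.randn(3, 4, 16, 128):      # 3 consecutive segments
    y, mem = layer(segment, mem)
print(y.shape, mem.shape)                        # (4, 16, 128) (4, 32, 128)
```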
Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch.", "field": ["Attention Modules", "Output Functions", "Learning Rate Schedules", "Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Input Embedding Factorization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling"], "method": ["Cosine Annealing", "Variational Dropout", "Layer Normalization", "Softmax", "Adaptive Softmax", "Adam", "Multi-Head Attention", "Linear Warmup With Cosine Annealing", "Transformer-XL", "Rectified Linear Units", "ReLU", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "Adaptive Input Representations", "Dense Connections"], "dataset": ["enwik8", "Text8", "Penn Treebank (Word Level)", "Hutter Prize", "WikiText-103", "One Billion Word"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity", "Params", "PPL"], "title": "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context"} {"abstract": "Face recognition has evolved as a widely used biometric modality. However, its vulnerability against presentation attacks poses a significant security threat. Though presentation attack detection (PAD) methods try to address this issue, they often fail in generalizing to unseen attacks. In this work, we propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN). A novel loss function is introduced, which forces the network to learn a compact embedding for bonafide class while being far from the representation of attacks. A one-class Gaussian Mixture Model is used on top of these embeddings for the PAD task. The proposed framework introduces a novel approach to learn a robust PAD system from bonafide and available (known) attack classes. This is particularly important as collecting bonafide data and simpler attacks are much easier than collecting a wide variety of expensive attacks. The proposed system is evaluated on the publicly available WMCA multi-channel face PAD database, which contains a wide variety of 2D and 3D attacks. Further, we have performed experiments with MLFP and SiW-M datasets using RGB channels only. Superior performance in unseen attack protocols shows the effectiveness of the proposed approach. Software, data, and protocols to reproduce the results are made available publicly.", "field": ["Convolutions"], "task": ["Face Anti-Spoofing", "Face Presentation Attack Detection", "Face Recognition", "One-class classifier"], "method": ["1x1 Convolution"], "dataset": ["WMCA", "MLFP"], "metric": ["ACER", "HTER"], "title": "Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks"} {"abstract": "We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. 
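The face presentation-attack-detection entry above fits a one-class Gaussian Mixture Model on bonafide embeddings and scores test samples by likelihood. A minimal scikit-learn sketch of that final stage is given below, with random vectors standing in for the MCCNN embeddings; the component count, covariance type, and threshold are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
bonafide_embeddings = rng.normal(0.0, 1.0, size=(500, 128))   # stand-in for learned features
test_embeddings = np.vstack([
    rng.normal(0.0, 1.0, size=(10, 128)),    # bonafide-like samples
    rng.normal(4.0, 1.0, size=(10, 128)),    # attack-like samples (shifted distribution)
])

# Fit the one-class model on bonafide data only.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(bonafide_embeddings)

scores = gmm.score_samples(test_embeddings)   # log-likelihood per sample
threshold = np.percentile(gmm.score_samples(bonafide_embeddings), 5)
print(scores >= threshold)                    # True = accepted as bonafide
```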
To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to \"collapsing\" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["BDD", "Cityscapes test", "Cityscapes val"], "metric": ["Mean IoU (class)", "mIoU"], "title": "FasterSeg: Searching for Faster Real-time Semantic Segmentation"} {"abstract": "While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference. To solve the mismatch between training and inference as well as modeling label dependencies in a more principled way, we formulate HTC as a Markov decision process and propose to learn a Label Assignment Policy via deep reinforcement learning to determine where to place an object and when to stop the assignment process. The proposed method, HiLAP, explores the hierarchy during both training and inference time in a consistent manner and makes inter-dependent decisions. As a general framework, HiLAP can incorporate different neural encoders as base models for end-to-end training. Experiments on five public datasets and four base models show that HiLAP yields an average improvement of 33.4% in Macro-F1 over flat classifiers and outperforms state-of-the-art HTC methods by a large margin. Data and code can be found at https://github.com/morningmoni/HiLAP.", "field": ["Convolutions"], "task": ["Text Classification"], "method": ["Convolution"], "dataset": ["RCV1"], "metric": ["Macro F1", "Micro F1"], "title": "Hierarchical Text Classification with Reinforced Label Assignment"} {"abstract": "This paper presents a new baseline for visual question answering task. Given\nan image and a question in natural language, our model produces accurate\nanswers according to the content of the image. Our model, while being\narchitecturally simple and relatively small in terms of trainable parameters,\nsets a new state of the art on both unbalanced and balanced VQA benchmark. On\nVQA 1.0 open ended challenge, our model achieves 64.6% accuracy on the\ntest-standard set without using additional data, an improvement of 0.4% over\nstate of the art, and on newly released VQA 2.0, our model scores 59.7% on\nvalidation set outperforming best previously reported results by 0.5%. The\nresults presented in this paper are especially interesting because very similar\nmodels have been tried before but significantly lower performance were\nreported. 
In light of the new results we hope to see more meaningful research\non visual question answering in the future.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Visual Question Answering"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["VQA v1 test-std", "VQA v1 test-dev"], "metric": ["Accuracy"], "title": "Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering"} {"abstract": "While deep neural networks achieve great performance on fitting the training distribution, the learned networks are prone to overfitting and are susceptible to adversarial attacks. In this regard, a number of mixup based augmentation methods have been recently proposed. However, these approaches mainly focus on creating previously unseen virtual examples and can sometimes provide misleading supervisory signal to the network. To this end, we propose Puzzle Mix, a mixup method for explicitly utilizing the saliency information and the underlying statistics of the natural examples. This leads to an interesting optimization problem alternating between the multi-label objective for optimal mixing mask and saliency discounted optimal transport objective. Our experiments show Puzzle Mix achieves the state of the art generalization and the adversarial robustness results compared to other mixup methods on CIFAR-100, Tiny-ImageNet, and ImageNet datasets. The source code is available at https://github.com/snu-mllab/PuzzleMix.", "field": ["Image Data Augmentation"], "task": ["Image Classification"], "method": ["Mixup"], "dataset": ["CIFAR-100", "Tiny-ImageNet", "ImageNet"], "metric": ["Percentage correct", "Top 1 Accuracy"], "title": "Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup"} {"abstract": "Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \\emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \\cite{devlin:2018:arxiv}, we propose {\\sc Hibert} (as shorthand for {\\bf HI}erachical {\\bf B}idirectional {\\bf E}ncoder {\\bf R}epresentations from {\\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. 
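The Puzzle Mix entry above extends mixup-style augmentation with saliency and transport terms. For reference, a minimal PyTorch sketch of plain input mixup (not the saliency-guided variant) is shown below; alpha and the convention of interpolating the loss over both label sets follow common practice.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Plain input mixup: convex combination of a batch with a shuffled copy of itself."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    return mixed_x, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

# Toy usage with a linear "model" on flattened 8x8 inputs.
x = torch.randn(16, 64)
y = torch.randint(0, 10, (16,))
model = torch.nn.Linear(64, 10)
mx, ya, yb, lam = mixup_batch(x, y)
print(mixup_loss(model(mx), ya, yb, lam).item())
```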
We also achieve the state-of-the-art performance on these two datasets.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Document Summarization", "Extractive Text Summarization"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization"} {"abstract": "Localizing page elements/objects such as tables, figures, equations, etc. is the primary step in extracting information from document images. We propose a novel end-to-end trainable deep network, (CDeC-Net) for detecting tables present in the documents. The proposed network consists of a multistage extension of Mask R-CNN with a dual backbone having deformable convolution for detecting tables varying in scale with high detection accuracy at higher IoU threshold. We empirically evaluate CDeC-Net on all the publicly available benchmark datasets - ICDAR-2013, ICDAR-2017, ICDAR-2019,UNLV, Marmot, PubLayNet, and TableBank - with extensive experiments. Our solution has three important properties: (i) a single trained model CDeC-Net{\\ddag} performs well across all the popular benchmark datasets; (ii) we report excellent performances across multiple, including higher, thresholds of IoU; (iii) by following the same protocol of the recent papers for each of the benchmarks, we consistently demonstrate the superior quantitative performance. Our code and models will be publicly released for enabling the reproducibility of the results.", "field": ["Convolutions", "RoI Feature Extractors", "Output Functions", "Instance Segmentation Models"], "task": ["Table Detection"], "method": ["Softmax", "Convolution", "RoIAlign", "Mask R-CNN", "Deformable Convolution"], "dataset": ["ICDAR2013"], "metric": ["Avg F1"], "title": "CDeC-Net: Composite Deformable Cascade Network for Table Detection in Document Images"} {"abstract": "Personalized federated learning is tasked with training machine learning models for multiple clients, each with its own data distribution. The goal is to train personalized models in a collaborative way while accounting for data disparities across clients and reducing communication costs. We propose a novel approach to this problem using hypernetworks, termed pFedHN for personalized Federated HyperNetworks. In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client. This architecture provides effective parameter sharing across clients, while maintaining the capacity to generate unique and diverse personal models. Furthermore, since hypernetwork parameters are never transmitted, this approach decouples the communication cost from the trainable model size. We test pFedHN empirically in several personalized federated learning challenges and find that it outperforms previous methods. 
Finally, since hypernetworks share information across clients we show that pFedHN can generalize better to new clients whose distributions differ from any client observed during training.", "field": ["Feedforward Networks"], "task": ["Federated Learning", "Personalized Federated Learning"], "method": ["HyperNetwork"], "dataset": ["CIFAR-100", "Omniglot", "CIFAR-10"], "metric": ["ACC@1-50Clients", "ACC@1-10Clients", "ACC@1-100Clients"], "title": "Personalized Federated Learning using Hypernetworks"} {"abstract": "Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method includes two-folds, first we extend the RotatE from 2D complex domain to high dimension space with orthogonal transforms to model relations for better modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of the triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N and N-N cases for knowledge graph link prediction task. The experimental results show that it achieves better performance on two benchmark data sets compared to the baseline RotatE, especially on data set (FB15k-237) with many high in-degree connection nodes.", "field": ["Graph Embeddings", "Negative Sampling"], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Link Prediction"], "method": ["TransE", "Self-Adversarial Negative Sampling", "RotatE"], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding"} {"abstract": "Although SGD requires shuffling the training data between epochs, currently\nnone of the word-level language modeling systems do this. Naively shuffling all\nsentences in the training data would not permit the model to learn\ninter-sentence dependencies. Here we present a method that partially shuffles\nthe training data between epochs. This method makes each batch random, while\nkeeping most sentence ordering intact. It achieves new state of the art results\non word-level language modeling on both the Penn Treebank and WikiText-2\ndatasets.", "field": ["Stochastic Optimization"], "task": ["Language Modelling", "Sentence Ordering"], "method": ["Stochastic Gradient Descent", "SGD"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params"], "title": "Partially Shuffling the Training Data to Improve Language Models"} {"abstract": "Although end-to-end neural text-to-speech (TTS) methods (such as Tacotron2)\nare proposed and achieve state-of-the-art performance, they still suffer from\ntwo problems: 1) low efficiency during training and inference; 2) hard to model\nlong dependency using current recurrent neural networks (RNNs). Inspired by the\nsuccess of Transformer network in neural machine translation (NMT), in this\npaper, we introduce and adapt the multi-head attention mechanism to replace the\nRNN structures and also the original attention mechanism in Tacotron2. 
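The personalized-federated-learning entry above trains a central hypernetwork that maps a client embedding to that client's model weights. The PyTorch sketch below shows only the weight-generation step for a single linear classifier; all sizes are illustrative, and the federated training loop and communication protocol are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearHyperNetwork(nn.Module):
    """Generates the weights of a per-client linear classifier from a client embedding."""

    def __init__(self, n_clients=10, embed_dim=16, in_features=32, n_classes=5):
        super().__init__()
        self.client_embeddings = nn.Embedding(n_clients, embed_dim)
        out_size = n_classes * in_features + n_classes          # weight matrix + bias
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, out_size))
        self.in_features, self.n_classes = in_features, n_classes

    def forward(self, client_id, x):
        params = self.generator(self.client_embeddings(client_id))
        w = params[: self.n_classes * self.in_features].view(self.n_classes, self.in_features)
        b = params[self.n_classes * self.in_features:]
        return F.linear(x, w, b)                                  # client-specific prediction

hyper = LinearHyperNetwork()
x = torch.randn(8, 32)
logits = hyper(torch.tensor(3), x)     # predictions with client 3's generated weights
print(logits.shape)                    # torch.Size([8, 5])
```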
With the\nhelp of multi-head self-attention, the hidden states in the encoder and decoder\nare constructed in parallel, which improves the training efficiency. Meanwhile,\nany two inputs at different times are connected directly by self-attention\nmechanism, which solves the long range dependency problem effectively. Using\nphoneme sequences as input, our Transformer TTS network generates mel\nspectrograms, followed by a WaveNet vocoder to output the final audio results.\nExperiments are conducted to test the efficiency and performance of our new\nnetwork. For the efficiency, our Transformer TTS network can speed up the\ntraining about 4.25 times faster compared with Tacotron2. For the performance,\nrigorous human tests show that our proposed model achieves state-of-the-art\nperformance (outperforms Tacotron2 with a gap of 0.048) and is very close to\nhuman quality (4.39 vs 4.44 in MOS).", "field": ["Temporal Convolutions", "Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Generative Audio Models", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Speech Synthesis", "Text-To-Speech Synthesis"], "method": ["WaveNet", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Dilated Causal Convolution", "Multi-Head Attention", "Transformer", "Rectified Linear Units", "ReLU", "Residual Connection", "Mixture of Logistic Distributions", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["LJSpeech"], "metric": ["Audio Quality MOS"], "title": "Neural Speech Synthesis with Transformer Network"} {"abstract": "In this work, we tackle the problem of instance segmentation, the task of\nsimultaneously solving object detection and semantic segmentation. Towards this\ngoal, we present a model, called MaskLab, which produces three outputs: box\ndetection, semantic segmentation, and direction prediction. Building on top of\nthe Faster-RCNN object detector, the predicted boxes provide accurate\nlocalization of object instances. Within each region of interest, MaskLab\nperforms foreground/background segmentation by combining semantic and direction\nprediction. Semantic segmentation assists the model in distinguishing between\nobjects of different semantic classes including background, while the direction\nprediction, estimating each pixel's direction towards its corresponding center,\nallows separating instances of the same semantic class. Moreover, we explore\nthe effect of incorporating recent successful methods from both segmentation\nand detection (i.e. atrous convolution and hypercolumn). 
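Since the MaskLab entry above leans on atrous convolution, a two-line PyTorch illustration of how dilation enlarges the receptive field without adding parameters may help; the channel counts are arbitrary.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

# Standard 3x3 convolution: 3x3 receptive field.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Atrous/dilated 3x3 convolution with rate 2: same 9 weights per filter, 5x5 receptive field.
atrous = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

print(conv(x).shape, atrous(x).shape)   # both keep the 56x56 spatial size
```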
Our proposed model is\nevaluated on the COCO instance segmentation benchmark and shows comparable\nperformance with other state-of-art models.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": ["ResNet", "Dilated Convolution", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["mask AP"], "title": "MaskLab: Instance Segmentation by Refining Object Detection with Semantic and Direction Features"} {"abstract": "Skeleton-based action recognition is an important task that requires the\nadequate understanding of movement characteristics of a human action from the\ngiven skeleton sequence. Recent studies have shown that exploring spatial and\ntemporal features of the skeleton sequence is vital for this task.\nNevertheless, how to effectively extract discriminative spatial and temporal\nfeatures is still a challenging problem. In this paper, we propose a novel\nAttention Enhanced Graph Convolutional LSTM Network (AGC-LSTM) for human action\nrecognition from skeleton data. The proposed AGC-LSTM can not only capture\ndiscriminative features in spatial configuration and temporal dynamics but also\nexplore the co-occurrence relationship between spatial and temporal domains. We\nalso present a temporal hierarchical architecture to increases temporal\nreceptive fields of the top AGC-LSTM layer, which boosts the ability to learn\nthe high-level semantic representation and significantly reduces the\ncomputation cost. Furthermore, to select discriminative spatial information,\nthe attention mechanism is employed to enhance information of key joints in\neach AGC-LSTM layer. Experimental results on two datasets are provided: NTU\nRGB+D dataset and Northwestern-UCLA dataset. The comparison results demonstrate\nthe effectiveness of our approach and show that our approach outperforms the\nstate-of-the-art methods on both datasets.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition"} {"abstract": "Domain adaptation enables the learner to safely generalize into novel environments by mitigating domain shifts across distributions. Previous works may not effectively uncover the underlying reasons that would lead to the drastic model degradation on the target task. In this paper, we empirically reveal that the erratic discrimination of the target domain mainly stems from its much smaller feature norms with respect to that of the source domain. To this end, we propose a novel parameter-free Adaptive Feature Norm approach. We demonstrate that progressively adapting the feature norms of the two domains to a large range of values can result in significant transfer gains, implying that those task-specific features with larger norms are more transferable. 
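The adaptive-feature-norm entry above argues that features with larger norms transfer better and therefore enlarges feature norms progressively. Below is a loose PyTorch sketch of a stepwise norm-enlargement penalty in that spirit; the exact loss and its weighting in the paper may differ, and delta_r is an assumed hyperparameter.

```python
import torch

def feature_norm_enlargement(features, delta_r=1.0):
    """Push each feature's L2 norm to grow by roughly delta_r relative to its
    current (detached) value; apply to both source and target batches."""
    norms = features.norm(p=2, dim=1)
    target = norms.detach() + delta_r          # moving target, no gradient through it
    return ((norms - target) ** 2).mean()

# Usage: add the (weighted) penalty to the task loss.
feats = torch.randn(32, 256, requires_grad=True)
penalty = feature_norm_enlargement(feats)
penalty.backward()
print(penalty.item(), feats.grad.shape)
```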
Our method successfully unifies the computation of both standard and partial domain adaptation with more robustness against the negative transfer issue. Without bells and whistles but a few lines of code, our method substantially lifts the performance on the target task and exceeds state-of-the-arts by a large margin (11.5% on Office-Home and 17.1% on VisDA2017). We hope our simple yet effective approach will shed some light on the future research of transfer learning. Code is available at https://github.com/jihanyang/AFN.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Adaptation", "Partial Domain Adaptation", "Transfer Learning", "Unsupervised Domain Adaptation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["VisDA2017", "Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Accuracy (%)", "Average Accuracy", "Accuracy"], "title": "Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation"} {"abstract": "Convolutional neural networks (CNNs) are inherently limited to model\ngeometric transformations due to the fixed geometric structures in its building\nmodules. In this work, we introduce two new modules to enhance the\ntransformation modeling capacity of CNNs, namely, deformable convolution and\ndeformable RoI pooling. Both are based on the idea of augmenting the spatial\nsampling locations in the modules with additional offsets and learning the\noffsets from target tasks, without additional supervision. The new modules can\nreadily replace their plain counterparts in existing CNNs and can be easily\ntrained end-to-end by standard back-propagation, giving rise to deformable\nconvolutional networks. Extensive experiments validate the effectiveness of our\napproach on sophisticated vision tasks of object detection and semantic\nsegmentation. The code would be released.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Object Detection", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Deformable Position-Sensitive RoI Pooling", "Deformable RoI Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Deformable Convolution"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "APS", "APL", "AP50"], "title": "Deformable Convolutional Networks"} {"abstract": "Machine reading comprehension with unanswerable questions is a new\nchallenging task for natural language processing. A key subtask is to reliably\npredict whether the question is unanswerable. In this paper, we propose a\nunified model, called U-Net, with three important components: answer pointer,\nno-answer pointer, and answer verifier. We introduce a universal node and thus\nprocess the question and its context passage as a single contiguous sequence of\ntokens. 
The universal node encodes the fused information from both the question\nand passage, and plays an important role to predict whether the question is\nanswerable and also greatly improves the conciseness of the U-Net. Different\nfrom the state-of-art pipeline models, U-Net can be learned in an end-to-end\nfashion. The experimental results on the SQuAD 2.0 dataset show that U-Net can\neffectively predict the unanswerability of questions and achieves an F1 score\nof 71.7 on SQuAD 2.0.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["SQuAD2.0 dev", "SQuAD2.0"], "metric": ["EM", "F1"], "title": "U-Net: Machine Reading Comprehension with Unanswerable Questions"} {"abstract": "In neural network-based models for natural language processing (NLP), the largest part of the parameters often consists of word embeddings. Conventional models prepare a large embedding matrix whose size depends on the vocabulary size. Therefore, storing these models in memory and disk storage is costly. In this study, to reduce the total number of parameters, the embeddings for all words are represented by transforming a shared embedding. The proposed method, ALONE (all word embeddings from one), constructs the embedding of a word by modifying the shared embedding with a filter vector, which is word-specific but non-trainable. Then, we input the constructed embedding into a feed-forward neural network to increase its expressiveness. Naively, the filter vectors occupy the same memory size as the conventional embedding matrix, which depends on the vocabulary size. To solve this issue, we also introduce a memory-efficient filter construction approach. We indicate our ALONE can be used as word representation sufficiently through an experiment on the reconstruction of pre-trained word embeddings. In addition, we also conduct experiments on NLP application tasks: machine translation and summarization. We combined ALONE with the current state-of-the-art encoder-decoder model, the Transformer, and achieved comparable scores on WMT 2014 English-to-German translation and DUC 2004 very short summarization with less parameters.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation", "Sentence Summarization", "Text Summarization", "Word Embeddings"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "All Word Embeddings from One Embedding"} {"abstract": "With recent progress in graphics, it has become more tractable to train\nmodels on synthetic images, potentially avoiding the need for expensive\nannotations. However, learning from synthetic images may not achieve the\ndesired performance due to a gap between synthetic and real image\ndistributions. 
To reduce this gap, we propose Simulated+Unsupervised (S+U)\nlearning, where the task is to learn a model to improve the realism of a\nsimulator's output using unlabeled real data, while preserving the annotation\ninformation from the simulator. We develop a method for S+U learning that uses\nan adversarial network similar to Generative Adversarial Networks (GANs), but\nwith synthetic images as inputs instead of random vectors. We make several key\nmodifications to the standard GAN algorithm to preserve annotations, avoid\nartifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a\nlocal adversarial loss, and (iii) updating the discriminator using a history of\nrefined images. We show that this enables generation of highly realistic\nimages, which we demonstrate both qualitatively and with a user study. We\nquantitatively evaluate the generated images by training models for gaze\nestimation and hand pose estimation. We show a significant improvement over\nusing synthetic images, and achieve state-of-the-art results on the MPIIGaze\ndataset without any labeled real data.", "field": ["Generative Models", "Convolutions"], "task": ["Domain Adaptation", "Gaze Estimation", "Hand Pose Estimation", "Image-to-Image Translation", "Pose Estimation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Cityscapes Labels-to-Photo", "Cityscapes Photo-to-Labels"], "metric": ["Per-pixel Accuracy", "Per-class Accuracy", "Class IOU"], "title": "Learning from Simulated and Unsupervised Images through Adversarial Training"} {"abstract": "Face recognition (FR) methods report significant performance by adopting the\nconvolutional neural network (CNN) based learning methods. Although CNNs are\nmostly trained by optimizing the softmax loss, the recent trend shows an\nimprovement of accuracy with different strategies, such as task-specific CNN\nlearning with different loss functions, fine-tuning on target dataset, metric\nlearning and concatenating features from multiple CNNs. Incorporating these\ntasks obviously requires additional efforts. Moreover, it demotivates the\ndiscovery of efficient CNN models for FR which are trained only with identity\nlabels. We focus on this fact and propose an easily trainable and single CNN\nbased FR method. Our CNN model exploits the residual learning framework.\nAdditionally, it uses normalized features to compute the loss. Our extensive\nexperiments show excellent generalization on different datasets. We obtain very\ncompetitive and state-of-the-art results on the LFW, IJB-A, YouTube faces and\nCACD datasets.", "field": ["Output Functions"], "task": ["Face Recognition", "Metric Learning"], "method": ["Softmax"], "dataset": ["CACDVS"], "metric": ["Accuracy"], "title": "DeepVisage: Making face recognition simple yet with powerful generalization skills"} {"abstract": "Text attributes, such as user and product information in product reviews, have been used to improve the performance of sentiment classification models. The de facto standard method is to incorporate them as additional biases in the attention mechanism, and more performance gains are achieved by extending the model architecture. In this paper, we show that the above method is the least effective way to represent and inject attributes. 
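The simulated-plus-unsupervised learning entry above stabilises adversarial training by updating the discriminator with a history of previously refined images. A minimal Python sketch of such a history buffer is below; the capacity, the 50/50 swap rule, and the string stand-ins for images are illustrative.

```python
import random

class ImageHistoryBuffer:
    """Keep a pool of previously refined images; serve the discriminator a mix of
    fresh and historical refinements so it does not forget earlier artifacts."""

    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.rng = random.Random(seed)

    def sample_and_update(self, refined_batch):
        out = []
        for img in refined_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)      # fill the pool first
                out.append(img)
            elif self.rng.random() < 0.5:
                idx = self.rng.randrange(self.capacity)
                out.append(self.buffer[idx])  # return an old refinement
                self.buffer[idx] = img        # store the new one in its place
            else:
                out.append(img)
        return out

buf = ImageHistoryBuffer(capacity=4)
print(buf.sample_and_update(["img_%d" % i for i in range(6)]))
```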
To demonstrate this hypothesis, unlike previous models with complicated architectures, we limit our base model to a simple BiLSTM with attention classifier, and instead focus on how and where the attributes should be incorporated in the model. We propose to represent attributes as chunk-wise importance weight matrices and consider four locations in the model (i.e., embedding, encoding, attention, classifier) to inject attributes. Experiments show that our proposed method achieves significant improvements over the standard approach and that attention mechanism is the worst location to inject attributes, contradicting prior work. We also outperform the state-of-the-art despite our use of a simple base model. Finally, we show that these representations transfer well to other tasks. Model implementation and datasets are released here: https://github.com/rktamplayo/CHIM.", "field": ["Recurrent Neural Networks", "Activation Functions", "Bidirectional Recurrent Neural Networks"], "task": ["Sentiment Analysis", "Sentiment Analysis (Product + User)"], "method": ["Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Bidirectional LSTM", "LSTM", "Sigmoid Activation"], "dataset": ["User and product information"], "metric": ["Yelp 2014 (Acc)", "Yelp 2013 (Acc)", "IMDB (Acc)"], "title": "Rethinking Attribute Representation and Injection for Sentiment Classification"} {"abstract": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)\u00d7.. .\u00d7SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "field": ["Non-Parametric Classification"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Support Vector Machine", "SVM"], "dataset": ["UT-Kinect", "NTU RGB+D", "Florence 3D"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Human Action Recognition by Representing 3D Skeletons as Points in a Lie Group"} {"abstract": "It is important to detect anomalous inputs when deploying machine learning\nsystems. The use of larger and more complex inputs in deep learning magnifies\nthe difficulty of distinguishing between anomalous and in-distribution\nexamples. At the same time, diverse image and text data are available in\nenormous quantities. 
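Continuing the attribute-injection idea above: one way to read "chunk-wise importance weight matrices" is a small attribute-dependent matrix that is tiled up to the size of a layer's weight and applied multiplicatively. The sketch below is a simplified, hypothetical rendering at the classifier location (the sigmoid gate, chunk sizes, and names are our choices), not the released CHIM code.

```python
import torch
import torch.nn as nn

class ChunkWiseWeightInjection(nn.Module):
    """Linear layer whose weight is scaled, chunk by chunk, by attribute-specific importances."""

    def __init__(self, num_attrs, in_dim, out_dim, chunk_rows=4, chunk_cols=4):
        super().__init__()
        assert out_dim % chunk_rows == 0 and in_dim % chunk_cols == 0
        self.base = nn.Linear(in_dim, out_dim)
        # One small (chunk_rows x chunk_cols) importance matrix per attribute value.
        self.attr_emb = nn.Embedding(num_attrs, chunk_rows * chunk_cols)
        self.shape = (chunk_rows, chunk_cols)
        self.reps = (out_dim // chunk_rows, in_dim // chunk_cols)

    def forward(self, x, attr_ids):  # x: (B, in_dim), attr_ids: (B,)
        small = torch.sigmoid(self.attr_emb(attr_ids)).view(-1, *self.shape)              # (B, r, c)
        full = small.repeat_interleave(self.reps[0], dim=1).repeat_interleave(self.reps[1], dim=2)
        weight = self.base.weight.unsqueeze(0) * full      # (B, out_dim, in_dim), attribute-adjusted
        return torch.bmm(weight, x.unsqueeze(-1)).squeeze(-1) + self.base.bias
```

The same module could in principle be attached at the embedding, encoding, or attention locations that the abstract compares.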
We propose leveraging these data to improve deep anomaly\ndetection by training anomaly detectors against an auxiliary dataset of\noutliers, an approach we call Outlier Exposure (OE). This enables anomaly\ndetectors to generalize and detect unseen anomalies. In extensive experiments\non natural language processing and small- and large-scale vision tasks, we find\nthat Outlier Exposure significantly improves detection performance. We also\nobserve that cutting-edge generative models trained on CIFAR-10 may assign\nhigher likelihoods to SVHN images than to CIFAR-10 images; we use OE to\nmitigate this issue. We also analyze the flexibility and robustness of Outlier\nExposure, and identify characteristics of the auxiliary dataset that improve\nperformance.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Anomaly Detection", "Out-of-Distribution Detection"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CIFAR-10 vs CIFAR-100", "CIFAR-100", "CIFAR-10"], "metric": ["AUROC", "AUPR", "FPR95"], "title": "Deep Anomaly Detection with Outlier Exposure"} {"abstract": "Depth estimation features are helpful for 3D recognition. Commodity-grade depth cameras are able to capture depth and color image in real-time. However, glossy, transparent or distant surface cannot be scanned properly by the sensor. As a result, enhancement and restoration from sensing depth is an important task. Depth completion aims at filling the holes that sensors fail to detect, which is still a complex task for machine to learn. Traditional hand-tuned methods have reached their limits, while neural network based methods tend to copy and interpolate the output from surrounding depth values. This leads to blurred boundaries, and structures of the depth map are lost. Consequently, our main work is to design an end-to-end network improving completion depth maps while maintaining edge clarity. We utilize self-attention mechanism, previously used in image inpainting fields, to extract more useful information in each layer of convolution so that the complete depth map is enhanced. In addition, we propose boundary consistency concept to enhance the depth map quality and structure. Experimental results validate the effectiveness of our self-attention and boundary consistency schema, which outperforms previous state-of-the-art depth completion work on Matterport3D dataset. Our code is publicly available at https://github.com/patrickwu2/Depth-Completion", "field": ["Convolutions"], "task": ["Depth Completion", "Depth Estimation", "Image Inpainting"], "method": ["Convolution"], "dataset": ["Matterport3D"], "metric": ["RMSE"], "title": "Indoor Depth Completion with Boundary Consistency and Self-Attention"} {"abstract": "Recent studies have witnessed the successes of using 3D CNNs for video action recognition. However, most 3D models are built upon RGB and optical flow streams, which may not fully exploit pose dynamics, i.e., an important cue of modeling human actions. 
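The Outlier Exposure objective described above adds, on top of the usual in-distribution loss, a term that pushes predictions on auxiliary outliers toward the uniform distribution. A minimal classification-style sketch in PyTorch; the weight `lam` is a hyperparameter (0.5 is a common choice), and the function name is ours.

```python
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Standard cross-entropy on in-distribution data plus a uniformity term on outliers."""
    ce = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy between softmax(logits_out) and the uniform distribution,
    # averaged over the outlier batch: -(1/C) * sum_c log p_c per example.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean()
    return ce + lam * uniform_ce
```

At test time, a score such as the maximum softmax probability is then thresholded to flag anomalous inputs.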
To fill this gap, we propose a concise Pose-Action 3D Machine (PA3D), which can effectively encode multiple pose modalities within a unified 3D framework, and consequently learn spatio-temporal pose representations for action recognition. More specifically, we introduce a novel temporal pose convolution to aggregate spatial poses over frames. Unlike the classical temporal convolution, our operation can explicitly learn the pose motions that are discriminative to recognize human actions. Extensive experiments on three popular benchmarks (i.e., JHMDB, HMDB, and Charades) show that, PA3D outperforms the recent pose-based approaches. Furthermore, PA3D is highly complementary to the recent 3D CNNs, e.g., I3D. Multi-stream fusion achieves the state-of-the-art performance on all evaluated data sets.\r", "field": ["Convolutions"], "task": ["Action Recognition", "Optical Flow Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization", "Video Recognition"], "method": ["Convolution"], "dataset": ["J-HMDB", "Charades"], "metric": ["Accuracy (RGB+pose)", "MAP"], "title": "PA3D: Pose-Action 3D Machine for Video Recognition"} {"abstract": "Deep convolutional neural networks (CNNs) have greatly improved the Face\nRecognition (FR) performance in recent years. Almost all CNNs in FR are trained\non the carefully labeled datasets containing plenty of identities. However,\nsuch high-quality datasets are very expensive to collect, which restricts many\nresearchers to achieve state-of-the-art performance. In this paper, we propose\na framework, called SeqFace, for learning discriminative face features. Besides\na traditional identity training dataset, the designed SeqFace can train CNNs by\nusing an additional dataset which includes a large number of face sequences\ncollected from videos. Moreover, the label smoothing regularization (LSR) and a\nnew proposed discriminative sequence agent (DSA) loss are employed to enhance\ndiscrimination power of deep face features via making full use of the sequence\ndata. Our method achieves excellent performance on Labeled Faces in the Wild\n(LFW), YouTube Faces (YTF), only with a single ResNet. The code and models are\npublicly available on-line (https://github.com/huangyangyu/SeqFace).", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Face Recognition", "Face Verification"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Label Smoothing", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["YouTube Faces DB", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "SeqFace: Make full use of sequence information for face recognition"} {"abstract": "We present WiC-TSV, a new multi-domain evaluation benchmark for Word Sense Disambiguation. More specifically, we introduce a framework for Target Sense Verification of Words in Context which grounds its uniqueness in the formulation as a binary classification task thus being independent of external sense inventories, and the coverage of various domains. This makes the dataset highly flexible for the evaluation of a diverse set of models and systems in and across domains. 
WiC-TSV provides three different evaluation settings, depending on the input signals provided to the model. We set baseline performance on the dataset using state-of-the-art language models. Experimental results show that even though these models can perform decently on the task, there remains a gap between machine and human performance, especially in out-of-domain settings. WiC-TSV data is available at https://competitions.codalab.org/competitions/23683", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Entity Linking", "Word Sense Disambiguation"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["WiC-TSV"], "metric": ["Task 3 Accuracy: domain specific", "Task 1 Accuracy: domain specific", "Task 3 Accuracy: all", "Task 1 Accuracy: general purpose", "Task 3 Accuracy: general purpose", "Task 1 Accuracy: all", "Task 2 Accuracy: general purpose", "Task 2 Accuracy: domain specific", "Task 2 Accuracy: all"], "title": "WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context"} {"abstract": "Long-range dependencies modeling, widely used in capturing spatiotemporal correlation, has shown to be effective in CNN dominated computer vision tasks. Yet neither stacks of convolutional operations to enlarge receptive fields nor recent nonlocal modules is computationally efficient. In this paper, we present a generic family of lightweight global descriptors for modeling the interactions between positions across different dimensions (e.g., channels, frames). This descriptor enables subsequent convolutions to access the informative global features with negligible computational complexity and parameters. Benchmark experiments show that the proposed method can complete state-of-the-art long-range mechanisms with a significant reduction in extra computing cost. 
Code available at https://github.com/HolmesShuan/Compact-Global-Descriptor.", "field": ["Proposal Filtering", "Convolutional Neural Networks", "Feature Extractors", "Normalization", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Region Proposal", "Stochastic Optimization", "Feedforward Networks", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Skip Connections", "Image Model Blocks"], "task": ["Audio Classification", "Deep Attention", "Image Classification", "Object Detection"], "method": ["Depthwise Convolution", "Weight Decay", "Average Pooling", "Faster R-CNN", "1x1 Convolution", "Region Proposal Network", "ResNet", "Compact Global Descriptor", "Random Horizontal Flip", "SSD", "RoIPool", "Convolution", "ReLU", "Residual Connection", "FPN", "Dense Connections", "RPN", "MobileNetV1", "Non Maximum Suppression", "Random Resized Crop", "Batch Normalization", "Residual Network", "Pointwise Convolution", "Kaiming Initialization", "Step Decay", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "Depthwise Separable Convolution", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "PASCAL VOC 2007", "COCO test-dev"], "metric": ["box AP", "Top 1 Accuracy", "Top 5 Accuracy", "MAP"], "title": "Compact Global Descriptor for Neural Networks"} {"abstract": "Most of the recently proposed neural models for named entity recognition have been purely data-driven, with a strong emphasis on getting rid of the efforts for collecting external resources or designing hand-crafted features. This could increase the chance of overfitting since the models cannot access any supervision signal beyond the small amount of annotated data, limiting their power to generalize beyond the annotated entities. In this work, we show that properly utilizing external gazetteers could benefit segmental neural NER models. We add a simple module on the recently proposed hybrid semi-Markov CRF architecture and observe some promising results.", "field": ["Structured Prediction"], "task": ["Named Entity Recognition"], "method": ["Conditional Random Field", "CRF"], "dataset": ["Ontonotes v5 (English)", "CoNLL 2003 (English)"], "metric": ["F1"], "title": "Towards Improving Neural Named Entity Recognition with Gazetteers"} {"abstract": "Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard crossentropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state of the art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. 
Code and models are available.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Image Clustering", "Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning"], "method": ["Grouped Convolution", "Softmax", "Convolution", "1x1 Convolution", "ReLU", "Rectified Linear Units", "AlexNet", "Dropout", "Dense Connections", "Local Response Normalization", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy", "NMI", "Accuracy"], "title": "Self-labelling via simultaneous clustering and representation learning"} {"abstract": "We revisit two popular convolutional neural networks (CNN) in person\nre-identification (re-ID), i.e, verification and classification models. The two\nmodels have their respective advantages and limitations due to different loss\nfunctions. In this paper, we shed light on how to combine the two models to\nlearn more discriminative pedestrian descriptors. Specifically, we propose a\nnew siamese network that simultaneously computes identification loss and\nverification loss. Given a pair of training images, the network predicts the\nidentities of the two images and whether they belong to the same identity. Our\nnetwork learns a discriminative embedding and a similarity measurement at the\nsame time, thus making full usage of the annotations. Albeit simple, the\nlearned embedding improves the state-of-the-art performance on two public\nperson re-ID benchmarks. Further, we show our architecture can also be applied\nin image retrieval.", "field": ["Twin Networks"], "task": ["Image Retrieval", "Person Re-Identification"], "method": ["Siamese Network"], "dataset": ["MSMT17", "Market-1501", "DukeMTMC-reID"], "metric": ["Rank-1", "mAP", "MAP"], "title": "A Discriminatively Learned CNN Embedding for Person Re-identification"} {"abstract": "Designing effective neural networks is fundamentally important in deep multimodal learning. Most existing works focus on a single task and design neural architectures manually, which are highly task-specific and hard to generalize to different tasks. In this paper, we devise a generalized deep multimodal neural architecture search (MMnas) framework for various multimodal learning tasks. Given multimodal input, we first define a set of primitive operations, and then construct a deep encoder-decoder based unified backbone, where each encoder or decoder block corresponds to an operation searched from a predefined operation pool. On top of the unified backbone, we attach task-specific heads to tackle different multimodal learning tasks. By using a gradient-based NAS algorithm, the optimal architectures for different tasks are learned efficiently. 
Extensive ablation studies, comprehensive analysis, and comparative experimental results show that the obtained MMnasNet significantly outperforms existing state-of-the-art approaches across three multimodal learning tasks (over five datasets), including visual question answering, image-text matching, and visual grounding.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search", "Question Answering", "Text Matching", "Visual Grounding", "Visual Question Answering"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["VQA v2 test-std"], "metric": ["number", "overall", "other", "yes/no"], "title": "Deep Multimodal Neural Architecture Search"} {"abstract": "Attention mechanisms have improved the performance of NLP tasks while allowing models to remain explainable. Self-attention is currently widely used, however interpretability is difficult due to the numerous attention distributions. Recent work has shown that model representations can benefit from label-specific information, while facilitating interpretation of predictions. We introduce the Label Attention Layer: a new form of self-attention where attention heads represent labels. We test our novel layer by running constituency and dependency parsing experiments and show our new model obtains new state-of-the-art results for both tasks on both the Penn Treebank (PTB) and Chinese Treebank. Additionally, our model requires fewer self-attention layers compared to existing work. Finally, we find that the Label Attention heads learn relations between syntactic categories and show pathways to analyze errors.", "field": ["Image Models"], "task": ["Constituency Parsing", "Dependency Parsing"], "method": ["Interpretability"], "dataset": ["Penn Treebank"], "metric": ["F1 score", "UAS", "POS", "LAS"], "title": "Rethinking Self-Attention: Towards Interpretability in Neural Parsing"} {"abstract": "Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. 
Source code and pretrained models are available at \\href{https://github.com/clovaai/CutMix-PyTorch}{https://github.com/clovaai/CutMix-PyTorch}.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Generalization", "Image Captioning", "Image Classification", "Object Localization", "Out-of-Distribution Detection"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "Convolution", "CutMix", "ReLU", "Residual Connection", "Grouped Convolution", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO", "CIFAR-100", "CIFAR-10", "ImageNet-A", "PASCAL VOC 2007", "ImageNet"], "metric": ["BLEU-2", "METEOR", "BLEU-1", "Top 1 Accuracy", "Percentage correct", "Top-1 accuracy %", "CIDEr", "BLEU-3", "MAP", "Top 5 Accuracy", "BLEU-4", "ROUGE"], "title": "CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features"} {"abstract": "Recently two-stage detectors have surged ahead of single-shot detectors in\nthe accuracy-vs-speed trade-off. Nevertheless single-shot detectors are\nimmensely popular in embedded vision applications. This paper brings\nsingle-shot detectors up to the same level as current two-stage techniques. We\ndo this by improving training for the state-of-the-art single-shot detector,\nRetinaNet, in three ways: integrating instance mask prediction for the first\ntime, making the loss function adaptive and more stable, and including\nadditional hard examples in training. We call the resulting augmented network\nRetinaMask. The detection component of RetinaMask has the same computational\ncost as the original RetinaNet, but is more accurate. COCO test-dev results are\nup to 41.4 mAP for RetinaMask-101 vs 39.1mAP for RetinaNet-101, while the\nruntime is the same during evaluation. Adding Group Normalization increases the\nperformance of RetinaMask-101 to 41.7 mAP. 
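The CutMix strategy described above can be written in a few lines: cut a random patch from a shuffled copy of the batch, paste it in, and mix the labels in proportion to the patch area. A minimal PyTorch/NumPy sketch (function name and defaults are ours); note that it modifies the batch in place.

```python
import numpy as np
import torch

def cutmix(images, labels, alpha=1.0):
    """Return mixed images, the two label sets, and the mixing coefficient lam."""
    B, _, H, W = images.shape
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(B)
    # Rectangular patch with area ratio (1 - lam), centred at a random location.
    cut_h, cut_w = int(H * np.sqrt(1.0 - lam)), int(W * np.sqrt(1.0 - lam))
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / float(H * W)   # correct lam for clipping at the border
    return images, labels, labels[perm], lam
```

The training loss then becomes `lam * CE(pred, y) + (1 - lam) * CE(pred, y_perm)`.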
Code is\nat:https://github.com/chengyangfu/retinamask", "field": ["Initialization", "Regularization", "Proposal Filtering", "Learning Rate Schedules", "Stochastic Optimization", "Feature Extractors", "RoI Feature Extractors", "Activation Functions", "Convolutional Neural Networks", "Loss Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["Weight Decay", "Self-Adjusting Smooth L1 Loss", "Average Pooling", "RetinaMask", "1x1 Convolution", "RoIAlign", "ResNet", "Convolution", "ReLU", "Residual Connection", "FPN", "Grouped Convolution", "Focal Loss", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "ResNeXt Block", "SGD with Momentum", "ResNeXt", "Feature Pyramid Network", "Group Normalization", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "RetinaMask: Learning to predict masks improves state-of-the-art single-shot detection for free"} {"abstract": "Traditional works have shown that patches in a natural image tend to\nredundantly recur many times inside the image, both within the same scale, as\nwell as across different scales. Make full use of these multi-scale information\ncan improve the image restoration performance. However, the current proposed\ndeep learning based restoration methods do not take the multi-scale information\ninto account. In this paper, we propose a dilated convolution based inception\nmodule to learn multi-scale information and design a deep network for single\nimage super-resolution. Different dilated convolution learns different scale\nfeature, then the inception module concatenates all these features to fuse\nmulti-scale information. In order to increase the reception field of our\nnetwork to catch more contextual information, we cascade multiple inception\nmodules to constitute a deep network to conduct single image super-resolution.\nWith the novel dilated convolution based inception module, the proposed\nend-to-end single image super-resolution network can take advantage of\nmulti-scale information to improve image super-resolution performance.\nExperimental results show that our proposed method outperforms many\nstate-of-the-art single image super-resolution methods.", "field": ["Image Model Blocks", "Convolutions", "Pooling Operations"], "task": ["Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": ["Dilated Convolution", "Inception Module", "Convolution", "1x1 Convolution", "Max Pooling"], "dataset": ["Set5 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Single Image Super-Resolution with Dilated Convolution based Multi-Scale Information Learning Inception Module"} {"abstract": "We present Wasserstein Embedding for Graph Learning (WEGL), a novel and fast framework for embedding entire graphs in a vector space, in which various machine learning models are applicable for graph-level prediction tasks. We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions. Specifically, we use the Wasserstein distance to measure the dissimilarity between node embeddings of different graphs. 
Unlike prior work, we avoid pairwise calculation of distances between graphs and reduce the computational complexity from quadratic to linear in the number of graphs. WEGL calculates Monge maps from a reference distribution to each node embedding and, based on these maps, creates a fixed-sized vector representation of the graph. We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks, showing state-of-the-art classification performance while having superior computational efficiency. The code is available at https://github.com/navid-naderi/WEGL.", "field": ["Graph Embeddings"], "task": ["Graph Classification", "Graph Embedding", "Graph Learning"], "method": ["Wasserstein Embedding for Graph Learning", "WEGL"], "dataset": ["COLLAB", "RE-M12K", "IMDb-B", "ENZYMES", "REDDIT-B", "PROTEINS", "D&D", "NCI1", "IMDb-M", "MUTAG", "PTC", "ogbg-molhiv", "RE-M5K"], "metric": ["ROC-AUC", "Accuracy"], "title": "Wasserstein Embedding for Graph Learning"} {"abstract": "Supervised learning results typically rely on assumptions of i.i.d. data. Unfortunately, those assumptions are commonly violated in practice. In this work, we tackle this problem by focusing on domain generalization: a formalization where the data generating process at test time may yield samples from never-before-seen domains (distributions). Our work relies on a simple lemma: by minimizing a notion of discrepancy between all pairs from a set of given domains, we also minimize the discrepancy between any pairs of mixtures of domains. Using this result, we derive a generalization bound for our setting. We then show that low risk over unseen domains can be achieved by representing the data in a space where (i) the training distributions are indistinguishable, and (ii) relevant information for the task at hand is preserved. Minimizing the terms in our bound yields an adversarial formulation which estimates and minimizes pairwise discrepancies. We validate our proposed strategy on standard domain generalization benchmarks, outperforming a number of recently introduced methods. Notably, we tackle a real-world application where the underlying data corresponds to multi-channel electroencephalography time series from different subjects, each considered as a distinct domain.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Generalization", "Object Recognition", "Representation Learning", "Time Series"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PACS"], "metric": ["Average Accuracy"], "title": "Generalizing to unseen domains via distribution matching"} {"abstract": "We introduce the task of Visual Dialog, which requires an AI agent to hold a\nmeaningful dialog with humans in natural, conversational language about visual\ncontent. Specifically, given an image, a dialog history, and a question about\nthe image, the agent has to ground the question in image, infer context from\nhistory, and answer the question accurately. 
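The fixed-size graph representation described above can be viewed as a linear optimal-transport embedding: solve one OT problem between a shared reference point cloud and each graph's node embeddings, and flatten the induced barycentric (Monge) map. The sketch below assumes the POT library (`pip install pot`), uniform weights, and a precomputed node-embedding step; it is an approximation of the idea, not the released code.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed available)

def wegl_style_embedding(node_embeddings, reference):
    """Map a variable-size set of node embeddings to a fixed-length vector via one OT solve."""
    n, m = reference.shape[0], node_embeddings.shape[0]
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)      # uniform weights
    M = ot.dist(reference, node_embeddings)              # pairwise squared-Euclidean costs, (n, m)
    P = ot.emd(a, b, M)                                  # optimal coupling, (n, m)
    barycentric = (P @ node_embeddings) * n              # image of each reference point under the map
    return (barycentric - reference).flatten()           # same length for every graph
```

Because every graph is mapped against the same reference, the returned vectors all have equal length and can be fed directly to an off-the-shelf classifier, avoiding pairwise graph-to-graph distance computations.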
Visual Dialog is disentangled\nenough from a specific downstream task so as to serve as a general test of\nmachine intelligence, while being grounded in vision enough to allow objective\nevaluation of individual responses and benchmark progress. We develop a novel\ntwo-person chat data-collection protocol to curate a large-scale Visual Dialog\ndataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10\nquestion-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog\nquestion-answer pairs.\n We introduce a family of neural encoder-decoder models for Visual Dialog with\n3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --\nand 2 decoders (generative and discriminative), which outperform a number of\nsophisticated baselines. We propose a retrieval-based evaluation protocol for\nVisual Dialog where the AI agent is asked to sort a set of candidate answers\nand evaluated on metrics such as mean-reciprocal-rank of human response. We\nquantify gap between machine and human performance on the Visual Dialog task\nvia human studies. Putting it all together, we demonstrate the first 'visual\nchatbot'! Our dataset, code, trained models and visual chatbot are available on\nhttps://visualdialog.org", "field": ["Working Memory Models"], "task": ["Chatbot", "Visual Dialog"], "method": ["Memory Network"], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Visual Dialog"} {"abstract": "Person re-identification (re-ID) models trained on one domain often fail to\ngeneralize well to another. In our attempt, we present a \"learning via\ntranslation\" framework. In the baseline, we translate the labeled images from\nsource to target domain in an unsupervised manner. We then train re-ID models\nwith the translated images by supervised methods. Yet, being an essential part\nof this framework, unsupervised image-image translation suffers from the\ninformation loss of source-domain labels during translation.\n Our motivation is two-fold. First, for each image, the discriminative cues\ncontained in its ID label should be maintained after translation. Second, given\nthe fact that two domains have entirely different persons, a translated image\nshould be dissimilar to any of the target IDs. To this end, we propose to\npreserve two types of unsupervised similarities, 1) self-similarity of an image\nbefore and after translation, and 2) domain-dissimilarity of a translated\nsource image and a target image. Both constraints are implemented in the\nsimilarity preserving generative adversarial network (SPGAN) which consists of\nan Siamese network and a CycleGAN. 
Through domain adaptation experiment, we\nshow that images generated by SPGAN are more suitable for domain adaptation and\nyield consistent and competitive re-ID accuracy on two large-scale datasets.", "field": ["Discriminators", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Generative Models", "Skip Connections", "Twin Networks", "Skip Connection Blocks"], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation"], "method": ["Cycle Consistency Loss", "Siamese Network", "Instance Normalization", "PatchGAN", "GAN Least Squares Loss", "Batch Normalization", "Tanh Activation", "Convolution", "ReLU", "CycleGAN", "Residual Connection", "Leaky ReLU", "Residual Block", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["Market to Duke", "MSMT17->DukeMTMC-reID", "DukeMTMC-reID", "Duke to Market", "Market-1501"], "metric": ["rank-10", "mAP", "Rank-10", "MAP", "Rank-1", "rank-1", "Rank-5", "rank-5"], "title": "Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification"} {"abstract": "Named entity recognition (NER) is a widely applicable natural language processing task and building block of question answering, topic modeling, information retrieval, etc. In the medical domain, NER plays a crucial role by extracting meaningful chunks from clinical notes and reports, which are then fed to downstream tasks like assertion status detection, entity resolution, relation extraction, and de-identification. Reimplementing a Bi-LSTM-CNN-Char deep learning architecture on top of Apache Spark, we present a single trainable NER model that obtains new state-of-the-art results on seven public biomedical benchmarks without using heavy contextual embeddings like BERT. This includes improving BC4CHEMD to 93.72% (4.1% gain), Species800 to 80.91% (4.6% gain), and JNLPBA to 81.29% (5.2% gain). In addition, this model is freely available within a production-grade code base as part of the open-source Spark NLP library; can scale up for training and inference in any Spark cluster; has GPU support and libraries for popular programming languages such as Python, R, Scala and Java; and can be extended to support other human languages with no code changes.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Entity Resolution", "Information Retrieval", "Medical Named Entity Recognition", "Named Entity Recognition", "Question Answering", "Relation Extraction"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["LINNAEUS", "JNLPBA", "BC5CDR", "BioNLP13-CG", "Species800", "AnatEM", "NCBI-disease", "BC4CHEMD"], "metric": ["F1"], "title": "Biomedical Named Entity Recognition at Scale"} {"abstract": "NeurST is an open-source toolkit for neural speech translation developed by ByteDance AI Lab. The toolkit mainly focuses on end-to-end speech translation, which is easy to use, modify, and extend to advanced speech translation research and products. 
NeurST aims at facilitating the speech translation research for NLP researchers and provides a complete setup for speech translation benchmarks, including feature extraction, data preprocessing, distributed training, and evaluation. Moreover, The toolkit implements several major architectures for end-to-end speech translation. It shows experimental results for different benchmark datasets, which can be regarded as reliable baselines for future research. The toolkit is publicly available at https://github.com/bytedance/neurst.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Speech-to-Text Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["MuST-C EN->ES", "libri-trans", "MuST-C EN->DE", "MuST-C EN->FR"], "metric": ["Case-sensitive sacreBLEU", "Case-insensitive tokenized BLEU", "Case-insensitive sacreBLEU", "Case-sensitive tokenized BLEU"], "title": "NeurST: Neural Speech Translation Toolkit"} {"abstract": "Medical image datasets are usually imbalanced, due to the high costs of obtaining the data and time-consuming annotations. Training deep neural network models on such datasets to accurately classify the medical condition does not yield desired results and often over-fits the data on majority class samples. In order to address this issue, data augmentation is often performed on training data by position augmentation techniques such as scaling, cropping, flipping, padding, rotation, translation, affine transformation, and color augmentation techniques such as brightness, contrast, saturation, and hue to increase the dataset sizes. These augmentation techniques are not guaranteed to be advantageous in domains with limited data, especially medical image data, and could lead to further overfitting. In this work, we performed data augmentation on the Chest X-rays dataset through generative modeling (deep convolutional generative adversarial network) which creates artificial instances retaining similar characteristics to the original data and evaluation of the model resulted in Fr\\'echet Distance of Inception (FID) score of 1.289.", "field": ["Generative Models", "Convolutions", "Activation Functions", "Normalization"], "task": ["Data Augmentation", "Medical Image Generation"], "method": ["Convolution", "Batch Normalization", "ReLU", "DCGAN", "Deep Convolutional GAN", "Leaky ReLU", "Rectified Linear Units"], "dataset": ["Chest X-Ray Images (Pneumonia)"], "metric": ["Frechet Inception Distance"], "title": "Evaluation of Deep Convolutional Generative Adversarial Networks for data augmentation of chest X-ray images"} {"abstract": "We propose Human Pose Models that represent RGB and depth images of human\nposes independent of clothing textures, backgrounds, lighting conditions, body\nshapes and camera viewpoints. Learning such universal models requires training\nimages where all factors are varied for every human pose. Capturing such data\nis prohibitively expensive. Therefore, we develop a framework for synthesizing\nthe training data. First, we learn representative human poses from a large\ncorpus of real motion captured human skeleton data. 
Next, we fit synthetic 3D\nhumans with different body shapes to each pose and render each from 180 camera\nviewpoints while randomly varying the clothing textures, background and\nlighting. Generative Adversarial Networks are employed to minimize the gap\nbetween synthetic and real image distributions. CNN models are then learned\nthat transfer human poses to a shared high-level invariant space. The learned\nCNN models are then used as invariant feature extractors from real RGB and\ndepth frames of human action videos and the temporal variations are modelled by\nFourier Temporal Pyramid. Finally, linear SVM is used for classification.\nExperiments on three benchmark cross-view human action datasets show that our\nalgorithm outperforms existing methods by significant margins for RGB only and\nRGB-D action recognition.", "field": ["Non-Parametric Classification"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Support Vector Machine", "SVM"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Learning Human Pose Models from Synthesized Data for Robust RGB-D Action Recognition"} {"abstract": "Sign Language Translation (SLT) first uses a Sign Language Recognition (SLR) system to extract sign language glosses from videos. Then, a translation system generates spoken language translations from the sign language glosses. This paper focuses on the translation system and introduces the STMC-Transformer which improves on the current state-of-the-art by over 5 and 7 BLEU respectively on gloss-to-text and video-to-text translation of the PHOENIX-Weather 2014T dataset. On the ASLG-PC12 corpus, we report an increase of over 16 BLEU. We also demonstrate the problem in current methods that rely on gloss supervision. The video-to-text translation of our STMC-Transformer outperforms translation of GT glosses. This contradicts previous claims that GT gloss translation acts as an upper bound for SLT performance and reveals that glosses are an inefficient representation of sign language. For future SLT research, we therefore suggest an end-to-end training of the recognition and translation models, or using a different sign language annotation scheme.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Sign Language Recognition", "Sign Language Translation"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["RWTH-PHOENIX-Weather 2014 T", "ASLG-PC12"], "metric": ["BLEU-4"], "title": "Better Sign Language Translation with STMC-Transformer"} {"abstract": "An automatic table recognition method for interpretation of tabular data in document images majorly involves solving two problems of table detection and table structure recognition. The prior work involved solving both problems independently using two separate approaches. More recent works signify the use of deep learning-based solutions while also attempting to design an end to end solution. 
In this paper, we present an improved deep learning-based end to end approach for solving both problems of table detection and structure recognition using a single Convolution Neural Network (CNN) model. We propose CascadeTabNet: a Cascade mask Region-based CNN High-Resolution Network (Cascade mask R-CNN HRNet) based model that detects the regions of tables and recognizes the structural body cells from the detected tables at the same time. We evaluate our results on ICDAR 2013, ICDAR 2019 and TableBank public datasets. We achieved 3rd rank in ICDAR 2019 post-competition results for table detection while attaining the best accuracy results for the ICDAR 2013 and TableBank dataset. We also attain the highest accuracy results on the ICDAR 2019 table structure recognition dataset. Additionally, we demonstrate effective transfer learning and image augmentation techniques that enable CNNs to achieve very accurate table detection results. Code and dataset has been made available at: https://github.com/DevashishPrasad/CascadeTabNet", "field": ["Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Instance Segmentation Models", "Skip Connections"], "task": ["Image Augmentation", "Table Detection", "Table Recognition", "Transfer Learning"], "method": ["HRNet", "Cascade Mask R-CNN", "Batch Normalization", "Convolution", "ReLU", "Residual Connection", "RoIAlign", "Rectified Linear Units"], "dataset": ["ICDAR2013"], "metric": ["Avg F1"], "title": "CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents"} {"abstract": "Semi-supervised variational autoencoders (VAEs) have obtained strong results, but have also encountered the challenge that good ELBO values do not always imply accurate inference results. In this paper, we investigate and propose two causes of this problem: (1) The ELBO objective cannot utilize the label information directly. (2) A bottleneck value exists and continuing to optimize ELBO after this value will not improve inference accuracy. On the basis of the experiment results, we propose SHOT-VAE to address these problems without introducing additional prior knowledge. The SHOT-VAE offers two contributions: (1) A new ELBO approximation named smooth-ELBO that integrates the label predictive loss into ELBO. (2) An approximation based on optimal interpolation that breaks the ELBO value bottleneck by reducing the margin between ELBO and the data likelihood. The SHOT-VAE achieves good performance with a 25.30% error rate on CIFAR-100 with 10k labels and reduces the error rate to 6.11% on CIFAR-10 with 4k labels.", "field": ["Optimization", "Image Data Augmentation"], "task": ["Semi-Supervised Image Classification", "Variational Inference"], "method": ["Stochastic Gradient Variational Bayes", "Mixup"], "dataset": ["cifar-100, 10000 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations"} {"abstract": "Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. 
Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation. Pretrained BigBiGAN models -- including image generators and encoders -- are available on TensorFlow Hub (https://tfhub.dev/s?publisher=deepmind&q=bigbigan).", "field": ["Convolutional Neural Networks", "Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Regularization", "Attention Modules", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Pooling Operations", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Initialization", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Image Generation", "Representation Learning", "Self-Supervised Image Classification", "Semi-Supervised Image Classification", "Unsupervised Representation Learning"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Average Pooling", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "CReLU", "Softplus", "ResNet", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Reversible Residual Block", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Residual Network", "Non-Local Block", "Pointwise Convolution", "Kaiming Initialization", "Softmax", "RevNet", "BigGAN", "BigBiGAN", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "ImageNet - 1% labeled data", "ImageNet - 10% labeled data"], "metric": ["Top 5 Accuracy", "Number of Params", "Top 1 Accuracy"], "title": "Large Scale Adversarial Representation Learning"} {"abstract": "This paper focuses on the visible-thermal cross-modality person re-identification (VT Re-ID) task, whose goal is to match person images between the daytime visible modality and the nighttime thermal modality. The two-stream network is usually adopted to address the cross-modality discrepancy, the most challenging problem for VT Re-ID, by learning the multi-modality person features. In this paper, we explore how many parameters of two-stream network should share, which is still not well investigated in the existing literature. By well splitting the ResNet50 model to construct the modality-specific feature extracting network and modality-sharing feature embedding network, we experimentally demonstrate the effect of parameters sharing of two-stream network for VT Re-ID. Moreover, in the framework of part-level person feature learning, we propose the hetero-center based triplet loss to relax the strict constraint of traditional triplet loss through replacing the comparison of anchor to all the other samples by anchor center to all the other centers. With the extremely simple means, the proposed method can significantly improve the VT Re-ID performance. 
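The hetero-center triplet loss described above compares centers rather than individual samples: one center per (identity, modality) pair, with the usual margin constraint applied between centers. A rough PyTorch sketch under that reading (batch-wise centers, hardest positive/negative centers; the margin and reduction are our choices, not the authors' code):

```python
import torch
import torch.nn.functional as F

def hetero_center_triplet_loss(features, labels, modalities, margin=0.3):
    """Triplet loss over per-(identity, modality) centers computed within the batch."""
    centers, center_labels = [], []
    for pid in labels.unique():
        for m in modalities.unique():
            mask = (labels == pid) & (modalities == m)
            if mask.any():
                centers.append(features[mask].mean(dim=0))
                center_labels.append(pid)
    centers, center_labels = torch.stack(centers), torch.stack(center_labels)
    dist = torch.cdist(centers, centers)
    loss, count = 0.0, 0
    for i in range(len(centers)):
        pos_mask = center_labels == center_labels[i]
        pos_mask[i] = False                                   # exclude the anchor center itself
        neg_mask = center_labels != center_labels[i]
        if pos_mask.any() and neg_mask.any():
            # Anchor center vs. hardest same-identity and hardest other-identity centers.
            loss = loss + F.relu(dist[i][pos_mask].max() - dist[i][neg_mask].min() + margin)
            count += 1
    return loss / max(count, 1)
```

Replacing sample-to-sample comparisons with center-to-center ones is what relaxes the "strict constraint" of the traditional triplet loss mentioned in the abstract.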
The experimental results on two datasets show that our proposed method distinctly outperforms the state-of-the-art methods by large margins, especially on RegDB dataset achieving superior performance, rank1/mAP/mINP 91.05%/83.28%/68.84%. It can be a new baseline for VT Re-ID, with a simple but effective strategy.", "field": ["Loss Functions"], "task": ["Cross-Modal Person Re-Identification", "Person Re-Identification"], "method": ["Triplet Loss"], "dataset": ["RegDB", "SYSU-MM01"], "metric": ["mAP (All-search & Single-shot)", "rank1", "rank1(V2T)", "mAP(V2T)"], "title": "Parameter Sharing Exploration and Hetero-Center based Triplet Loss for Visible-Thermal Person Re-Identification"} {"abstract": "This paper describes Tacotron 2, a neural network architecture for speech\nsynthesis directly from text. The system is composed of a recurrent\nsequence-to-sequence feature prediction network that maps character embeddings\nto mel-scale spectrograms, followed by a modified WaveNet model acting as a\nvocoder to synthesize timedomain waveforms from those spectrograms. Our model\nachieves a mean opinion score (MOS) of $4.53$ comparable to a MOS of $4.58$ for\nprofessionally recorded speech. To validate our design choices, we present\nablation studies of key components of our system and evaluate the impact of\nusing mel spectrograms as the input to WaveNet instead of linguistic, duration,\nand $F_0$ features. We further demonstrate that using a compact acoustic\nintermediate representation enables significant simplification of the WaveNet\narchitecture.", "field": ["Temporal Convolutions", "Regularization", "Output Functions", "Learning Rate Schedules", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Text-to-Speech Models", "Feedforward Networks", "Pooling Operations", "Generative Audio Models", "Attention Mechanisms", "Skip Connections", "Bidirectional Recurrent Neural Networks"], "task": ["Speech Synthesis"], "method": ["Weight Decay", "Tacotron2", "Long Short-Term Memory", "BiLSTM", "Tanh Activation", "Location Sensitive Attention", "WaveNet", "Convolution", "Bidirectional LSTM", "ReLU", "Residual Connection", "Mixture of Logistic Distributions", "Linear Layer", "Zoneout", "Dilated Causal Convolution", "Batch Normalization", "Exponential Decay", "LSTM", "Tacotron 2", "Dropout", "Rectified Linear Units", "Max Pooling"], "dataset": ["North American English"], "metric": ["Mean Opinion Score"], "title": "Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions"} {"abstract": "Recurrent neural networks (RNNs) are important class of architectures among\nneural networks useful for language modeling and sequential prediction.\nHowever, optimizing RNNs is known to be harder compared to feed-forward neural\nnetworks. A number of techniques have been proposed in literature to address\nthis problem. In this paper we propose a simple technique called fraternal\ndropout that takes advantage of dropout to achieve this goal. Specifically, we\npropose to train two identical copies of an RNN (that share parameters) with\ndifferent dropout masks while minimizing the difference between their\n(pre-softmax) predictions. In this way our regularization encourages the\nrepresentations of RNNs to be invariant to dropout mask, thus being robust. We\nshow that our regularization term is upper bounded by the expectation-linear\ndropout objective which has been shown to address the gap due to the difference\nbetween the train and inference phases of dropout. 
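Fraternal dropout, as described above, trains two weight-sharing copies of the network with independent dropout masks and penalizes the squared difference of their pre-softmax outputs. A minimal PyTorch sketch for a classification setting; `kappa` is the regularization weight and the names are ours.

```python
import torch.nn.functional as F

def fraternal_dropout_step(model, inputs, targets, kappa=0.1):
    """Two forward passes with independent dropout masks, tied by an L2 invariance penalty."""
    model.train()                      # dropout active; masks are resampled on each call
    logits_a = model(inputs)
    logits_b = model(inputs)
    ce = 0.5 * (F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets))
    invariance = (logits_a - logits_b).pow(2).mean()
    return ce + kappa * invariance
```

Because the two passes share parameters, the only extra cost over standard training is one additional forward pass per step.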
We evaluate our model and\nachieve state-of-the-art results in sequence modeling tasks on two benchmark\ndatasets - Penn Treebank and Wikitext-2. We also show that our approach leads\nto performance improvement by a significant margin in image captioning\n(Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "field": ["Regularization"], "task": ["Image Captioning", "Language Modelling"], "method": ["Fraternal Dropout", "Dropout"], "dataset": ["Penn Treebank (Word Level)", "WikiText-2"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params"], "title": "Fraternal Dropout"} {"abstract": "Although two-stage object detectors have continuously advanced the state-of-the-art performance in recent years, the training process itself is far from crystal. In this work, we first point out the inconsistency problem between the fixed network settings and the dynamic training procedure, which greatly affects the performance. For example, the fixed label assignment strategy and regression loss function cannot fit the distribution change of proposals and thus are harmful to training high quality detectors. Consequently, we propose Dynamic R-CNN to adjust the label assignment criteria (IoU threshold) and the shape of regression loss function (parameters of SmoothL1 Loss) automatically based on the statistics of proposals during training. This dynamic design makes better use of the training samples and pushes the detector to fit more high quality samples. Specifically, our method improves upon ResNet-50-FPN baseline with 1.9% AP and 5.5% AP$_{90}$ on the MS COCO dataset with no extra overhead. Codes and models are available at https://github.com/hkzhang95/DynamicRCNN.", "field": ["Object Detection Models", "Initialization", "Output Functions", "Proposal Filtering", "Convolutional Neural Networks", "Feature Extractors", "Activation Functions", "RoI Feature Extractors", "Normalization", "Loss Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Object Detection", "Regression"], "method": ["Average Pooling", "Faster R-CNN", "Dynamic R-CNN", "1x1 Convolution", "Region Proposal Network", "ResNet", "RoIPool", "Convolution", "ReLU", "Residual Connection", "FPN", "Dynamic SmoothL1 Loss", "Deformable Convolution", "RPN", "Non Maximum Suppression", "Soft-NMS", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training"} {"abstract": "In this paper, we propose second-order graph-based neural dependency parsing using message passing and end-to-end neural networks. We empirically show that our approaches match the accuracy of very recent state-of-the-art second-order graph-based neural dependency parsers and have significantly faster speed in both training and testing. 

We also empirically show the advantage of second-order parsing over first-order parsing and observe that the usefulness of the head-selection structured constraint vanishes when using BERT embedding.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Dependency Parsing"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Chinese Pennbank", "Penn Treebank"], "metric": ["UAS", "LAS"], "title": "Second-Order Neural Dependency Parsing with Message Passing and End-to-End Training"} {"abstract": "Predicating macroscopic influences of drugs on human body, like efficacy and\ntoxicity, is a central problem of small-molecule based drug discovery.\nMolecules can be represented as an undirected graph, and we can utilize graph\nconvolution networks to predication molecular properties. However, graph\nconvolutional networks and other graph neural networks all focus on learning\nnode-level representation rather than graph-level representation. Previous\nworks simply sum all feature vectors for all nodes in the graph to obtain the\ngraph feature vector for drug predication. In this paper, we introduce a dummy\nsuper node that is connected with all nodes in the graph by a directed edge as\nthe representation of the graph and modify the graph operation to help the\ndummy super node learn graph-level feature. Thus, we can handle graph-level\nclassification and regression in the same way as node-level classification and\nregression. In addition, we apply focal loss to address class imbalance in drug\ndatasets. The experiments on MoleculeNet show that our method can effectively\nimprove the performance of molecular properties predication.", "field": ["Loss Functions"], "task": ["Drug Discovery", "Regression"], "method": ["Focal Loss"], "dataset": ["MUV", "ToxCast", "HIV dataset", "PCBA", "Tox21"], "metric": ["AUC"], "title": "Learning Graph-Level Representation for Drug Discovery"} {"abstract": "Entity alignment is a viable means for integrating heterogeneous knowledge among different knowledge graphs (KGs). Recent developments in the field often take an embedding-based approach to model the structural information of KGs so that entity alignment can be easily performed in the embedding space. However, most existing works do not explicitly utilize useful relation representations to assist in entity alignment, which, as we will show in the paper, is a simple yet effective way for improving entity alignment. This paper presents a novel joint learning framework for entity alignment. At the core of our approach is a Graph Convolutional Network (GCN) based framework for learning both entity and relation representations. Rather than relying on pre-aligned relation seeds to learn relation representations, we first approximate them using entity embeddings learned by the GCN. We then incorporate the relation approximation into entities to iteratively learn better representations for both. 
Experiments performed on three real-world cross-lingual datasets show that our approach substantially outperforms state-of-the-art entity alignment methods.", "field": ["Graph Models"], "task": ["Entity Alignment", "Entity Embeddings", "Knowledge Graphs"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["DBP15k zh-en"], "metric": ["Hits@1"], "title": "Jointly Learning Entity and Relation Representations for Entity Alignment"} {"abstract": "In this paper, we present {GraRep}, a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of Perozzi et al. as well as the skip-gram model with negative sampling of Mikolov et al. We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.", "field": ["Graph Embeddings"], "task": ["Node Classification"], "method": ["DeepWalk", "Graph Representation with Global structure", "GraRep"], "dataset": ["BlogCatalog", "20NEWS"], "metric": ["Macro-F1", "Accuracy"], "title": "GraRep: Learning Graph Representations with Global Structural Information"} {"abstract": "We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Information Retrieval", "Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SQuAD1.1 dev"], "metric": ["EM"], "title": "End-to-End Open-Domain Question Answering with BERTserini"} {"abstract": "Unsupervised visual representation learning remains a largely unsolved\nproblem in computer vision research. Among a big body of recently proposed\napproaches for unsupervised learning of visual representations, a class of\nself-supervised techniques achieves superior performance on many challenging\nbenchmarks. 
A large number of the pretext tasks for self-supervised learning\nhave been studied, but other important aspects, such as the choice of\nconvolutional neural networks (CNN), have not received equal attention.\nTherefore, we revisit numerous previously proposed self-supervised models,\nconduct a thorough large scale study and, as a result, uncover multiple crucial\ninsights. We challenge a number of common practices in self-supervised visual\nrepresentation learning and observe that standard recipes for CNN design do not\nalways translate to self-supervised representation learning. As part of our\nstudy, we drastically boost the performance of previously proposed techniques\nand outperform previously published state-of-the-art results by a large margin.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Revisiting Self-Supervised Visual Representation Learning"} {"abstract": "Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose Video Inference for Body Pose and Shape Estimation (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. 
Code and pretrained models are available at https://github.com/mkocabas/VIBE.", "field": ["Initialization", "Convolutional Neural Networks", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "3D Shape Reconstruction", "Motion Capture", "Pose Estimation", "Regression"], "method": ["Gated Recurrent Unit", "ResNet", "Generative Adversarial Network", "Average Pooling", "Residual Block", "GAN", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "GRU", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Human3.6M", "3DPW"], "metric": ["Average MPJPE (mm)", "PA-MPJPE", "MPVPE", "Using 2D ground-truth joints", "Multi-View or Monocular", "MPJPE"], "title": "VIBE: Video Inference for Human Body Pose and Shape Estimation"} {"abstract": "In this paper, we propose a learning-based approach to the task of automatically extracting a \"wireframe\" representation for images of cluttered man-made environments. The wireframe (see Fig. 1) contains all salient straight lines and their junctions of the scene that encode efficiently and accurately large-scale geometry and object shapes. To this end, we have built a very large new dataset of over 5,000 images with wireframes thoroughly labelled by humans. We have proposed two convolutional neural networks that are suitable for extracting junctions and lines with large spatial support, respectively. The networks trained on our dataset have achieved significantly better performance than state-of-the-art methods for junction detection and line segment detection, respectively. We have conducted extensive experiments to evaluate quantitatively and qualitatively the wireframes obtained by our method, and have convincingly shown that effectively and efficiently parsing wireframes for images of man-made environments is a feasible goal within reach. Such wireframes could benefit many important visual tasks such as feature correspondence, 3D reconstruction, vision-based mapping, localization, and navigation. The data and source code are available at https://github.com/huangkuns/wireframe.", "field": ["Graph Embeddings"], "task": ["3D Reconstruction", "Junction Detection", "Line Segment Detection"], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["York Urban Dataset", "wireframe dataset"], "metric": ["sAP15", "sAP10", "F1 score", "sAP5"], "title": "Learning to Parse Wireframes in Images of Man-Made Environments"} {"abstract": "In this paper, we propose a unified panoptic segmentation network (UPSNet)\nfor tackling the newly proposed panoptic segmentation task. On top of a single\nbackbone residual network, we first design a deformable convolution based\nsemantic segmentation head and a Mask R-CNN style instance segmentation head\nwhich solve these two subtasks simultaneously. More importantly, we introduce a\nparameter-free panoptic head which solves the panoptic segmentation via\npixel-wise classification. It first leverages the logits from the previous two\nheads and then innovatively expands the representation for enabling prediction\nof an extra unknown class which helps better resolve the conflicts between\nsemantic and instance segmentation. 
Additionally, it handles the challenge\ncaused by the varying number of instances and permits back propagation to the\nbottom modules in an end-to-end manner. Extensive experimental results on\nCityscapes, COCO and our internal dataset demonstrate that our UPSNet achieves\nstate-of-the-art performance with much faster inference. Code has been made\navailable at: https://github.com/uber-research/UPSNet", "field": ["Initialization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "RoIAlign", "Mask R-CNN", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Deformable Convolution"], "dataset": ["Cityscapes val", "COCO test-dev", "KITTI Panoptic Segmentation", "Indian Driving Dataset"], "metric": ["PQst", "mIoU", "PQth", "PQ", "AP"], "title": "UPSNet: A Unified Panoptic Segmentation Network"} {"abstract": "Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available.", "field": ["Convolutions"], "task": ["Image Classification"], "method": ["Convolution"], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "On the Relationship between Self-Attention and Convolutional Layers"} {"abstract": "Precise 3D segmentation of infant brain tissues is an essential step towards\ncomprehensive volumetric studies and quantitative analysis of early brain\ndevelopement. However, computing such segmentations is very challenging,\nespecially for 6-month infant brain, due to the poor image quality, among other\ndifficulties inherent to infant brain MRI, e.g., the isointense contrast\nbetween white and gray matter and the severe partial volume effect due to small\nbrain sizes. This study investigates the problem with an ensemble of semi-dense\nfully convolutional neural networks (CNNs), which employs T1-weighted and\nT2-weighted MR images as input. We demonstrate that the ensemble agreement is\nhighly correlated with the segmentation errors. Therefore, our method provides\nmeasures that can guide local user corrections. To the best of our knowledge,\nthis work is the first ensemble of 3D CNNs for suggesting annotations within\nimages. 
Furthermore, inspired by the very recent success of dense networks, we\npropose a novel architecture, SemiDenseNet, which connects all convolutional\nlayers directly to the end of the network. Our architecture allows the\nefficient propagation of gradients during training, while limiting the number\nof parameters, requiring one order of magnitude less parameters than popular\nmedical image segmentation networks such as 3D U-Net. Another contribution of\nour work is the study of the impact that early or late fusions of multiple\nimage modalities might have on the performances of deep architectures. We\nreport evaluations of our method on the public data of the MICCAI iSEG-2017\nChallenge on 6-month infant brain MRI segmentation, and show very competitive\nresults among 21 teams, ranking first or second in most metrics.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Infant Brain Mri Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["iSEG 2017 Challenge"], "metric": ["Dice Score"], "title": "Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation"} {"abstract": "Representation learning has become an invaluable approach for learning from\nsymbolic data such as text and graphs. However, while complex symbolic datasets\noften exhibit a latent hierarchical structure, state-of-the-art methods\ntypically learn embeddings in Euclidean vector spaces, which do not account for\nthis property. For this purpose, we introduce a new approach for learning\nhierarchical representations of symbolic data by embedding them into hyperbolic\nspace -- or more precisely into an n-dimensional Poincar\\'e ball. Due to the\nunderlying hyperbolic geometry, this allows us to learn parsimonious\nrepresentations of symbolic data by simultaneously capturing hierarchy and\nsimilarity. We introduce an efficient algorithm to learn the embeddings based\non Riemannian optimization and show experimentally that Poincar\\'e embeddings\noutperform Euclidean embeddings significantly on data with latent hierarchies,\nboth in terms of representation capacity and in terms of generalization\nability.", "field": ["Word Embeddings"], "task": ["Graph Embedding", "Hierarchical structure", "Representation Learning"], "method": ["Poincar\u00e9 Embeddings"], "dataset": ["WordNet"], "metric": ["Accuracy"], "title": "Poincar\u00e9 Embeddings for Learning Hierarchical Representations"} {"abstract": "Despite remarkable recent progress on both unconditional and conditional image synthesis, it remains a long-standing problem to learn generative models that are capable of synthesizing realistic and sharp images from reconfigurable spatial layout (i.e., bounding boxes + class labels in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors), especially at high resolution. By reconfigurable, it means that a model can preserve the intrinsic one-to-many mapping from a given layout to multiple plausible images with different styles, and is adaptive with respect to perturbations of a layout and style latent code. In this paper, we present a layout- and style-based architecture for generative adversarial networks (termed LostGANs) that can be trained end-to-end to generate images from reconfigurable layout and style. 
Inspired by the vanilla StyleGAN, the proposed LostGAN consists of two new components: (i) learning fine-grained mask maps in a weakly-supervised manner to bridge the gap between layouts and images, and (ii) learning object instance-specific layout-aware feature normalization (ISLA-Norm) in the generator to realize multi-object style generation. In experiments, the proposed method is tested on the COCO-Stuff dataset and the Visual Genome dataset with state-of-the-art performance obtained. The code and pretrained models are available at \\url{https://github.com/iVMCL/LostGANs}.", "field": ["Regularization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Image Generation", "Layout-to-Image Generation"], "method": ["Feedforward Network", "Convolution", "Adaptive Instance Normalization", "Leaky ReLU", "R1 Regularization", "StyleGAN", "Dense Connections"], "dataset": ["COCO-Stuff 64x64", "Visual Genome 128x128", "COCO-Stuff 128x128", "Visual Genome 64x64"], "metric": ["Inception Score", "SceneFID", "FID"], "title": "Image Synthesis From Reconfigurable Layout and Style"} {"abstract": "Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks. Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policy. Through finding the best policy in well-designed search space of data augmentation, AutoAugment can significantly improve validation accuracy on image classification tasks. However, this approach is not computationally practical for large-scale problems. In this paper, we develop an adversarial method to arrive at a computationally-affordable solution called Adversarial AutoAugment, which can simultaneously optimize target related object and augmentation policy search loss. The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization. In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network. Compared to AutoAugment, this leads to about 12x reduction in computing cost and 11x shortening in time overhead on ImageNet. We show experimental results of our approach on CIFAR-10/CIFAR-100, ImageNet, and demonstrate significant performance improvements over state-of-the-art. On CIFAR-10, we achieve a top-1 test error of 1.36%, which is the currently best performing single model. On ImageNet, we achieve a leading performance of top-1 accuracy 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.", "field": ["Recurrent Neural Networks", "Activation Functions", "Image Data Augmentation"], "task": ["Data Augmentation", "Image Classification"], "method": ["Long Short-Term Memory", "AutoAugment", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Adversarial AutoAugment"} {"abstract": "Monocular head pose estimation requires learning a model that computes the\nintrinsic Euler angles for pose (yaw, pitch, roll) from an input image of human\nface. Annotating ground truth head pose angles for images in the wild is\ndifficult and requires ad-hoc fitting procedures (which provides only coarse\nand approximate annotations). 
This highlights the need for approaches which can\ntrain on data captured in a controlled environment and generalize on the images\nin the wild (with varying appearance and illumination of the face). Most\npresent day deep learning approaches which learn a regression function directly\non the input images fail to do so. To this end, we propose to use a higher\nlevel representation to regress the head pose while using deep learning\narchitectures. More specifically, we use the uncertainty maps in the form of 2D\nsoft localization heatmap images over five facial keypoints, namely left ear,\nright ear, left eye, right eye and nose, and pass them through a convolutional\nneural network to regress the head-pose. We show head pose estimation results\non two challenging benchmarks BIWI and AFLW and our approach surpasses the\nstate of the art on both the datasets.", "field": ["Output Functions"], "task": ["Head Pose Estimation", "Pose Estimation", "Regression"], "method": ["Heatmap"], "dataset": ["AFLW"], "metric": ["MAE"], "title": "Nose, eyes and ears: Head pose estimation by locating facial keypoints"} {"abstract": "Deep learning has been demonstrated to achieve excellent results for image\nclassification and object detection. However, the impact of deep learning on\nvideo analysis (e.g. action detection and recognition) has been limited due to\ncomplexity of video data and lack of annotations. Previous convolutional neural\nnetworks (CNN) based video action detection approaches usually consist of two\nmajor steps: frame-level action proposal detection and association of proposals\nacross frames. Also, these methods employ two-stream CNN framework to handle\nspatial and temporal feature separately. In this paper, we propose an\nend-to-end deep network called Tube Convolutional Neural Network (T-CNN) for\naction detection in videos. The proposed architecture is a unified network that\nis able to recognize and localize action based on 3D convolution features. A\nvideo is first divided into equal length clips and for each clip a set of tube\nproposals are generated next based on 3D Convolutional Network (ConvNet)\nfeatures. Finally, the tube proposals of different clips are linked together\nemploying network flow and spatio-temporal action detection is performed using\nthese linked video proposals. Extensive experiments on several video datasets\ndemonstrate the superior performance of T-CNN for classifying and localizing\nactions in both trimmed and untrimmed videos compared to state-of-the-arts.", "field": ["Convolutions"], "task": ["Action Detection", "Image Classification", "Object Detection"], "method": ["3D Convolution", "Convolution"], "dataset": ["UCF101-24"], "metric": ["Video-mAP 0.1", "Video-mAP 0.2"], "title": "Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos"} {"abstract": "Pedestrians in videos have a wide range of appearances such as body poses,\nocclusions, and complex backgrounds, and there exists the proposal shift\nproblem in pedestrian detection that causes the loss of body parts such as head\nand legs. To address it, we propose part-level convolutional neural networks\n(CNN) for pedestrian detection using saliency and boundary box alignment in\nthis paper. The proposed network consists of two sub-networks: detection and\nalignment. We use saliency in the detection sub-network to remove false\npositives such as lamp posts and trees. 
We adopt bounding box alignment on\ndetection proposals in the alignment sub-network to address the proposal shift\nproblem. First, we combine FCN and CAM to extract deep features for pedestrian\ndetection. Then, we perform part-level CNN to recall the lost body parts.\nExperimental results on various datasets demonstrate that the proposed method\nremarkably improves accuracy in pedestrian detection and outperforms existing\nstate-of-the-arts in terms of log average miss rate at false position per image\n(FPPI).", "field": ["Convolutions", "Interpretability", "Pooling Operations", "Semantic Segmentation Models"], "task": ["Pedestrian Detection"], "method": ["CAM", "Class-activation map", "Convolution", "Fully Convolutional Network", "Max Pooling", "FCN"], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Part-Level Convolutional Neural Networks for Pedestrian Detection Using Saliency and Boundary Box Alignment"} {"abstract": "Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo---namely, using an MLP projection head and more data augmentation---we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.", "field": ["Self-Supervised Learning", "Image Data Augmentation", "Initialization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Data Augmentation", "Representation Learning", "Self-Supervised Image Classification"], "method": ["InfoNCE", "Cosine Annealing", "Average Pooling", "MoCo v2", "1x1 Convolution", "ResNet", "MoCo", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Feedforward Network", "Momentum Contrast", "Random Resized Crop", "Batch Normalization", "Residual Network", "Kaiming Initialization", "SGD with Momentum", "Random Gaussian Blur", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Number of Params", "Top 1 Accuracy"], "title": "Improved Baselines with Momentum Contrastive Learning"} {"abstract": "We introduce, TextureNet, a neural network architecture designed to extract\nfeatures from high-resolution signals associated with 3D surface meshes (e.g.,\ncolor texture maps). The key idea is to utilize a 4-rotational symmetric\n(4-RoSy) field to define a domain for convolution on a surface. Though 4-RoSy\nfields have several properties favorable for convolution on surfaces (low\ndistortion, few singularities, consistent parameterization, etc.), orientations\nare ambiguous up to 4-fold rotation at any sample point. So, we introduce a new\nconvolutional operator invariant to the 4-RoSy ambiguity and use it in a\nnetwork to extract features from high-resolution signals on geodesic\nneighborhoods of a surface. In comparison to alternatives, such as PointNet\nbased methods which lack a notion of orientation, the coherent structure given\nby these neighborhoods results in significantly stronger features. 
As an\nexample application, we demonstrate the benefits of our architecture for 3D\nsemantic segmentation of textured 3D meshes. The results show that our method\noutperforms all existing methods on the basis of mean IoU by a significant\nmargin in both geometry-only (6.4%) and RGB+Geometry (6.9-8.2%) settings.", "field": ["Convolutions"], "task": ["3D Semantic Segmentation", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes"} {"abstract": "We demonstrate that replacing an LSTM encoder with a self-attentive\narchitecture can lead to improvements to a state-of-the-art discriminative\nconstituency parser. The use of attention makes explicit the manner in which\ninformation is propagated between different locations in the sentence, which we\nuse to both analyze our model and propose potential improvements. For example,\nwe find that separating positional and content information in the encoder can\nlead to improved parsing accuracy. Additionally, we evaluate different\napproaches for lexical representation. Our parser achieves new state-of-the-art\nresults for single models trained on the Penn Treebank: 93.55 F1 without the\nuse of any external data, and 95.13 F1 when using pre-trained word\nrepresentations. Our parser also outperforms the previous best-published\naccuracy figures on 8 of the 9 languages in the SPMRL dataset.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Constituency Parsing"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Constituency Parsing with a Self-Attentive Encoder"} {"abstract": "Entity mentions embedded in longer entity mentions are referred to as nested entities. Most named entity recognition (NER) systems deal only with the flat entities and ignore the inner nested ones, which fails to capture finer-grained semantic information in underlying texts. To address this issue, we propose a novel neural model to identify nested entities by dynamically stacking flat NER layers. Each flat NER layer is based on the state-of-the-art flat NER model that captures sequential context representation with bidirectional Long Short-Term Memory (LSTM) layer and feeds it to the cascaded CRF layer. Our model merges the output of the LSTM layer in the current flat NER layer to build new representation for detected entities and subsequently feeds them into the next flat NER layer. This allows our model to extract outer entities by taking full advantage of information encoded in their corresponding inner entities, in an inside-to-outside way. Our model dynamically stacks the flat NER layers until no outer entities are extracted. 
Extensive evaluation shows that our dynamic model outperforms state-of-the-art feature-based systems on nested NER, achieving 74.7% and 72.2% on GENIA and ACE2005 datasets, respectively, in terms of F-score.", "field": ["Recurrent Neural Networks", "Activation Functions", "Structured Prediction"], "task": ["Entity Linking", "Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition", "Relation Extraction"], "method": ["Conditional Random Field", "Long Short-Term Memory", "CRF", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["GENIA", "ACE 2005"], "metric": ["F1"], "title": "A Neural Layered Model for Nested Named Entity Recognition"} {"abstract": "Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.", "field": ["Convolutions"], "task": ["Image Super-Resolution", "Super-Resolution"], "method": ["Convolution"], "dataset": ["BSD100 - 4x upscaling", "Urban100 - 8x upscaling", "Set14 - 2x upscaling", "BSD100 - 2x upscaling", "Urban100 - 3x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling", "Set5 - 3x upscaling", "Manga109 - 3x upscaling", "Set14 - 4x upscaling", "Set14 - 3x upscaling", "Set5 - 4x upscaling", "Set14 - 8x upscaling", "Manga109 - 8x upscaling", "Manga109 - 4x upscaling", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "Manga109 - 2x upscaling", "Set5 - 8x upscaling", "BSD100 - 8x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Single Image Super-Resolution via a Holistic Attention Network"} {"abstract": "Modern object detectors can rarely achieve short training time, fast inference speed, and high accuracy at the same time. To strike a balance among them, we propose the Training-Time-Friendly Network (TTFNet). In this work, we start with light-head, single-stage, and anchor-free designs, which enable fast inference speed. Then, we focus on shortening training time. We notice that encoding more training samples from annotated boxes plays a similar role as increasing batch size, which helps enlarge the learning rate and accelerate the training process. To this end, we introduce a novel approach using Gaussian kernels to encode training samples. Besides, we design the initiative sample weights for better information utilization. Experiments on MS COCO show that our TTFNet has great advantages in balancing training time, inference speed, and accuracy. It has reduced training time by more than seven times compared to previous real-time detectors while maintaining state-of-the-art performances. 
In addition, our super-fast version of TTFNet-18 and TTFNet-53 can outperform SSD300 and YOLOv3 by less than one-tenth of their training time, respectively. The code has been made available at \\url{https://github.com/ZJULearning/ttfnet}.", "field": ["Generalized Linear Models", "Output Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Clustering", "Pooling Operations", "Skip Connections", "Object Detection Models"], "task": ["Object Detection", "Real-Time Object Detection"], "method": ["Logistic Regression", "k-Means Clustering", "YOLOv3", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "Residual Connection", "Darknet-53", "Global Average Pooling"], "dataset": ["COCO"], "metric": ["inference time (ms)", "FPS", "MAP"], "title": "Training-Time-Friendly Network for Real-Time Object Detection"} {"abstract": "Attention-based pre-trained language models such as GPT-2 brought considerable progress to end-to-end dialogue modelling. However, they also present considerable risks for task-oriented dialogue, such as lack of knowledge grounding or diversity. To address these issues, we introduce modified training objectives for language model finetuning, and we employ massive data augmentation via back-translation to increase the diversity of the training data. We further examine the possibilities of combining data from multiples sources to improve performance on the target dataset. We carefully evaluate our contributions with both human and automatic methods. Our model achieves state-of-the-art performance on the MultiWOZ data and shows competitive performance in human evaluation.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Fine-Tuning", "Skip Connections"], "task": ["End-To-End Dialogue Modelling"], "method": ["Weight Decay", "Cosine Annealing", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Discriminative Fine-Tuning", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GPT-2", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MULTIWOZ 2.1", "MULTIWOZ 2.0"], "metric": ["MultiWOZ (Inform)", "BLEU", "MultiWOZ (Success)"], "title": "AuGPT: Dialogue with Pre-trained Language Models and Data Augmentation"} {"abstract": "We propose and study a task we name panoptic segmentation (PS). Panoptic\nsegmentation unifies the typically distinct tasks of semantic segmentation\n(assign a class label to each pixel) and instance segmentation (detect and\nsegment each object instance). The proposed task requires generating a coherent\nscene segmentation that is rich and complete, an important step toward\nreal-world vision systems. While early work in computer vision addressed\nrelated image/scene parsing tasks, these are not currently popular, possibly\ndue to lack of appropriate metrics or associated recognition challenges. To\naddress this, we propose a novel panoptic quality (PQ) metric that captures\nperformance for all classes (stuff and things) in an interpretable and unified\nmanner. Using the proposed metric, we perform a rigorous study of both human\nand machine performance for PS on three existing datasets, revealing\ninteresting insights about the task. 
The aim of our work is to revive the\ninterest of the community in a more unified view of image segmentation.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Scene Parsing", "Scene Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes val"], "metric": ["PQst", "PQ", "AP", "PQth"], "title": "Panoptic Segmentation"} {"abstract": "With the development of the super-resolution convolutional neural network (SRCNN), deep learning technique has been widely applied in the field of image super-resolution. Previous works mainly focus on optimizing the structure of SRCNN, which have been achieved well performance in speed and restoration quality for image super-resolution. However, most of these approaches only consider a specific scale image during the training process, while ignoring the relationship between different scales of images. Motivated by this concern, in this paper, we propose a cascaded convolution neural network for image super-resolution (CSRCNN), which includes three cascaded Fast SRCNNs and each Fast SRCNN can process a specific scale image. Images of different scales can be trained simultaneously and the learned network can make full use of the information resided in different scales of images. Extensive experiments show that our network can achieve well performance for image SR.", "field": ["Convolutions"], "task": ["Image Super-Resolution", "Super-Resolution"], "method": ["Convolution"], "dataset": ["BSD200 - 2x upscaling", "Set14 - 2x upscaling", "Set14 - 4x upscaling", "Set5 - 4x upscaling", "Set14 - 8x upscaling", "Set5 - 8x upscaling", "Set5 - 2x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Cascade Convolutional Neural Network for Image Super-Resolution"} {"abstract": "Recently, a simple combination of passage retrieval using off-the-shelf IR\ntechniques and a BERT reader was found to be very effective for question\nanswering directly on Wikipedia, yielding a large improvement over the previous\nstate of the art on a standard benchmark dataset. In this paper, we present a\ndata augmentation technique using distant supervision that exploits positive as\nwell as negative examples. We apply a stage-wise approach to fine tuning BERT\non multiple datasets, starting with data that is \"furthest\" from the test data\nand ending with the \"closest\". 
Experimental results show large gains in\neffectiveness over previous approaches on English QA datasets, and we establish\nnew baselines on two recent Chinese QA datasets.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Data Augmentation", "Open-Domain Question Answering", "Question Answering"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SQuAD1.1 dev"], "metric": ["EM"], "title": "Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering"} {"abstract": "In human parsing, the pixel-wise classification loss has drawbacks in its\nlow-level local inconsistency and high-level semantic inconsistency. The\nintroduction of the adversarial network tackles the two problems using a single\ndiscriminator. However, the two types of parsing inconsistency are generated by\ndistinct mechanisms, so it is difficult for a single discriminator to solve\nthem both. To address the two kinds of inconsistencies, this paper proposes the\nMacro-Micro Adversarial Net (MMAN). It has two discriminators. One\ndiscriminator, Macro D, acts on the low-resolution label map and penalizes\nsemantic inconsistency, e.g., misplaced body parts. The other discriminator,\nMicro D, focuses on multiple patches of the high-resolution label map to\naddress the local inconsistency, e.g., blur and hole. Compared with traditional\nadversarial networks, MMAN not only enforces local and semantic consistency\nexplicitly, but also avoids the poor convergence problem of adversarial\nnetworks when handling high resolution images. In our experiment, we validate\nthat the two discriminators are complementary to each other in improving the\nhuman parsing accuracy. The proposed framework is capable of producing\ncompetitive parsing performance compared with the state-of-the-art methods,\ni.e., mIoU=46.81% and 59.91% on LIP and PASCAL-Person-Part, respectively. On a\nrelatively small dataset PPSS, our pre-trained model demonstrates impressive\ngeneralization ability. The code is publicly available at\nhttps://github.com/RoyalVane/MMAN.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Human Parsing", "Human Part Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["LIP val"], "metric": ["mIoU"], "title": "Macro-Micro Adversarial Network for Human Parsing"} {"abstract": "Organized relational knowledge in the form of \"knowledge graphs\" is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. 
We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large (119,474 examples) supervised relation extraction dataset obtained via crowdsourcing and targeted towards TAC KBP relations. The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance. When the model trained on this new dataset replaces the previous relation extraction component of the best TAC KBP 2015 slot filling system, its F1 score increases markedly from 22.2% to 26.7%.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Knowledge Base Population", "Knowledge Graphs", "Relation Extraction", "Slot Filling"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["TACRED", "Re-TACRED"], "metric": ["F1"], "title": "Position-aware Attention and Supervised Data Improve Slot Filling"} {"abstract": "Generating multi-sentence descriptions for videos is one of the most challenging captioning tasks due to its high requirements for not only visual relevance but also discourse-based coherence across the sentences in the paragraph. Towards this goal, we propose a new approach called Memory-Augmented Recurrent Transformer (MART), which uses a memory module to augment the transformer architecture. The memory module generates a highly summarized memory state from the video segments and the sentence history so as to help better prediction of the next sentence (w.r.t. coreference and repetition aspects), thus encouraging coherent paragraph generation. Extensive experiments, human evaluations, and qualitative analyses on two popular datasets ActivityNet Captions and YouCookII show that MART generates more coherent and less repetitive paragraph captions than baseline methods, while maintaining relevance to the input video events. All code is available open-source at: https://github.com/jayleicn/recurrent-transformer", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Video Captioning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["ActivityNet Captions"], "metric": ["BLEU4", "METEOR", "CIDEr"], "title": "MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning"} {"abstract": "Person-person mutual action recognition (also referred to as interaction recognition) is an important research branch of human activity analysis. Current solutions in the field -- mainly dominated by CNNs, GCNs and LSTMs -- often consist of complicated architectures and mechanisms to embed the relationships between the two persons on the architecture itself, to ensure the interaction patterns can be properly learned. Our main contribution with this work is by proposing a simpler yet very powerful architecture, named Interaction Relational Network, which utilizes minimal prior knowledge about the structure of the human body. We drive the network to identify by itself how to relate the body parts from the individuals interacting. 
In order to better represent the interaction, we define two different relationships, leading to specialized architectures and models for each. These multiple relationship models will then be fused into a single and special architecture, in order to leverage both streams of information for further enhancing the relational reasoning capability. Furthermore we define important structured pair-wise operations to extract meaningful extra information from each pair of joints -- distance and motion. Ultimately, with the coupling of an LSTM, our IRN is capable of paramount sequential relational reasoning. These important extensions we made to our network can also be valuable to other problems that require sophisticated relational reasoning. Our solution is able to achieve state-of-the-art performance on the traditional interaction recognition datasets SBU and UT, and also on the mutual actions from the large-scale dataset NTU RGB+D. Furthermore, it obtains competitive performance in the NTU RGB+D 120 dataset interactions subset.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Recognition", "Human Interaction Recognition", "Relational Reasoning"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NTU RGB+D", "NTU RGB+D 120", "UT-Interaction", "SBU"], "metric": ["Accuracy (Cross-View)", "Accuracy (Cross-Subject)", "Accuracy (Set 1)", "Accuracy (Cross-Setup)", "Accuracy", "Accuracy (Set 2)"], "title": "Interaction Relational Network for Mutual Action Recognition"} {"abstract": "Tokenization is the first step of many natural language processing (NLP) tasks and plays an important role for neural NLP models. Tokenizaton method such as byte-pair encoding (BPE), which can greatly reduce the large vocabulary and deal with out-of-vocabulary words, has shown to be effective and is widely adopted for sequence generation tasks. While various tokenization methods exist, there is no common acknowledgement which is the best. In this work, we propose to leverage the mixed representations from different tokenization methods for sequence generation tasks, in order to boost the model performance with unique characteristics and advantages of individual tokenization methods. Specifically, we introduce a new model architecture to incorporate mixed representations and a co-teaching algorithm to better utilize the diversity of different tokenization methods. Our approach achieves significant improvements on neural machine translation (NMT) tasks with six language pairs (e.g., English\u2194German, English\u2194Romanian), as well as an abstractive summarization task.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Abstractive Text Summarization", "Machine Translation", "Tokenization"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["IWSLT2014 English-German", "IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "Sequence Generation with Mixed Representations"} {"abstract": "This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. 
We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["AMR Parsing", "Feature Engineering", "Reading Comprehension", "Text Generation"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["LDC2014T12"], "metric": ["F1 Full"], "title": "Getting the Most out of AMR Parsing"} {"abstract": "The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical context, where the visual dynamics are believed to have modular structures that can be learned with compositional subsystems. This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate in nearly independent transition manners, and finally form unified representations of the complex environment. Concretely, besides the original memory cell of LSTM, this network is featured by a zigzag memory flow that propagates in both bottom-up and top-down directions across all layers, enabling the learned visual dynamics at different levels of RNNs to communicate. It also leverages a memory decoupling loss to keep the memory cells from learning redundant features. We further improve PredRNN with a new curriculum learning strategy, which can be generalized to most sequence-to-sequence RNNs in predictive learning scenarios. We provide detailed ablation studies, gradient analyses, and visualizations to verify the effectiveness of each component. We show that our approach obtains highly competitive results on three standard datasets: the synthetic Moving MNIST dataset, the KTH human action dataset, and a radar echo dataset for precipitation forecasting.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Video Prediction"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["KTH", "Moving MNIST"], "metric": ["SSIM", "MSE", "PSNR", "LPIPS"], "title": "PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning"} {"abstract": "To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to constructing EEG-based emotion recognition models for three emotions: positive, neutral and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiments twice at the interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable with the best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined by using the weights of trained DBNs are consistent with the existing observations. 
In addition, our experimental results show that neural signatures associated with different emotions do exist and that they share commonality across sessions and individuals. We compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.", "field": ["Non-Parametric Classification"], "task": ["EEG", "Emotion Recognition"], "method": ["Support Vector Machine", "SVM"], "dataset": ["SEED-IV", "\u3000SEED"], "metric": ["Accuracy"], "title": "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks"} {"abstract": "Data augmentation is a critical component of training deep learning models. Although data augmentation has been shown to significantly improve image classification, its potential has not been thoroughly investigated for object detection. Given the additional cost of annotating images for object detection, data augmentation may be of even greater importance for this computer vision task. In this work, we study the impact of data augmentation on object detection. We first demonstrate that data augmentation operations borrowed from image classification may be helpful for training detection models, but the improvement is limited. Thus, we investigate how learned, specialized data augmentation policies improve generalization performance for detection models. Importantly, these augmentation policies only affect training and leave a trained model unchanged during evaluation. Experiments on the COCO dataset indicate that an optimized data augmentation policy improves detection accuracy by more than +2.3 mAP, and allows a single inference model to achieve a state-of-the-art accuracy of 50.7 mAP. Importantly, the best policy found on COCO may be transferred unchanged to other detection datasets and models to improve predictive accuracy. For example, the best augmentation policy identified with COCO improves a strong baseline on PASCAL-VOC by +2.7 mAP. Our results also reveal that a learned augmentation policy is superior to state-of-the-art architecture regularization methods for object detection, even when considering strong baselines. 
Code for training with the learned policy is available online at https://github.com/tensorflow/tpu/tree/master/models/official/detection", "field": ["Convolutional Neural Networks", "Feature Extractors", "Normalization", "Policy Gradient Methods", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Region Proposal", "Object Detection Models", "Stochastic Optimization", "Recurrent Neural Networks", "Loss Functions", "Skip Connection Blocks", "Image Data Augmentation", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Skip Connections"], "task": ["Data Augmentation", "Image Augmentation", "Image Classification", "Object Detection"], "method": ["Average Pooling", "Faster R-CNN", "Long Short-Term Memory", "Tanh Activation", "1x1 Convolution", "Region Proposal Network", "Proximal Policy Optimization", "Spatially Separable Convolution", "ResNet", "Random Horizontal Flip", "Entropy Regularization", "RoIPool", "NAS-FPN", "Convolution", "ReLU", "Residual Connection", "FPN", "AmoebaNet", "RPN", "Image Scale Augmentation", "Focal Loss", "Batch Normalization", "Residual Network", "PPO", "Kaiming Initialization", "SGD", "Step Decay", "Sigmoid Activation", "Stochastic Gradient Descent", "Softmax", "Feature Pyramid Network", "LSTM", "Bottleneck Residual Block", "RetinaNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2007", "COCO test-dev"], "metric": ["APM", "MAP", "box AP", "APL", "APS"], "title": "Learning Data Augmentation Strategies for Object Detection"} {"abstract": "We address the challenging problem of Natural Language Comprehension beyond plain-text documents by introducing the TILT neural network architecture which simultaneously learns layout information, visual features, and textual semantics. Contrary to previous approaches, we rely on a decoder capable of unifying a variety of problems involving natural language. The layout is represented as an attention bias and complemented with contextualized visual information, while the core of our model is a pretrained encoder-decoder Transformer. Our novel approach achieves state-of-the-art results in extracting information from documents and answering questions which demand layout understanding (DocVQA, CORD, WikiOps, SROIE). At the same time, we simplify the process by employing an end-to-end model.", "field": ["Semantic Segmentation Models", "Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Convolutions", "Feedforward Networks", "Transformers", "Pooling Operations", "Attention Mechanisms", "Skip Connections"], "task": ["Document Image Classification", "Key Information Extraction", "Visual Question Answering"], "method": ["U-Net", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Concatenated Skip Connection", "Convolution", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Max Pooling"], "dataset": ["DocVQA", "DocVQA test", "RVL-CDIP"], "metric": ["Accuracy", "ANLS"], "title": "Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer"} {"abstract": "Natural language understanding comprises a wide range of diverse tasks such\r\nas textual entailment, question answering, semantic similarity assessment, and\r\ndocument classification. 
Although large unlabeled text corpora are abundant,\r\nlabeled data for learning these specific tasks is scarce, making it challenging for\r\ndiscriminatively trained models to perform adequately. We demonstrate that large\r\ngains on these tasks can be realized by generative pre-training of a language model\r\non a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each\r\nspecific task. In contrast to previous approaches, we make use of task-aware input\r\ntransformations during fine-tuning to achieve effective transfer while requiring\r\nminimal changes to the model architecture. We demonstrate the effectiveness of\r\nour approach on a wide range of benchmarks for natural language understanding.\r\nOur general task-agnostic model outperforms discriminatively trained models that\r\nuse architectures specifically crafted for each task, significantly improving upon the\r\nstate of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute\r\nimprovements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on\r\nquestion answering (RACE), and 1.5% on textual entailment (MultiNLI).", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Fine-Tuning", "Skip Connections"], "task": ["Document Classification", "Language Modelling", "Natural Language Inference", "Natural Language Understanding", "Question Answering", "Semantic Similarity", "Semantic Textual Similarity"], "method": ["Weight Decay", "Cosine Annealing", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Discriminative Fine-Tuning", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "GPT", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNLI", "Story Cloze Test", "RACE", "SNLI", "SciTail"], "metric": ["RACE-m", "% Test Accuracy", "RACE", "Matched", "Parameters", "Accuracy", "Mismatched", "% Train Accuracy", "RACE-h"], "title": "Improving Language Understanding by Generative Pre-Training"} {"abstract": "To solve text-based question answering tasks that require relational\nreasoning, it is necessary to memorize a large amount of information and find\nout the question-relevant information from the memory. Most approaches were\nbased on external memory and the four components proposed by Memory Network. The\ndistinctive component among them was the way of finding the necessary\ninformation, which contributes to the performance. Recently, a simple but\npowerful neural network module for reasoning called Relation Network (RN) has\nbeen introduced. We analyzed RN from the view of Memory Network, and realized\nthat its MLP component is able to reveal the complicated relation between\nquestion and object pairs. Motivated by this, we introduce ReMO, which uses an MLP to\nfind relevant information on the Memory Network architecture. 
It shows new\nstate-of-the-art results in jointly trained bAbI-10k story-based question\nanswering tasks and bAbI dialog-based question answering tasks.", "field": ["Working Memory Models"], "task": ["Question Answering", "Relational Reasoning"], "method": ["Memory Network"], "dataset": ["bAbi"], "metric": ["Mean Error Rate"], "title": "Finding ReMO (Related Memory Object): A Simple Neural Architecture for Text based Reasoning"} {"abstract": "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level and open-vocabulary representations.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Regularization", "Learning Rate Schedules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Clinical Concept Extraction", "Drug\u2013drug Interaction Extraction", "Natural Language Inference", "Relation Extraction", "Semantic Similarity", "Tokenization"], "method": ["Weight Decay", "CharacterBERT", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Transformer", "Residual Connection", "Dense Connections", "Layer Normalization", "Label Smoothing", "GELU", "WordPiece", "Byte Pair Encoding", "BPE", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Dropout", "BERT"], "dataset": ["ChemProt", "MedNLI", "ClinicalSTS", "2010 i2b2/VA", "DDI extraction 2013 corpus"], "metric": ["Exact Span F1", "Accuracy", "Pearson Correlation", "Micro F1"], "title": "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters"} {"abstract": "Large-scale object detection datasets (e.g., MS-COCO) try to define the\nground truth bounding boxes as clear as possible. However, we observe that\nambiguities are still introduced when labeling the bounding boxes. In this\npaper, we propose a novel bounding box regression loss for learning bounding\nbox transformation and localization variance together. Our loss greatly\nimproves the localization accuracies of various architectures with nearly no\nadditional computation. The learned localization variance allows us to merge\nneighboring bounding boxes during non-maximum suppression (NMS), which further\nimproves the localization performance. On MS-COCO, we boost the Average\nPrecision (AP) of VGG-16 Faster R-CNN from 23.6% to 29.1%. 
More importantly,\nfor ResNet-50-FPN Mask R-CNN, our method improves the AP and AP90 by 1.8% and\n6.2% respectively, which significantly outperforms previous state-of-the-art\nbounding box refinement methods. Our code and models are available at:\ngithub.com/yihui-he/KL-Loss", "field": ["Object Detection Models", "Initialization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Region Proposal", "Skip Connection Blocks"], "task": ["Object Detection", "Object Localization", "Regression"], "method": ["Average Pooling", "Faster R-CNN", "1x1 Convolution", "RoIAlign", "Region Proposal Network", "ResNet", "Convolution", "RoIPool", "ReLU", "Residual Connection", "RPN", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Softmax", "Bottleneck Residual Block", "Mask R-CNN", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["PASCAL VOC 2007", "COCO test-dev"], "metric": ["box AP", "MAP"], "title": "Bounding Box Regression with Uncertainty for Accurate Object Detection"} {"abstract": "We present a novel model called OCGAN for the classical problem of one-class\nnovelty detection, where, given a set of examples from a particular class, the\ngoal is to determine if a query example is from the same class. Our solution is\nbased on learning latent representations of in-class examples using a denoising\nauto-encoder network. The key contribution of our work is our proposal to\nexplicitly constrain the latent space to exclusively represent the given class.\nIn order to accomplish this goal, firstly, we force the latent space to have\nbounded support by introducing a tanh activation in the encoder's output layer.\nSecondly, using a discriminator in the latent space that is trained\nadversarially, we ensure that encoded representations of in-class examples\nresemble uniform random samples drawn from the same bounded space. Thirdly,\nusing a second adversarial discriminator in the input space, we ensure all\nrandomly drawn latent samples generate examples that look real. Finally, we\nintroduce a gradient-descent based sampling technique that explores points in\nthe latent space that generate potential out-of-class examples, which are fed\nback to the network to further train it to generate in-class examples from\nthose points. The effectiveness of the proposed method is measured across four\npublicly available datasets using two one-class novelty detection protocols\nwhere we achieve state-of-the-art results.", "field": ["Activation Functions"], "task": ["Anomaly Detection", "Denoising"], "method": ["Tanh Activation"], "dataset": ["One-class CIFAR-10"], "metric": ["AUROC"], "title": "OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations"} {"abstract": "As a new way of training generative models, Generative Adversarial Nets (GAN)\nthat uses a discriminative model to guide the training of the generative model\nhas enjoyed considerable success in generating real-valued data. However, it\nhas limitations when the goal is for generating sequences of discrete tokens. A\nmajor reason lies in that the discrete outputs from the generative model make\nit difficult to pass the gradient update from the discriminative model to the\ngenerative model. 
Also, the discriminative model can only assess a complete\nsequence, while for a partially generated sequence, it is non-trivial to\nbalance its current score and the future one once the entire sequence has been\ngenerated. In this paper, we propose a sequence generation framework, called\nSeqGAN, to solve the problems. Modeling the data generator as a stochastic\npolicy in reinforcement learning (RL), SeqGAN bypasses the generator\ndifferentiation problem by directly performing gradient policy update. The RL\nreward signal comes from the GAN discriminator judged on a complete sequence,\nand is passed back to the intermediate state-action steps using Monte Carlo\nsearch. Extensive experiments on synthetic data and real-world tasks\ndemonstrate significant improvements over strong baselines.", "field": ["Generative Models", "Convolutions"], "task": ["Text Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["Chinese Poems", "EMNLP2017 WMT", "COCO Captions"], "metric": ["BLEU-3", "BLEU-4", "BLEU-2", "BLEU-5"], "title": "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient"} {"abstract": "This paper addresses the scalability challenge of architecture search by\nformulating the task in a differentiable manner. Unlike conventional approaches\nof applying evolution or reinforcement learning over a discrete and\nnon-differentiable search space, our method is based on the continuous\nrelaxation of the architecture representation, allowing efficient search of the\narchitecture using gradient descent. Extensive experiments on CIFAR-10,\nImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in\ndiscovering high-performance convolutional architectures for image\nclassification and recurrent architectures for language modeling, while being\norders of magnitude faster than state-of-the-art non-differentiable techniques.\nOur implementation has been made publicly available to facilitate further\nresearch on efficient architecture search algorithms.", "field": ["Neural Architecture Search"], "task": ["Image Classification", "Language Modelling", "Neural Architecture Search"], "method": ["Differentiable Architecture Search", "DARTS"], "dataset": ["NAS-Bench-201, ImageNet-16-120", "Penn Treebank (Word Level)", "CIFAR-10 Image Classification", "ImageNet"], "metric": ["Accuracy (Test)", "Percentage error", "MACs", "Validation perplexity", "Test perplexity", "Params", "Top-1 Error Rate", "Accuracy (val)", "Accuracy", "Search time (s)"], "title": "DARTS: Differentiable Architecture Search"} {"abstract": "The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies within an image, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by the non-local network are almost the same for different query positions. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further replace the one-layer transformation function of the non-local block by a two-layer bottleneck, which further reduces the parameter number considerably. 
The resulting network element, called the global context (GC) block, effectively models global context in a lightweight manner, allowing it to be applied at multiple layers of a backbone network to form a global context network (GCNet). Experiments show that GCNet generally outperforms NLNet on major benchmarks for various recognition tasks. The code and network configurations are available at https://github.com/xvjiarui/GCNet.", "field": ["Object Detection Models", "Output Functions", "Activation Functions", "Skip Connections", "Normalization", "Convolutions", "Image Feature Extractors", "Image Model Blocks"], "task": ["Instance Segmentation", "Object Detection"], "method": ["Layer Normalization", "Softmax", "Non-Local Operation", "1x1 Convolution", "ReLU", "Global Context Block", "Residual Connection", "GCNet", "Non-Local Block", "Rectified Linear Units"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["AP50", "box AP", "AP75", "mask AP"], "title": "Global Context Networks"} {"abstract": "We propose an alternative generator architecture for generative adversarial\nnetworks, borrowing from style transfer literature. The new architecture leads\nto an automatically learned, unsupervised separation of high-level attributes\n(e.g., pose and identity when trained on human faces) and stochastic variation\nin the generated images (e.g., freckles, hair), and it enables intuitive,\nscale-specific control of the synthesis. The new generator improves the\nstate-of-the-art in terms of traditional distribution quality metrics, leads to\ndemonstrably better interpolation properties, and also better disentangles the\nlatent factors of variation. To quantify interpolation quality and\ndisentanglement, we propose two new, automated methods that are applicable to\nany generator architecture. Finally, we introduce a new, highly varied and\nhigh-quality dataset of human faces.", "field": ["Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Image Generation"], "method": ["Feedforward Network", "Adam", "Convolution", "StyleGAN", "Adaptive Instance Normalization", "Leaky ReLU", "R1 Regularization", "WGAN-GP Loss", "Dense Connections"], "dataset": ["LSUN Bedroom 256 x 256", "LSUN Bedroom", "FFHQ", "CelebA-HQ 1024x1024"], "metric": ["FID-50k", "FID"], "title": "A Style-Based Generator Architecture for Generative Adversarial Networks"} {"abstract": "In autonomous driving pipelines, perception modules provide a visual understanding of the surrounding road scene. Among the perception tasks, vehicle detection is of paramount importance for a safe driving as it identifies the position of other agents sharing the road. In our work, we propose PointRGCN: a graph-based 3D object detection pipeline based on graph convolutional networks (GCNs) which operates exclusively on 3D LiDAR point clouds. To perform more accurate 3D object detection, we leverage a graph representation that performs proposal feature and context aggregation. We integrate residual GCNs in a two-stage 3D object detection pipeline, where 3D object proposals are refined using a novel graph representation. In particular, R-GCN is a residual GCN that classifies and regresses 3D proposals, and C-GCN is a contextual GCN that further refines proposals by sharing contextual information between multiple proposals. 
We integrate our refinement modules into a novel 3D detection pipeline, PointRGCN, and achieve state-of-the-art performance on the easy difficulty for the bird eye view detection task.", "field": ["Graph Models"], "task": ["3D Object Detection", "Autonomous Driving", "Object Detection"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Easy"], "metric": ["AP"], "title": "PointRGCN: Graph Convolution Networks for 3D Vehicles Detection Refinement"} {"abstract": "Electronic Health Records (EHRs) are widely applied in healthcare facilities nowadays. Due to the inherent heterogeneity, unbalanced, incompleteness, and high-dimensional nature of EHRs, it is a challenging task to employ machine learning algorithms to analyse such EHRs for prediction and diagnostics within the scope of precision medicine. Dimensionality reduction is an efficient data preprocessing technique for the analysis of high dimensional data that reduces the number of features while improving the performance of the data analysis, e.g. classification. In this paper, we propose an efficient curvature-based feature selection method for supporting more precise diagnosis. The proposed method is a filter-based feature selection method, which directly utilises the Menger Curvature for ranking all the attributes in the given data set. We evaluate the performance of our method against conventional PCA and recent ones including BPCM, GSAM, WCNN, BLS II, VIBES, 2L-MJFA, RFGA, and VAF. Our method achieves state-of-the-art performance on four benchmark healthcare data sets including CCRFDS, BCCDS, BTDS, and DRDDS with impressive 24.73% and 13.93% improvements respectively on BTDS and CCRFDS, 7.97% improvement on BCCDS, and 3.63% improvement on DRDDS. Our CFS source code is publicly available at https://github.com/zhemingzuo/CFS.", "field": ["Dimensionality Reduction"], "task": ["Breast Cancer Detection", "Breast Tissue Identification", "Cervical cancer biopsy identification", "Diabetic Retinopathy Detection", "Dimensionality Reduction", "Feature Selection"], "method": ["Principal Components Analysis", "PCA"], "dataset": ["Breast Cancer Coimbra Data Set", "Cervical Cancer (Risk Factors) Data Set", "Diabetic Retinopathy Debrecen Data Set", "Breast Tissue Data Set"], "metric": ["Mean Accuracy"], "title": "Curvature-based Feature Selection with Application in Classifying Electronic Health Records"} {"abstract": "Dense video captioning aims to localize and describe important events in untrimmed videos. Existing methods mainly tackle this task by exploiting only visual features, while completely neglecting the audio track. Only a few prior works have utilized both modalities, yet they show poor results or demonstrate the importance on a dataset with a specific domain. In this paper, we introduce Bi-modal Transformer which generalizes the Transformer architecture for a bi-modal input. We show the effectiveness of the proposed model with audio and visual modalities on the dense video captioning task, yet the module is capable of digesting any two modalities in a sequence-to-sequence task. We also show that the pre-trained bi-modal encoder as a part of the bi-modal transformer can be used as a feature extractor for a simple proposal generation module. The performance is demonstrated on a challenging ActivityNet Captions dataset where our model achieves outstanding performance. 
The code is available: v-iashin.github.io/bmt", "field": ["Regularization", "Attention Modules", "Stochastic Optimization", "Output Functions", "Activation Functions", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Dense Video Captioning", "Temporal Action Proposal Generation", "Video Captioning"], "method": ["Layer Normalization", "Transformer", "Softmax", "Adam", "Multi-Head Attention", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections", "Rectified Linear Units"], "dataset": ["ActivityNet Captions"], "metric": ["Average Precision", "METEOR", "Average F1", "BLEU-3", "Average Recall", "BLEU-4"], "title": "A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer"} {"abstract": "We present an approach to efficiently detect the 2D pose of multiple people\nin an image. The approach uses a nonparametric representation, which we refer\nto as Part Affinity Fields (PAFs), to learn to associate body parts with\nindividuals in the image. The architecture encodes global context, allowing a\ngreedy bottom-up parsing step that maintains high accuracy while achieving\nrealtime performance, irrespective of the number of people in the image. The\narchitecture is designed to jointly learn part locations and their association\nvia two branches of the same sequential prediction process. Our method placed\nfirst in the inaugural COCO 2016 keypoints challenge, and significantly exceeds\nthe previous state-of-the-art result on the MPII Multi-Person benchmark, both\nin performance and efficiency.", "field": ["Output Functions"], "task": ["Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": ["PAFs", "Part Affinity Fields"], "dataset": ["COCO", "MPII Multi-Person", "COCO test-dev"], "metric": ["ARM", "Validation AP", "APM", "AR75", "AR50", "ARL", "AP75", "AP", "APL", "mAP@0.5", "AP50", "AR"], "title": "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields"} {"abstract": "Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyperparameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and surpasses the state of the art on DMLab-30. 
It is the first agent to exceed human-level performance in 52 of the\n57 Atari games.", "field": ["Q-Learning Networks", "Convolutions", "Feedforward Networks", "Off-Policy TD Control"], "task": ["Atari Games"], "method": ["Q-Learning", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "Recurrent Experience Replay in Distributed Reinforcement Learning"} {"abstract": "Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information, which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive information with the fusion mechanism. The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge between context and glyph. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. 
Further, more experiments are conducted to investigate the influences of various components and settings in FGN.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Chinese Named Entity Recognition", "Named Entity Recognition", "Representation Learning"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["Resume NER", "MSRA", "OntoNotes 4", "Weibo NER"], "metric": ["F1"], "title": "FGN: Fusion Glyph Network for Chinese Named Entity Recognition"} {"abstract": "Aspect-level sentiment classification aims to distinguish the sentiment polarities over one or more aspect terms in a sentence. Existing approaches mostly model different aspects in one sentence independently, which ignore the sentiment dependencies between different aspects. However, we find such dependency information between different aspects can bring additional valuable information. In this paper, we propose a novel aspect-level sentiment classification model based on graph convolutional networks (GCN) which can effectively capture the sentiment dependencies between multi-aspects in one sentence. Our model firstly introduces bidirectional attention mechanism with position encoding to model aspect-specific representations between each aspect and its context words, then employs GCN over the attention mechanism to capture the sentiment dependencies between different aspects in one sentence. We evaluate the proposed approach on the SemEval 2014 datasets. Experiments show that our model outperforms the state-of-the-art methods. We also conduct experiments to evaluate the effectiveness of GCN module, which indicates that the dependencies between different aspects is highly helpful in aspect-level sentiment classification.", "field": ["Graph Models"], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Modeling Sentiment Dependencies with Graph Convolutional Networks for Aspect-level Sentiment Classification"} {"abstract": "Synthesizing high-quality images from text descriptions is a challenging\nproblem in computer vision and has many practical applications. Samples\ngenerated by existing text-to-image approaches can roughly reflect the meaning\nof the given descriptions, but they fail to contain necessary details and vivid\nobject parts. In this paper, we propose Stacked Generative Adversarial Networks\n(StackGAN) to generate 256x256 photo-realistic images conditioned on text\ndescriptions. We decompose the hard problem into more manageable sub-problems\nthrough a sketch-refinement process. The Stage-I GAN sketches the primitive\nshape and colors of the object based on the given text description, yielding\nStage-I low-resolution images. The Stage-II GAN takes Stage-I results and text\ndescriptions as inputs, and generates high-resolution images with\nphoto-realistic details. 
It is able to rectify defects in Stage-I results and\nadd compelling details with the refinement process. To improve the diversity of\nthe synthesized images and stabilize the training of the conditional-GAN, we\nintroduce a novel Conditioning Augmentation technique that encourages\nsmoothness in the latent conditioning manifold. Extensive experiments and\ncomparisons with state-of-the-arts on benchmark datasets demonstrate that the\nproposed method achieves significant improvements on generating photo-realistic\nimages conditioned on text descriptions.", "field": ["Generative Models", "Convolutions"], "task": ["Image Generation", "Text-to-Image Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["COCO", "Oxford 102 Flowers", "CUB"], "metric": ["Inception score"], "title": "StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks"} {"abstract": "Human pose estimation and semantic part segmentation are two complementary\ntasks in computer vision. In this paper, we propose to solve the two tasks\njointly for natural multi-person images, in which the estimated pose provides\nobject-level shape prior to regularize part segments while the part-level\nsegments constrain the variation of pose locations. Specifically, we first\ntrain two fully convolutional neural networks (FCNs), namely Pose FCN and Part\nFCN, to provide initial estimation of pose joint potential and semantic part\npotential. Then, to refine pose joint location, the two types of potentials are\nfused with a fully-connected conditional random field (FCRF), where a novel\nsegment-joint smoothness term is used to encourage semantic and spatial\nconsistency between parts and joints. To refine part segments, the refined pose\nand the original part potential are integrated through a Part FCN, where the\nskeleton feature from pose serves as additional regularization cues for part\nsegments. Finally, to reduce the complexity of the FCRF, we induce human\ndetection boxes and infer the graph inside each box, making the inference forty\ntimes faster.\n Since there's no dataset that contains both part segments and pose labels, we\nextend the PASCAL VOC part dataset with human pose joints and perform extensive\nexperiments to compare our method against several most recent strategies. We\nshow that on this dataset our algorithm surpasses competing methods by a large\nmargin in both tasks.", "field": ["Initialization", "Semantic Segmentation Models", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Human Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Fully Convolutional Network", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "FCN"], "dataset": ["PASCAL-Part"], "metric": ["mIoU"], "title": "Joint Multi-Person Pose Estimation and Semantic Part Segmentation"} {"abstract": "Aspect-level sentiment classification aims at detecting the sentiment expressed towards a particular target in a sentence. Based on the observation that the sentiment polarity is often related to specific spans in the given sentence, it is possible to make use of such information for better classification. 
On the other hand, such information can also serve as justifications associated with the predictions. We propose a segmentation-attention-based LSTM model which can effectively capture the structural dependencies between the target and the sentiment expressions with a linear-chain conditional random field (CRF) layer. The model simulates the human process of inferring sentiment information when reading: when given a target, humans tend to search for surrounding relevant text spans in the sentence before making an informed decision on the underlying sentiment information. We perform sentiment classification tasks on publicly available datasets of online reviews across different languages from SemEval tasks and social comments from Twitter. Extensive experiments show that our model achieves state-of-the-art performance while extracting interpretable sentiment expressions.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Learning Latent Opinions for Aspect-Level Sentiment Classification"} {"abstract": "End-to-end neural data-to-text (D2T) generation has recently emerged as an alternative to pipeline-based architectures. However, it has faced challenges in generalizing to new domains and generating semantically consistent text. In this work, we present DataTuner, a neural, end-to-end data-to-text generation system that makes minimal assumptions about the data representation and the target domain. We take a two-stage generation-reranking approach, combining a fine-tuned language model with a semantic fidelity classifier. Each of our components is learnt end-to-end without the need for dataset-specific heuristics, entity delexicalization, or post-processing. We show that DataTuner achieves state of the art results on the automated metrics across four major D2T datasets (LDC2017T10, WebNLG, ViGGO, and Cleaned E2E), with a fluency assessed by human annotators nearing or exceeding the human-written reference texts. We further demonstrate that the model-based semantic fidelity scorer in DataTuner is a better assessment tool compared to traditional, heuristic-based measures. Our generated text has a significantly better semantic fidelity than the state of the art across all four datasets.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Attention Mechanisms", "Feedforward Networks", "Transformers", "Fine-Tuning", "Skip Connections"], "task": ["Data-to-Text Generation", "Language Modelling", "Text Generation"], "method": ["Weight Decay", "Cosine Annealing", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Discriminative Fine-Tuning", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GPT-2", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["ViGGO", "WebNLG Full", "Cleaned E2E NLG Challenge", "LDC2017T10"], "metric": ["BLEU"], "title": "Have Your Text and Use It Too! 
End-to-End Neural Data-to-Text Generation with Semantic Fidelity"} {"abstract": "We describe a new training methodology for generative adversarial networks.\nThe key idea is to grow both the generator and discriminator progressively:\nstarting from a low resolution, we add new layers that model increasingly fine\ndetails as training progresses. This both speeds the training up and greatly\nstabilizes it, allowing us to produce images of unprecedented quality, e.g.,\nCelebA images at 1024^2. We also propose a simple way to increase the variation\nin generated images, and achieve a record inception score of 8.80 in\nunsupervised CIFAR10. Additionally, we describe several implementation details\nthat are important for discouraging unhealthy competition between the generator\nand discriminator. Finally, we suggest a new metric for evaluating GAN results,\nboth in terms of image quality and variation. As an additional contribution, we\nconstruct a higher-quality version of the CelebA dataset.", "field": ["Stochastic Optimization", "Activation Functions", "Loss Functions", "Normalization", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Face Generation", "Image Generation"], "method": ["Adam", "ProGAN", "Convolution", "1x1 Convolution", "Local Response Normalization", "Progressively Growing GAN", "Leaky ReLU", "WGAN-GP Loss", "Dense Connections"], "dataset": ["LSUN Cat 256 x 256", "CIFAR-10", "LSUN Churches 256 x 256", "FFHQ", "LSUN Bedroom 256 x 256", "CelebA-HQ 256x256", "CelebA-HQ 1024x1024"], "metric": ["Inception score", "FID"], "title": "Progressive Growing of GANs for Improved Quality, Stability, and Variation"} {"abstract": "To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face- and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. 
Our data, model and code are available for research at https://expose.is.tue.mpg.de .", "field": ["Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Skip Connections"], "task": ["3D Face Reconstruction", "3D Hand Pose Estimation", "3D Human Pose Estimation", "3D Human Reconstruction", "Motion Capture", "Regression"], "method": ["HRNet", "Convolution", "Batch Normalization", "1x1 Convolution", "ReLU", "Residual Connection", "Rectified Linear Units"], "dataset": ["Expressive hands and faces dataset (EHF).", "3DPW"], "metric": ["All", "PA-MPJPE"], "title": "Monocular Expressive Body Regression through Body-Driven Attention"} {"abstract": "In this paper, we address the scene segmentation task by capturing rich\ncontextual dependencies based on the selfattention mechanism. Unlike previous\nworks that capture contexts by multi-scale features fusion, we propose a Dual\nAttention Networks (DANet) to adaptively integrate local features with their\nglobal dependencies. Specifically, we append two types of attention modules on\ntop of traditional dilated FCN, which model the semantic interdependencies in\nspatial and channel dimensions respectively. The position attention module\nselectively aggregates the features at each position by a weighted sum of the\nfeatures at all positions. Similar features would be related to each other\nregardless of their distances. Meanwhile, the channel attention module\nselectively emphasizes interdependent channel maps by integrating associated\nfeatures among all channel maps. We sum the outputs of the two attention\nmodules to further improve feature representation which contributes to more\nprecise segmentation results. We achieve new state-of-the-art segmentation\nperformance on three challenging scene segmentation datasets, i.e., Cityscapes,\nPASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5%\non Cityscapes test set is achieved without using coarse data. We make the code\nand trained model publicly available at https://github.com/junfu1115/DANet", "field": ["Semantic Segmentation Models", "Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Fully Convolutional Network", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "FCN"], "dataset": ["COCO-Stuff test", "PASCAL Context", "PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)", "mIoU"], "title": "Dual Attention Network for Scene Segmentation"} {"abstract": "Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. 
For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.", "field": ["Image Models"], "task": ["Language Modelling", "Open-Domain Question Answering", "Question Answering"], "method": ["Interpretability"], "dataset": ["Natural Questions (short)"], "metric": ["F1"], "title": "REALM: Retrieval-Augmented Language Model Pre-Training"} {"abstract": "This paper introduces a network for volumetric segmentation that learns from\nsparsely annotated volumetric images. We outline two attractive use cases of\nthis method: (1) In a semi-automated setup, the user annotates some slices in\nthe volume to be segmented. The network learns from these sparse annotations\nand provides a dense 3D segmentation. (2) In a fully-automated setup, we assume\nthat a representative, sparsely annotated training set exists. Trained on this\ndata set, the network densely segments new volumetric images. The proposed\nnetwork extends the previous u-net architecture from Ronneberger et al. by\nreplacing all 2D operations with their 3D counterparts. The implementation\nperforms on-the-fly elastic deformations for efficient data augmentation during\ntraining. It is trained end-to-end from scratch, i.e., no pre-trained network\nis required. We test the performance of the proposed method on a complex,\nhighly variable 3D structure, the Xenopus kidney, and achieve good results for\nboth use cases.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Data Augmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["ShapeNet-Part"], "metric": ["Instance Average IoU"], "title": "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation"} {"abstract": "We introduce a neural network with a recurrent attention model over a\npossibly large external memory. The architecture is a form of Memory Network\n(Weston et al., 2015) but unlike the model in that work, it is trained\nend-to-end, and hence requires significantly less supervision during training,\nmaking it more generally applicable in realistic settings. It can also be seen\nas an extension of RNNsearch to the case where multiple computational steps\n(hops) are performed per output symbol. The flexibility of the model allows us\nto apply it to tasks as diverse as (synthetic) question answering and to\nlanguage modeling. For the former our approach is competitive with Memory\nNetworks, but with less supervision. For the latter, on the Penn TreeBank and\nText8 datasets our approach demonstrates comparable performance to RNNs and\nLSTMs. 
In both cases we show that the key concept of multiple computational\nhops yields improved results.", "field": ["Working Memory Models", "Output Functions"], "task": ["Language Modelling", "Question Answering"], "method": ["End-To-End Memory Network", "Softmax"], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)", "Mean Error Rate", "Accuracy (trained on 10k)"], "title": "End-To-End Memory Networks"} {"abstract": "Variations of human body skeletons may be considered as dynamic graphs, which\nare a generic data representation for numerous real-world applications. In this\npaper, we propose a spatio-temporal graph convolution (STGC) approach for\nassembling the successes of local convolutional filtering and sequence learning\nability of autoregressive moving average. To encode dynamic graphs, the\nconstructed multi-scale local graph convolution filters, consisting of matrices\nof local receptive fields and signal mappings, are recursively performed on\nstructured graph data of temporal and spatial domain. The proposed model is\ngeneric and principled as it can be generalized into other dynamic models. We\ntheoretically prove the stability of STGC and provide an upper-bound of the\nsignal transformation to be learnt. Further, the proposed recursive model can\nbe stacked into a multi-layer architecture. To evaluate our model, we conduct\nextensive experiments on four benchmark skeleton-based action datasets,\nincluding the large-scale challenging NTU RGB+D. The experimental results\ndemonstrate the effectiveness of our proposed model and the improvement over\nthe state-of-the-art.", "field": ["Convolutions"], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": ["Convolution"], "dataset": ["Florence 3D"], "metric": ["Accuracy"], "title": "Spatio-Temporal Graph Convolution for Skeleton Based Action Recognition"} {"abstract": "Human action recognition from skeleton data, fueled by the Graph Convolutional Network (GCN), has attracted lots of attention, due to its powerful capability of modeling non-Euclidean structured data. However, many existing GCN methods provide a pre-defined graph and fix it through the entire network, which can lose implicit joint correlations. Besides, the mainstream spectral GCN is approximated by one-order hop, thus higher-order connections are not well involved. Therefore, huge efforts are required to explore a better GCN architecture. To address these problems, we turn to Neural Architecture Search (NAS) and propose the first automatically designed GCN for skeleton-based action recognition. Specifically, we enrich the search space by providing multiple dynamic graph modules after fully exploring the spatial-temporal correlations between nodes. Besides, we introduce multiple-hop modules and expect to break the limitation of representational capacity caused by one-order approximation. Moreover, a sampling- and memory-efficient evolution strategy is proposed to search an optimal architecture for this task. The resulting architecture proves the effectiveness of the higher-order approximation and the dynamic graph modeling mechanism with temporal interactions, which has barely been discussed before. 
To evaluate the performance of the searched model, we conduct extensive experiments on two very large scaled datasets and the results show that our model gets the state-of-the-art results.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions", "Graph Models"], "task": ["Action Recognition", "Neural Architecture Search", "Skeleton Based Action Recognition"], "method": ["GCN", "Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Graph Convolutional Network", "Sigmoid Activation"], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Learning Graph Convolutional Network for Skeleton-based Human Action Recognition by Neural Searching"} {"abstract": "Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.", "field": ["Activation Functions", "Board Game Models", "Normalization", "Heuristic Search Algorithms", "Convolutions", "Pooling Operations", "Replay Memory", "Skip Connections", "Skip Connection Blocks"], "task": ["Atari Games", "Game of Chess", "Game of Go", "Game of Shogi"], "method": ["Average Pooling", "Prioritized Experience Replay", "Convolution", "Batch Normalization", "ReLU", "Residual Connection", "AlphaZero", "MuZero", "Monte-Carlo Tree Search", "Residual Block", "Rectified Linear Units"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model"} {"abstract": "Self-supervised learning shows great potential in monoculardepth estimation, using image sequences as the only source ofsupervision. Although people try to use the high-resolutionimage for depth estimation, the accuracy of prediction hasnot been significantly improved. In this work, we find thecore reason comes from the inaccurate depth estimation inlarge gradient regions, making the bilinear interpolation er-ror gradually disappear as the resolution increases. To obtainmore accurate depth estimation in large gradient regions, itis necessary to obtain high-resolution features with spatialand semantic information. Therefore, we present an improvedDepthNet, HR-Depth, with two effective strategies: (1) re-design the skip-connection in DepthNet to get better high-resolution features and (2) propose feature fusion Squeeze-and-Excitation(fSE) module to fuse feature more efficiently.Using Resnet-18 as the encoder, HR-Depth surpasses all pre-vious state-of-the-art(SoTA) methods with the least param-eters at both high and low resolution. Moreover, previousstate-of-the-art methods are based on fairly complex and deepnetworks with a mass of parameters which limits their realapplications. Thus we also construct a lightweight networkwhich uses MobileNetV3 as encoder. Experiments show thatthe lightweight network can perform on par with many largemodels like Monodepth2 at high-resolution with only20%parameters. All codes and models will be available at https://github.com/shawLyu/HR-Depth.", "field": ["Regularization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Depth Estimation", "Monocular Depth Estimation", "Self-Supervised Learning"], "method": ["Depthwise Convolution", "Squeeze-and-Excitation Block", "ReLU6", "Average Pooling", "Inverted Residual Block", "Hard Swish", "Batch Normalization", "Convolution", "Rectified Linear Units", "ReLU", "1x1 Convolution", "MobileNetV3", "Dropout", "Depthwise Separable Convolution", "Pointwise Convolution", "Global Average Pooling", "Dense Connections", "Sigmoid Activation"], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation"} {"abstract": "A cascade of fully convolutional neural networks is proposed to segment\nmulti-modal Magnetic Resonance (MR) images with brain tumor into background and\nthree hierarchical regions: whole tumor, tumor core and enhancing tumor core.\nThe cascade is designed to decompose the multi-class segmentation problem into\na sequence of three binary segmentation problems according to the subregion\nhierarchy. 
The whole tumor is segmented in the first step and the bounding box\nof the result is used for the tumor core segmentation in the second step. The\nenhancing tumor core is then segmented based on the bounding box of the tumor\ncore segmentation result. Our networks consist of multiple layers of\nanisotropic and dilated convolution filters, and they are combined with\nmulti-view fusion to reduce false positives. Residual connections and\nmulti-scale predictions are employed in these networks to boost the\nsegmentation performance. Experiments with BraTS 2017 validation set show that\nthe proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for\nenhancing tumor core, whole tumor and tumor core, respectively. The\ncorresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and\n0.7748, respectively.", "field": ["Convolutions"], "task": ["Brain Tumor Segmentation", "Medical Image Segmentation", "Tumor Segmentation"], "method": ["Dilated Convolution", "Convolution"], "dataset": ["BRATS-2017 val", "BRATS-2014"], "metric": ["Dice Score"], "title": "Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks"} {"abstract": "In this work, we propose \"Residual Attention Network\", a convolutional neural\nnetwork using attention mechanism which can incorporate with state-of-art feed\nforward network architecture in an end-to-end training fashion. Our Residual\nAttention Network is built by stacking Attention Modules which generate\nattention-aware features. The attention-aware features from different modules\nchange adaptively as layers going deeper. Inside each Attention Module,\nbottom-up top-down feedforward structure is used to unfold the feedforward and\nfeedback attention process into a single feedforward process. Importantly, we\npropose attention residual learning to train very deep Residual Attention\nNetworks which can be easily scaled up to hundreds of layers. Extensive\nanalyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the\neffectiveness of every module mentioned above. Our Residual Attention Network\nachieves state-of-the-art object recognition performance on three benchmark\ndatasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and\nImageNet (4.8% single model and single crop, top-5 error). Note that, our\nmethod achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69%\nforward FLOPs comparing to ResNet-200. The experiment also demonstrates that\nour network is robust against noisy labels.", "field": ["Convolutions", "Image Model Blocks"], "task": ["Image Classification", "Object Recognition"], "method": ["Spatial Attention Module", "Channel Attention Module", "Convolution"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Residual Attention Network for Image Classification"} {"abstract": "Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. 
We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.", "field": ["Output Functions", "Attention Modules", "Stochastic Optimization", "Regularization", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Entity Typing", "Language Modelling", "Named Entity Recognition", "Question Answering", "Relation Classification", "Relation Extraction"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SQuAD1.1", "TACRED", "Open Entity", "CoNLL 2003 (English)", "SQuAD2.0", "SQuAD1.1 dev"], "metric": ["EM", "Relation F1", "F1"], "title": "LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention"} {"abstract": "Previous work on automated pain detection from facial expressions has primarily focused on frame-level pain metrics based on specific facial muscle activations, such as Prkachin and Solomon Pain Intensity (PSPI). However, the current gold standard pain metric is the patient's self-reported visual analog scale (VAS) level which is a video-level measure. In this work, we propose a multitask multidimensional-pain model to directly predict VAS from video. Our model consists of three stages: (1) a VGGFace neural network model trained to predict frame-level PSPI, where multitask learning is applied, i.e. individual facial action units are predicted together with PSPI, to improve the learning of PSPI; (2) a fully connected neural network to estimate sequence-level pain scores from frame-level PSPI predictions, where again we use multitask learning to learn multidimensional pain scales instead of VAS alone; and (3) an optimal linear combination of the multidimensional pain predictions to obtain a final estimation of VAS. We show on the UNBC-McMaster Shoulder Pain dataset that our multitask multidimensional-pain method achieves state-of-the-art performance with a mean absolute error (MAE) of 1.95 and an intraclass correlation coefficient (ICC) of 0.43. While still not as good as trained human observer predictions provided with the dataset, when we average our estimates with those human estimates, our model improves their MAE from 1.76 to 1.58. 
Trained on the UNBC-McMaster dataset and applied directly with no further training or fine-tuning on a separate dataset of facial videos recorded during post-appendectomy physical exams, our model also outperforms previous work by 6% on the Area under the ROC curve metric (AUC).", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Pain Intensity Regression"], "method": ["VGG", "Softmax", "Convolution", "ReLU", "Dropout", "Dense Connections", "Rectified Linear Units", "Max Pooling"], "dataset": ["UNBC-McMaster ShoulderPain dataset"], "metric": ["MAE (VAS)"], "title": "Pain Evaluation in Video using Extended Multitask Learning from Multidimensional Measurements"} {"abstract": "We introduce a new pretraining approach for language models that are geared to support multi-document NLP tasks. Our cross-document language model (CD-LM) improves masked language modeling for these tasks with two key ideas. First, we pretrain with multiple related documents in a single input, via cross-document masking, which encourages the model to learn cross-document and long-range relationships. Second, extending the recent Longformer model, we pretrain with long contexts of several thousand tokens and introduce a new attention pattern that uses sequence-level global attention to predict masked tokens, while retaining the familiar local attention elsewhere. We show that our CD-LM sets new state-of-the-art results for several multi-text tasks, including cross-document event and entity coreference resolution, paper citation recommendation, and documents plagiarism detection, while using a significantly reduced number of training parameters relative to prior works.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Attention Patterns"], "task": ["Citation Recommendation", "Coreference Resolution", "Cross-Document Language Modeling", "Entity Cross-Document Coreference Resolution", "Event Coreference Resolution", "Event Cross-Document Coreference Resolution", "Language Modelling", "Question Answering"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Sliding Window Attention", "Softmax", "Multi-Head Attention", "Attention Dropout", "AdamW", "Longformer", "Linear Warmup With Linear Decay", "Dilated Sliding Window Attention", "Global and Sliding Window Attention", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MultiNews test", "MultiNews val"], "metric": ["Perplexity"], "title": "Cross-Document Language Modeling"} {"abstract": "Weakly Supervised Object Localization (WSOL) techniques learn the object location only using image-level labels, without location annotations. A common limitation for these techniques is that they cover only the most discriminative part of the object, not the entire object. To address this problem, we propose an Attention-based Dropout Layer (ADL), which utilizes the self-attention mechanism to process the feature maps of the model. 
The proposed method is composed of two key components: 1) hiding the most discriminative part from the model for capturing the integral extent of object, and 2) highlighting the informative region for improving the recognition power of the model. Based on extensive experiments, we demonstrate that the proposed method is effective to improve the accuracy of WSOL, achieving a new state-of-the-art localization accuracy in CUB-200-2011 dataset. We also show that the proposed method is much more efficient in terms of both parameter and computation overheads than existing techniques.", "field": ["Regularization"], "task": ["Object Localization", "Weakly-Supervised Object Localization"], "method": ["Dropout"], "dataset": [" CUB-200-2011"], "metric": ["Top-1 Error Rate"], "title": "Attention-based Dropout Layer for Weakly Supervised Object Localization"} {"abstract": "Submanifold sparse convolutional networks", "field": ["Convolutions"], "task": ["3D Semantic Segmentation", "Semantic Segmentation"], "method": ["Submanifold Convolution"], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "3D Semantic Segmentation with Submanifold Sparse Convolutional Networks"} {"abstract": "Although simple individually, artificial neurons provide state-of-the-art\nperformance when interconnected in deep networks. Unknown to many, there exists\nan arguably even simpler and more versatile learning mechanism, namely, the\nTsetlin Automaton. Merely by means of a single integer as memory, it learns the\noptimal action in stochastic environments through increment and decrement\noperations. In this paper, we introduce the Tsetlin Machine, which solves\ncomplex pattern recognition problems with easy-to-interpret propositional\nformulas, composed by a collective of Tsetlin Automata. To eliminate the\nlongstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine\norchestrates the automata using a novel game. Our theoretical analysis\nestablishes that the Nash equilibria of the game align with the propositional\nformulas that provide optimal pattern recognition accuracy. This translates to\nlearning without local optima, only global ones. We argue that the Tsetlin\nMachine finds the propositional formula that provides optimal accuracy, with\nprobability arbitrarily close to unity. In five benchmarks, the Tsetlin Machine\nprovides competitive accuracy compared with SVMs, Decision Trees, Random\nForests, Naive Bayes Classifier, Logistic Regression, and Neural Networks. The\nTsetlin Machine further has an inherent computational advantage since both\ninputs, patterns, and outputs are expressed as bits, while recognition and\nlearning rely on bit manipulation. The combination of accuracy,\ninterpretability, and computational simplicity makes the Tsetlin Machine a\npromising tool for a wide range of domains. Being the first of its kind, we\nbelieve the Tsetlin Machine will kick-start new paths of research, with a\npotentially significant impact on the AI field and the applications of AI.", "field": ["Generalized Linear Models"], "task": ["Image Classification", "Regression"], "method": ["Logistic Regression"], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic"} {"abstract": "Transformer networks have lead to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. 
The latter allows the network to capture long-term dependencies and is often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that solely consists of attention layers. More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character and word level language modeling benchmarks.", "field": ["Attention Mechanisms", "Attention Modules", "Regularization", "Stochastic Optimization"], "task": ["Language Modelling"], "method": ["AdaGrad", "All-Attention Layer", "Adaptive Masking", "Adam", "L1 Regularization"], "dataset": ["Text8", "enwik8", "WikiText-103"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity"], "title": "Augmenting Self-attention with Persistent Memory"} {"abstract": "Deep neural networks have been widely used in computer vision. There are\nseveral well trained deep neural networks for the ImageNet classification\nchallenge, which has played a significant role in image recognition. However,\nlittle work has explored pre-trained neural networks for image recognition in\ndomain adaptation. In this paper, we are the first to extract better-represented\nfeatures from a pre-trained Inception ResNet model for domain adaptation. We\nthen present a modified distribution alignment method for classification using\nthe extracted features. We test our model using three benchmark datasets\n(Office+Caltech-10, Office-31, and Office-Home). Extensive experiments\ndemonstrate significant improvements (4.8%, 5.5%, and 10%) in classification\naccuracy over the state-of-the-art.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Adaptation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Modified Distribution Alignment for Domain Adaptation with Pre-trained Inception ResNet"} {"abstract": "While deep learning has been successfully applied to many real-world computer vision tasks, training robust classifiers usually requires a large amount of well-labeled data. However, the annotation is often expensive and time-consuming. Few-shot image classification has thus been proposed to effectively use only a limited number of labeled examples to train models for new classes. Recent works based on transferable metric learning methods have achieved promising classification performance through learning the similarity between the features of samples from the query and support sets. However, few of them explicitly consider model interpretability, which can actually be revealed during the training phase. 
For that, in this work, we propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works as in a neural network as well as to find out specific regions that are related to each other in images coming from the query and support sets. Moreover, we also present a visualization strategy named Region Activation Mapping (RAM) to intuitively explain what our method has learned by visualizing intermediate variables in our network. We also present a new way to generalize the interpretability from the level of tasks to categories, which can also be viewed as a method to find the prototypical parts for supporting the final decision of our RCN. Extensive experiments on four benchmark datasets clearly show the effectiveness of our method over existing baselines.", "field": ["Image Models"], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Metric Learning"], "method": ["Interpretability"], "dataset": ["Mini-Imagenet 5-way (1-shot)", "CIFAR-FS 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Region Comparison Network for Interpretable Few-shot Image Classification"} {"abstract": "We propose a multi-region two-stream R-CNN model for action detection in realistic videos. We start from frame-level action detection based on faster R-CNN [1], and make three contributions: (1) we show that a motion region proposal network generates high-quality proposals , which are complementary to those of an appearance region proposal network; (2) we show that stacking optical flow over several frames significantly improves frame-level action detection; and (3) we embed a multi-region scheme in the faster R-CNN model, which adds complementary information on body parts. We then link frame-level detections with the Viterbi algorithm, and temporally localize an action with the maximum subarray method. Experimental results on the UCF-Sports, J-HMDB and UCF101 action detection datasets show that our approach outperforms the state of the art with a significant margin in both frame-mAP and video-mAP", "field": ["Output Functions", "RoI Feature Extractors", "Convolutions", "Region Proposal", "Object Detection Models"], "task": ["Action Detection", "Action Recognition", "Optical Flow Estimation", "Region Proposal", "Skeleton Based Action Recognition"], "method": ["RPN", "Faster R-CNN", "Softmax", "RoIPool", "Convolution", "Region Proposal Network"], "dataset": ["UCF101", "J-HMDB-21", "UCF101-24", "J-HMDB"], "metric": ["Accuracy (RGB+pose)", "3-fold Accuracy", "Frame-mAP"], "title": "Multi-region two-stream R-CNN for action detection"} {"abstract": "We present a variety of new architectural features and training procedures\nthat we apply to the generative adversarial networks (GANs) framework. We focus\non two applications of GANs: semi-supervised learning, and the generation of\nimages that humans find visually realistic. Unlike most work on generative\nmodels, our primary goal is not to train a model that assigns high likelihood\nto test data, nor do we require the model to be able to learn well without\nusing any labels. Using our new techniques, we achieve state-of-the-art results\nin semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated\nimages are of high quality as confirmed by a visual Turing test: our model\ngenerates MNIST samples that humans cannot distinguish from real data, and\nCIFAR-10 samples that yield a human error rate of 21.3%. 
We also present\nImageNet samples with unprecedented resolution and show that our methods enable\nthe model to learn recognizable features of ImageNet classes.", "field": ["Regularization", "Normalization", "Convolutions", "Generative Models", "Generative Discrimination"], "task": ["Conditional Image Generation", "Image Generation", "Semi-Supervised Image Classification"], "method": ["Generative Adversarial Network", "Minibatch Discrimination", "GAN Feature Matching", "GAN", "Batch Normalization", "Convolution", "Label Smoothing", "Virtual Batch Normalization", "Weight Normalization"], "dataset": ["SVHN", "CIFAR-10, 4000 Labels", "SVHN, 1000 labels", "CIFAR-10"], "metric": ["Percentage error", "Inception score", "Accuracy"], "title": "Improved Techniques for Training GANs"} {"abstract": "This paper describes the primary system submitted by the author to the E2E NLG Challenge on the E2E Dataset (Novikova et al. (2017)). Based on the baseline system called TGen (Dusek and Jurcicek (2016)), the primary system uses REINFORCE to utilize multiple reference for single Meaning Representation during training, while the baseline model treated them as individual training instances.", "field": ["Policy Gradient Methods"], "task": ["Data-to-Text Generation", "Text Generation"], "method": ["REINFORCE"], "dataset": ["E2E NLG Challenge"], "metric": ["NIST", "METEOR", "CIDEr", "ROUGE-L", "BLEU"], "title": "Technical Report for E2E NLG Challenge"} {"abstract": "Human parsing has recently attracted a lot of research interests due to its\nhuge application potentials. However existing datasets have limited number of\nimages and annotations, and lack the variety of human appearances and the\ncoverage of challenging cases in unconstrained environment. In this paper, we\nintroduce a new benchmark \"Look into Person (LIP)\" that makes a significant\nadvance in terms of scalability, diversity and difficulty, a contribution that\nwe feel is crucial for future developments in human-centric analysis. This\ncomprehensive dataset contains over 50,000 elaborately annotated images with 19\nsemantic part labels, which are captured from a wider range of viewpoints,\nocclusions and background complexity. Given these rich annotations we perform\ndetailed analyses of the leading human parsing approaches, gaining insights\ninto the success and failures of these methods. Furthermore, in contrast to the\nexisting efforts on improving the feature discriminative capability, we solve\nhuman parsing by exploring a novel self-supervised structure-sensitive learning\napproach, which imposes human pose structures into parsing results without\nresorting to extra supervision (i.e., no need for specifically labeling human\njoints in model training). Our self-supervised learning framework can be\ninjected into any advanced neural networks to help incorporate rich high-level\nknowledge regarding human joints from a global perspective and improve the\nparsing results. 
Extensive evaluations on our LIP and the public\nPASCAL-Person-Part dataset demonstrate the superiority of our method.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Human Parsing", "Self-Supervised Learning", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["LIP val"], "metric": ["mIoU"], "title": "Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing"} {"abstract": "Spatiotemporal and motion features are two complementary and crucial types of information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Kinetics-400", "Something-Something V2", "Jester", "HMDB-51", "Something-Something V1", "UCF101"], "metric": ["3-fold Accuracy", "Top 1 Accuracy", "Val", "Top-5 Accuracy", "Top-1 Accuracy", "Average accuracy of 3 splits", "Vid acc@1"], "title": "STM: SpatioTemporal and Motion Encoding for Action Recognition"} {"abstract": "Scale variation is one of the key challenges in object detection. In this work, we first present a controlled experiment to investigate the effect of receptive fields for scale variation in object detection. Based on the findings from the exploration experiments, we propose a novel Trident Network (TridentNet) aiming to generate scale-specific feature maps with a uniform representational power. We construct a parallel multi-branch architecture in which each branch shares the same transformation parameters but with different receptive fields. Then, we adopt a scale-aware training scheme to specialize each branch by sampling object instances of proper scales for training. 
As a bonus, a fast approximation version of TridentNet could achieve significant improvements without any additional parameters and computational cost compared with the vanilla detector. On the COCO dataset, our TridentNet with ResNet-101 backbone achieves state-of-the-art single-model results of 48.4 mAP. Codes are available at https://git.io/fj5vR.", "field": ["Image Data Augmentation", "Initialization", "Proposal Filtering", "Learning Rate Schedules", "Feature Extractors", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Object Detection Models", "Skip Connection Blocks"], "task": ["Object Detection"], "method": ["Dilated Convolution", "Average Pooling", "1x1 Convolution", "ResNet", "TridentNet", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Deformable Convolution", "Soft-NMS", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "TridentNet Block", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Scale-Aware Trident Networks for Object Detection"} {"abstract": "We present a novel boundary-aware face alignment algorithm by utilising\nboundary lines as the geometric structure of a human face to help facial\nlandmark localisation. Unlike the conventional heatmap based method and\nregression based method, our approach derives face landmarks from boundary\nlines which remove the ambiguities in the landmark definition. Three questions\nare explored and answered by this work: 1. Why using boundary? 2. How to use\nboundary? 3. What is the relationship between boundary estimation and landmarks\nlocalisation? Our boundary- aware face alignment algorithm achieves 3.49% mean\nerror on 300-W Fullset, which outperforms state-of-the-art methods by a large\nmargin. Our method can also easily integrate information from other datasets.\nBy utilising boundary information of 300-W dataset, our method achieves 3.92%\nmean error with 0.39% failure rate on COFW dataset, and 1.25% mean error on\nAFLW-Full dataset. Moreover, we propose a new dataset WFLW to unify training\nand testing across different factors, including poses, expressions,\nilluminations, makeups, occlusions, and blurriness. Dataset and model will be\npublicly available at https://wywu.github.io/projects/LAB/LAB.html", "field": ["Output Functions"], "task": ["Face Alignment", "Facial Landmark Detection", "Regression"], "method": ["Heatmap"], "dataset": ["WFLW", "300W"], "metric": ["Fullset (public)", "AUC@0.1 (all)", "AUC0.08 private", "ME (%, all) ", "FR@0.1(%, all)"], "title": "Look at Boundary: A Boundary-Aware Face Alignment Algorithm"} {"abstract": "There is large consent that successful training of deep networks requires\nmany thousand annotated training samples. In this paper, we present a network\nand training strategy that relies on the strong use of data augmentation to use\nthe available annotated samples more efficiently. The architecture consists of\na contracting path to capture context and a symmetric expanding path that\nenables precise localization. We show that such a network can be trained\nend-to-end from very few images and outperforms the prior best method (a\nsliding-window convolutional network) on the ISBI challenge for segmentation of\nneuronal structures in electron microscopic stacks. 
Using the same network\ntrained on transmitted light microscopy images (phase contrast and DIC) we won\nthe ISBI cell tracking challenge 2015 in these categories by a large margin.\nMoreover, the network is fast. Segmentation of a 512x512 image takes less than\na second on a recent GPU. The full implementation (based on Caffe) and the\ntrained networks are available at\nhttp://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Cell Segmentation", "Colorectal Gland Segmentation:", "Data Augmentation", "Electron Microscopy Image Segmentation", "Image Augmentation", "Lesion Segmentation", "Lung Nodule Segmentation", "Medical Image Segmentation", "Multi-tissue Nucleus Segmentation", "Pancreas Segmentation", "Retinal Vessel Segmentation", "Semantic Segmentation", "Skin Cancer Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["CHASE_DB1", "DIC-HeLa", "LUNA", "Anatomical Tracings of Lesions After Stroke (ATLAS) ", "SkyScapes-Dense", "PhC-U373", "ISBI 2012 EM Segmentation", "SNEMI3D", "STARE", "TCIA Pancreas-CT Dataset", "DRIVE", "Kvasir-SEG", "CVC-ClinicDB", "CT-150", "Kaggle Skin Lesion Segmentation", "Kvasir-Instrument", "CRAG", "Kumar", "RITE"], "metric": ["max E-Measure", "Recall", "S-Measure", "mean Dice", "F1 score", "Mean IoU", "Average MAE", "Precision", "Dice", "Jaccard Index", "mIoU", "Hausdorff Distance (mm)", "Dice Score", "DSC", "IoU", "Warping Error", "AUC", "F1-score"], "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"} {"abstract": "The transformer-based pre-trained language model BERT has helped to improve state-of-the-art performance on many natural language processing (NLP) tasks. Using the same architecture and parameters, we developed and evaluated a monolingual Dutch BERT model called BERTje. Compared to the multilingual BERT model, which includes Dutch but is only based on Wikipedia text, BERTje is based on a large and diverse dataset of 2.4 billion tokens. BERTje consistently outperforms the equally-sized multilingual BERT model on downstream NLP tasks (part-of-speech tagging, named-entity recognition, semantic role labeling, and sentiment analysis). Our pre-trained Dutch BERT model is made available at https://github.com/wietsedv/bertje.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Language Modelling", "Named Entity Recognition", "Part-Of-Speech Tagging", "Semantic Role Labeling", "Sentiment Analysis"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["DBRD"], "metric": ["Accuracy"], "title": "BERTje: A Dutch BERT Model"} {"abstract": "Purpose: Lung nodules have very diverse shapes and sizes, which makes\nclassifying them as benign/malignant a challenging problem. 
In this paper, we\npropose a novel method to predict the malignancy of nodules that have the\ncapability to analyze the shape and size of a nodule using a global feature\nextractor, as well as the density and structure of the nodule using a local\nfeature extractor. Methods: We propose to use Residual Blocks with a 3x3 kernel\nsize for local feature extraction, and Non-Local Blocks to extract the global\nfeatures. The Non-Local Block has the ability to extract global features\nwithout using a huge number of parameters. The key idea behind the Non-Local\nBlock is to apply matrix multiplications between features on the same feature\nmaps. Results: We trained and validated the proposed method on the LIDC-IDRI\ndataset which contains 1,018 computed tomography (CT) scans. We followed a\nrigorous procedure for experimental setup namely, 10-fold cross-validation and\nignored the nodules that had been annotated by less than 3 radiologists. The\nproposed method achieved state-of-the-art results with AUC=95.62%, while\nsignificantly outperforming other baseline methods. Conclusions: Our proposed\nDeep Local-Global network has the capability to accurately extract both local\nand global features. Our new method outperforms state-of-the-art architecture\nincluding Densenet and Resnet with transfer learning.", "field": ["Initialization", "Regularization", "Output Functions", "Convolutional Neural Networks", "Image Feature Extractors", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Computed Tomography (CT)", "Lung Nodule Classification", "Transfer Learning"], "method": ["Average Pooling", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Dense Block", "Non-Local Operation", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Non-Local Block", "Softmax", "Concatenated Skip Connection", "Bottleneck Residual Block", "Dropout", "DenseNet", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["LIDC-IDRI"], "metric": ["Accuracy(10-fold)", "AUC", "Accuracy"], "title": "Lung Nodule Classification using Deep Local-Global Networks"} {"abstract": "Gastrointestinal (GI) pathologies are periodically screened, biopsied, and resected using surgical tools. Usually the procedures and the treated or resected areas are not specifically tracked or analysed during or after colonoscopies. Information regarding disease borders, development and amount and size of the resected area get lost. This can lead to poor follow-up and bothersome reassessment difficulties post-treatment. To improve the current standard and also to foster more research on the topic we have released the ``Kvasir-Instrument'' dataset which consists of $590$ annotated frames containing GI procedure tools such as snares, balloons and biopsy forceps, etc. Beside of the images, the dataset includes ground truth masks and bounding boxes and has been verified by two expert GI endoscopists. Additionally, we provide a baseline for the segmentation of the GI tools to promote research and algorithm development. We obtained a dice coefficient score of 0.9158 and a Jaccard index of 0.8578 using a classical U-Net architecture. A similar dice coefficient score was observed for DoubleUNet. 
The qualitative results showed that the model did not work for the images with specularity and the frames with multiple instruments, while the best result for both methods was observed on all other types of images. Both qualitative and quantitative results show that the model performs reasonably well, but there is a large potential for further improvements. Benchmarking using the dataset provides an opportunity for researchers to contribute to the field of automatic endoscopic diagnostic and therapeutic tool segmentation for GI endoscopy.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Medical Image Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["Kvasir-Instrument"], "metric": ["DSC"], "title": "Kvasir-Instrument: Diagnostic and therapeutic tool segmentation dataset in gastrointestinal endoscopy"} {"abstract": "We propose a unified Generative Adversarial Network (GAN) for controllable image-to-image translation, i.e., transferring an image from a source to a target domain guided by controllable structures. In addition to conditioning on a reference image, we show how the model can generate images conditioned on controllable structures, e.g., class labels, object keypoints, human skeletons, and scene semantic maps. The proposed model consists of a single generator and a discriminator taking a conditional image and the target controllable structure as input. In this way, the conditional image can provide appearance information and the controllable structure can provide the structure information for generating the target result. Moreover, our model learns the image-to-image mapping through three novel losses, i.e., color loss, controllable structure guided cycle-consistency loss, and controllable structure guided self-content preserving loss. Also, we present the Fréchet ResNet Distance (FRD) to evaluate the quality of the generated images. Experiments on two challenging image translation tasks, i.e., hand gesture-to-gesture translation and cross-view image translation, show that our model generates convincing results, and significantly outperforms other state-of-the-art methods on both tasks. Meanwhile, the proposed framework is a unified solution, thus it can be applied to solving other controllable structure guided image translation tasks such as landmark guided facial expression translation and keypoint guided person image generation. To the best of our knowledge, we are the first to make one GAN framework work on all such controllable structure guided image translation tasks. 
Code is available at https://github.com/Ha0Tang/GestureGAN.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Facial Expression Translation", "Gesture-to-Gesture Translation", "Image Generation", "Image-to-Image Translation"], "method": ["ResNet", "Generative Adversarial Network", "Average Pooling", "GAN", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["NTU Hand Digit", "cvusa", "Senz3D", "Dayton (64x64) - ground-to-aerial", "Dayton (64\u00d764) - aerial-to-ground", "Dayton (256\u00d7256) - aerial-to-ground"], "metric": ["PSNR", "FID", "KL", "LPIPS", "SD", "SSIM", "FRD", "AMT", "IS"], "title": "Unified Generative Adversarial Networks for Controllable Image-to-Image Translation"} {"abstract": "Generative Adversarial Networks (GANs) are powerful generative models, but\nsuffer from training instability. The recently proposed Wasserstein GAN (WGAN)\nmakes progress toward stable training of GANs, but sometimes can still generate\nonly low-quality samples or fail to converge. We find that these problems are\noften due to the use of weight clipping in WGAN to enforce a Lipschitz\nconstraint on the critic, which can lead to undesired behavior. We propose an\nalternative to clipping weights: penalize the norm of gradient of the critic\nwith respect to its input. Our proposed method performs better than standard\nWGAN and enables stable training of a wide variety of GAN architectures with\nalmost no hyperparameter tuning, including 101-layer ResNets and language\nmodels over discrete data. We also achieve high quality generations on CIFAR-10\nand LSUN bedrooms.", "field": ["Initialization", "Convolutional Neural Networks", "Stochastic Optimization", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Pooling Operations", "Skip Connections", "Generative Adversarial Networks", "Skip Connection Blocks"], "task": ["Conditional Image Generation", "Image Generation", "Synthetic Data Generation"], "method": ["Average Pooling", "RMSProp", "Adam", "1x1 Convolution", "ResNet", "WGAN GP", "Convolution", "ReLU", "Residual Connection", "Leaky ReLU", "WGAN-GP Loss", "Layer Normalization", "Wasserstein GAN (Gradient Penalty)", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CAT 256x256", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Improved Training of Wasserstein GANs"} {"abstract": "We propose a novel memory network model named Read-Write Memory Network\n(RWMN) to perform question and answering tasks for large-scale, multimodal\nmovie story understanding. The key focus of our RWMN model is to design the\nread network and the write network that consist of multiple convolutional\nlayers, which enable memory read and write operations to have high capacity and\nflexibility. 
While existing memory-augmented network models treat each memory\nslot as an independent block, our use of multi-layered CNNs allows the model to\nread and write sequential memory cells as chunks, which is more reasonable to\nrepresent a sequential story because adjacent memory blocks often have strong\ncorrelations. For evaluation, we apply our model to all the six tasks of the\nMovieQA benchmark, and achieve the best accuracies on several tasks, especially\non the visual QA task. Our model shows a potential to better understand not\nonly the content in the story, but also more abstract information, such as\nrelationships between characters and the reasons for their actions.", "field": ["Working Memory Models"], "task": ["Video Story QA"], "method": ["Memory Network"], "dataset": ["MovieQA"], "metric": ["Accuracy"], "title": "A Read-Write Memory Network for Movie Story Understanding"} {"abstract": "This paper describes our system (HIT-SCIR) for CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing. We extended the basic transition-based parser with two improvements: a) Efficient Training by realizing Stack LSTM parallel training; b) Effective Encoding via adopting deep contextualized word embeddings BERT. Generally, we proposed a unified pipeline to meaning representation parsing, including framework-specific transition-based parsers, BERT-enhanced word representation, and post-processing. In the final evaluation, our system was ranked first according to ALL-F1 (86.2{\\%}) and especially ranked first in UCCA framework (81.67{\\%}).", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Recurrent Neural Networks", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["UCCA Parsing", "Word Embeddings"], "method": ["Weight Decay", "Adam", "Long Short-Term Memory", "Tanh Activation", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "Sigmoid Activation", "WordPiece", "Softmax", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "LSTM", "Dropout", "BERT"], "dataset": ["CoNLL 2019"], "metric": ["Full UCCA F1", "LPP UCCA F1", "LPP MRP F1", "Full MRP F1"], "title": "HIT-SCIR at MRP 2019: A Unified Pipeline for Meaning Representation Parsing via Efficient Training and Effective Encoding"} {"abstract": "Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from\na density model, to generalize count-based exploration to non-tabular\nreinforcement learning. This pseudo-count was used to generate an exploration\nbonus for a DQN agent and combined with a mixed Monte Carlo update was\nsufficient to achieve state of the art on the Atari 2600 game Montezuma's\nRevenge. We consider two questions left open by their work: First, how\nimportant is the quality of the density model for exploration? Second, what\nrole does the Monte Carlo update play in exploration? We answer the first\nquestion by demonstrating the use of PixelCNN, an advanced neural density model\nfor images, to supply a pseudo-count. In particular, we examine the intrinsic\ndifficulties in adapting Bellemare et al.'s approach when assumptions about the\nmodel are violated. The result is a more practical and general algorithm\nrequiring no special apparatus. 
We combine PixelCNN pseudo-counts with\ndifferent agent architectures to dramatically improve the state of the art on\nseveral hard Atari games. One surprising finding is that the mixed Monte Carlo\nupdate is a powerful facilitator of exploration in the sparsest of settings,\nincluding Montezuma's Revenge.", "field": ["Q-Learning Networks", "Off-Policy TD Control", "Convolutions", "Feedforward Networks", "Generative Models"], "task": ["Atari Games", "Montezuma's Revenge"], "method": ["PixelCNN", "Q-Learning", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Venture", "Atari 2600 Private Eye", "Atari 2600 Montezuma's Revenge", "Atari 2600 Freeway", "Atari 2600 Gravitar"], "metric": ["Score"], "title": "Count-Based Exploration with Neural Density Models"} {"abstract": "We propose a new approach, called self-motivated pyramid curriculum domain adaptation (PyCDA), to facilitate the adaptation of semantic segmentation neural networks from synthetic source domains to real target domains. Our approach draws on an insight connecting two existing works: curriculum domain adaptation and self-training. Inspired by the former, PyCDA constructs a pyramid curriculum which contains various properties about the target domain. Those properties are mainly about the desired label distributions over the target domain images, image regions, and pixels. By enforcing the segmentation neural network to observe those properties, we can improve the network's generalization capability to the target domain. Motivated by the self-training, we infer this pyramid of properties by resorting to the semantic segmentation network itself. Unlike prior work, we do not need to maintain any additional models (e.g., logistic regression or discriminator networks) or to solve minmax problems which are often difficult to optimize. We report state-of-the-art results for the adaptation from both GTAV and SYNTHIA to Cityscapes, two popular settings in unsupervised domain adaptation for semantic segmentation.", "field": ["Initialization", "Generalized Linear Models", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Domain Adaptation", "Regression", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": ["Logistic Regression", "ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach"} {"abstract": "Recent advances in 3D fully convolutional networks (FCN) have made it\nfeasible to produce dense voxel-wise predictions of volumetric images. In this\nwork, we show that a multi-class 3D FCN trained on manually labeled CT scans of\nseveral anatomical structures (ranging from the large organs to thin vessels)\ncan achieve competitive segmentation results, while avoiding the need for\nhandcrafting features or training class-specific models.\n To this end, we propose a two-stage, coarse-to-fine approach that will first\nuse a 3D FCN to roughly define a candidate region, which will then be used as\ninput to a second 3D FCN. 
This reduces the number of voxels the second FCN has\nto classify to ~10% and allows it to focus on more detailed segmentation of the\norgans and vessels.\n We utilize training and validation sets consisting of 331 clinical CT images\nand test our models on a completely unseen data collection acquired at a\ndifferent hospital that includes 150 CT scans, targeting three anatomical\norgans (liver, spleen, and pancreas). In challenging organs such as the\npancreas, our cascaded approach improves the mean Dice score from 68.5 to\n82.2%, achieving the highest reported average score on this dataset. We compare\nwith a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and\nachieve a significantly higher performance in small organs and vessels.\nFurthermore, we explore fine-tuning our models to different datasets.\n Our experiments illustrate the promise and robustness of current 3D FCN based\nsemantic segmentation of medical images, achieving state-of-the-art results.\nOur code and trained models are available for download:\nhttps://github.com/holgerroth/3Dunet_abdomen_cascade.", "field": ["Convolutions", "Pooling Operations", "Semantic Segmentation Models"], "task": ["3D Medical Imaging Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": ["Fully Convolutional Network", "FCN", "Max Pooling", "Convolution"], "dataset": ["TCIA Pancreas-CT"], "metric": ["Dice Score"], "title": "An application of cascaded 3D fully convolutional networks for medical image segmentation"} {"abstract": "Future video prediction is an ill-posed Computer Vision problem that recently\nreceived much attention. Its main challenges are the high variability in video\ncontent, the propagation of errors through time, and the non-specificity of the\nfuture frames: given a sequence of past frames there is a continuous\ndistribution of possible futures. This work introduces bijective Gated\nRecurrent Units, a double mapping between the input and output of a GRU layer.\nThis allows for recurrent auto-encoders with state sharing between encoder and\ndecoder, stratifying the sequence representation and helping to prevent\ncapacity problems. We show how with this topology only the encoder or decoder\nneeds to be applied for input encoding and prediction, respectively. This\nreduces the computational cost and avoids re-encoding the predictions when\ngenerating a sequence of frames, mitigating the propagation of errors.\nFurthermore, it is possible to remove layers from an already trained model,\ngiving an insight into the role performed by each layer and making the model more\nexplainable. We evaluate our approach on three video datasets, outperforming\nstate of the art prediction results on MMNIST and UCF101, and obtaining\ncompetitive results on KTH with 2 and 3 times less memory usage and\ncomputational cost than the best scored approach.", "field": ["Recurrent Neural Networks"], "task": ["Video Prediction"], "method": ["Gated Recurrent Unit", "GRU"], "dataset": ["Human3.6M"], "metric": ["MAE", "SSIM", "MSE"], "title": "Folded Recurrent Neural Networks for Future Video Prediction"} {"abstract": "Although Person Re-Identification has made impressive progress, difficult cases like occlusion, change of view-point and similar clothing still bring great challenges. Besides overall visual features, matching and comparing detailed information is also essential for tackling these challenges. 
This paper proposes two key recognition patterns to better utilize the detail information of pedestrian images, that most of the existing methods are unable to satisfy. Firstly, Visual Clue Alignment requires the model to select and align decisive regions pairs from two images for pair-wise comparison, while existing methods only align regions with predefined rules like high feature similarity or same semantic labels. Secondly, the Conditional Feature Embedding requires the overall feature of a query image to be dynamically adjusted based on the gallery image it matches, while most of the existing methods ignore the reference images. By introducing novel techniques including correspondence attention module and discrepancy-based GCN, we propose an end-to-end ReID method that integrates both patterns into a unified framework, called CACE-Net((C)lue(A)lignment and (C)onditional (E)mbedding). The experiments show that CACE-Net achieves state-of-the-art performance on three public datasets.", "field": ["Graph Models"], "task": ["Person Re-Identification"], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["MSMT17", "Market-1501", "DukeMTMC-reID"], "metric": ["Rank-1", "mAP", "MAP"], "title": "Devil's in the Details: Aligning Visual Clues for Conditional Embedding in Person Re-Identification"} {"abstract": "In this paper we propose Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer. Flowtron borrows insights from IAF and revamps Tacotron in order to provide high-quality and expressive mel-spectrogram synthesis. Flowtron is optimized by maximizing the likelihood of the training data, which makes training simple and stable. Flowtron learns an invertible mapping of data to a latent space that can be manipulated to control many aspects of speech synthesis (pitch, tone, speech rate, cadence, accent). Our mean opinion scores (MOS) show that Flowtron matches state-of-the-art TTS models in terms of speech quality. In addition, we provide results on control of speech variation, interpolation between samples and style transfer between speakers seen and unseen during training. Code and pre-trained models will be made publicly available at https://github.com/NVIDIA/flowtron", "field": ["Temporal Convolutions", "Output Functions", "Regularization", "Recurrent Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Text-to-Speech Models", "Feedforward Networks", "Generative Audio Models", "Attention Mechanisms", "Bidirectional Recurrent Neural Networks"], "task": ["Speech Synthesis", "Style Transfer", "Text-To-Speech Synthesis"], "method": ["WaveNet", "Tacotron2", "Zoneout", "Dilated Causal Convolution", "Long Short-Term Memory", "BiLSTM", "Convolution", "Batch Normalization", "ReLU", "Bidirectional LSTM", "Mixture of Logistic Distributions", "LSTM", "Linear Layer", "Tacotron 2", "Dropout", "Location Sensitive Attention", "Rectified Linear Units"], "dataset": ["LJSpeech"], "metric": ["Pleasantness MOS"], "title": "Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis"} {"abstract": "Learning to capture long-range relations is fundamental to image/video\nrecognition. Existing CNN models generally rely on increasing depth to model\nsuch relations which is highly inefficient. 
In this work, we propose the\n\"double attention block\", a novel component that aggregates and propagates\ninformative global features from the entire spatio-temporal space of input\nimages/videos, enabling subsequent convolution layers to access features from\nthe entire space efficiently. The component is designed with a double attention\nmechanism in two steps, where the first step gathers features from the entire\nspace into a compact set through second-order attention pooling and the second\nstep adaptively selects and distributes features to each location via another\nattention. The proposed double attention block is easy to adopt and can be\nplugged into existing deep neural networks conveniently. We conduct extensive\nablation studies and experiments on both image and video recognition tasks for\nevaluating its performance. On the image recognition task, a ResNet-50 equipped\nwith our double attention blocks outperforms a much larger ResNet-152\narchitecture on ImageNet-1k dataset with over 40% fewer parameters\nand fewer FLOPs. On the action recognition task, our proposed model achieves\nstate-of-the-art results on the Kinetics and UCF-101 datasets with\nsignificantly higher efficiency than recent works.", "field": ["Convolutions"], "task": ["3D Absolute Human Pose Estimation", "Action Classification", "Action Recognition", "Temporal Action Localization", "Video Recognition"], "method": ["Convolution"], "dataset": ["Kinetics-400", "UCF101"], "metric": ["Vid acc@5", "3-fold Accuracy", "Vid acc@1"], "title": "$A^2$-Nets: Double Attention Networks"} {"abstract": "This is an official pytorch implementation of Deep High-Resolution\nRepresentation Learning for Human Pose Estimation. In this work, we are\ninterested in the human pose estimation problem with a focus on learning\nreliable high-resolution representations. Most existing methods recover\nhigh-resolution representations from low-resolution representations produced by\na high-to-low resolution network. Instead, our proposed network maintains\nhigh-resolution representations through the whole process. We start from a\nhigh-resolution subnetwork as the first stage, gradually add high-to-low\nresolution subnetworks one by one to form more stages, and connect the\nmulti-resolution subnetworks in parallel. We conduct repeated multi-scale\nfusions such that each of the high-to-low resolution representations receives\ninformation from other parallel representations over and over, leading to rich\nhigh-resolution representations. As a result, the predicted keypoint heatmap is\npotentially more accurate and spatially more precise. We empirically\ndemonstrate the effectiveness of our network through the superior pose\nestimation results over two benchmark datasets: the COCO keypoint detection\ndataset and the MPII Human Pose dataset. 
The code and models have been publicly\navailable at\n\\url{https://github.com/leoxiaobin/deep-high-resolution-net.pytorch}.", "field": ["Output Functions"], "task": ["Instance Segmentation", "Keypoint Detection", "Multi-Person Pose Estimation", "Object Detection", "Pose Estimation", "Pose Tracking", "Representation Learning"], "method": ["Heatmap"], "dataset": ["COCO", "PoseTrack2017", "COCO minival", "MPII Human Pose", "COCO test-dev"], "metric": ["Test AP", "MOTA", "Validation AP", "APM", "PCKh-0.5", "MAP", "mask AP", "AP75", "AP", "APL", "AP50", "AR"], "title": "Deep High-Resolution Representation Learning for Human Pose Estimation"} {"abstract": "We introduce OmniSource, a novel framework for leveraging web data to train video recognition models. OmniSource overcomes the barriers between data formats, such as images, short videos, and long untrimmed videos for webly-supervised learning. First, data samples with multiple formats, curated by task-specific data collection and automatically filtered by a teacher model, are transformed into a unified form. Then a joint-training strategy is proposed to deal with the domain gaps between multiple data sources and formats in webly-supervised learning. Several good practices, including data balancing, resampling, and cross-dataset mixup are adopted in joint training. Experiments show that by utilizing data from multiple sources and formats, OmniSource is more data-efficient in training. With only 3.5M images and 800K minutes videos crawled from the internet without human labeling (less than 2% of prior works), our models learned with OmniSource improve Top-1 accuracy of 2D- and 3D-ConvNet baseline models by 3.0% and 3.9%, respectively, on the Kinetics-400 benchmark. With OmniSource, we establish new records with different pretraining strategies for video recognition. Our best models achieve 80.4%, 80.5%, and 83.6 Top-1 accuracies on the Kinetics-400 benchmark respectively for training-from-scratch, ImageNet pre-training and IG-65M pre-training.", "field": ["Image Data Augmentation"], "task": ["Action Classification", "Action Recognition", "Video Recognition"], "method": ["Mixup"], "dataset": ["Kinetics-400", "UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Vid acc@1"], "title": "Omni-sourced Webly-supervised Learning for Video Recognition"} {"abstract": "Convolutional neural network (CNN) is a neural network that can make use of\nthe internal structure of data such as the 2D structure of image data. This\npaper studies CNN on text categorization to exploit the 1D structure (namely,\nword order) of text data for accurate prediction. Instead of using\nlow-dimensional word vectors as input as is often done, we directly apply CNN\nto high-dimensional text data, which leads to directly learning embedding of\nsmall text regions for use in classification. In addition to a straightforward\nadaptation of CNN from image to text, a simple but new variation which employs\nbag-of-word conversion in the convolution layer is proposed. An extension to\ncombine multiple convolution layers is also explored for higher accuracy. 
The\nexperiments demonstrate the effectiveness of our approach in comparison with\nstate-of-the-art methods.", "field": ["Convolutions"], "task": ["Sentiment Analysis"], "method": ["Convolution"], "dataset": ["IMDb"], "metric": ["Accuracy"], "title": "Effective Use of Word Order for Text Categorization with Convolutional Neural Networks"} {"abstract": "Humans can only interact with part of the surrounding environment due to biological restrictions. Therefore, we learn to reason the spatial relationships across a series of observations to piece together the surrounding environment. Inspired by such behavior and the fact that machines also have computational constraints, we propose \\underline{CO}nditional \\underline{CO}ordinate GAN (COCO-GAN) of which the generator generates images by parts based on their spatial coordinates as the condition. On the other hand, the discriminator learns to justify realism across multiple assembled patches by global coherence, local appearance, and edge-crossing continuity. Although the full images are never generated during training, we show that COCO-GAN can produce \\textbf{state-of-the-art-quality} full images during inference. We further demonstrate a variety of novel applications enabled by teaching the network to be aware of coordinates. First, we perform extrapolation to the learned coordinate manifold and generate off-the-boundary patches. Combining with the originally generated full image, COCO-GAN can produce images that are larger than training samples, which we called \"beyond-boundary generation\". We then showcase panorama generation within a cylindrical coordinate system that inherently preserves horizontally cyclic topology. On the computation side, COCO-GAN has a built-in divide-and-conquer paradigm that reduces memory requisition during training and inference, provides high-parallelism, and can generate parts of images on-demand.", "field": ["Generative Models", "Convolutions"], "task": ["Face Generation", "Image Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["CelebA-HQ 128x128", "CelebA-HQ 64x64", "LSUN Bedroom 256 x 256", "CelebA 128 x 128", "CelebA-HQ 1024x1024"], "metric": ["FID"], "title": "COCO-GAN: Generation by Parts via Conditional Coordinating"} {"abstract": "Training heuristics greatly improve various image classification model\naccuracies~\\cite{he2018bag}. Object detection models, however, have more\ncomplex neural network structures and optimization targets. The training\nstrategies and pipelines dramatically vary among different models. In this\nwork, we explore training tweaks that apply to various models including Faster\nR-CNN and YOLOv3. These tweaks do not change the model architectures;\ntherefore, the inference costs remain the same. 
Our empirical results\ndemonstrate that, however, these freebies can improve up to 5% absolute\nprecision compared to state-of-the-art baselines.", "field": ["Generalized Linear Models", "Output Functions", "Convolutional Neural Networks", "Normalization", "Convolutions", "Clustering", "Pooling Operations", "Skip Connections", "Object Detection Models"], "task": ["Image Classification", "Object Detection"], "method": ["Logistic Regression", "k-Means Clustering", "YOLOv3", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "Residual Connection", "Darknet-53", "Global Average Pooling"], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Bag of Freebies for Training Object Detection Neural Networks"} {"abstract": "We study the performance of customer intent classifiers designed to predict the most popular intent received through ASOS.com Customer Care Department, namely \u201cWhere is my order?\u201d. These queries are characterised by the use of colloquialism, label noise and short message length. We conduct extensive experiments with two well established classification models: logistic regression via n-grams to account for sequences in the data and recurrent neural networks that perform the extraction of these sequential patterns automatically. Maintaining the embedding layer fixed to GloVe coordinates, a Mann-Whitney U test indicated that the F1 score on a held out set of messages was lower for recurrent neural network classifiers than for linear n-grams classifiers (M1=0.828, M2=0.815; U=1,196, P=1.46e-20), unless all layers were jointly trained with all other network parameters (M1=0.831, M2=0.828, U=4,280, P=8.24e-4). This plain neural network produced top performance on a denoised set of labels (0.887 F1) matching with Human annotators (0.889 F1) and superior to linear classifiers (0.865 F1). Calibrating these models to achieve precision levels above Human performance (0.93 Precision), our results indicate a small difference in Recall of 0.05 for the plain neural networks (training under 1hr), and 0.07 for the linear n-grams (training under 10min), revealing the latter as a judicious choice of model architecture in modern AI production systems.", "field": ["Generalized Linear Models", "Word Embeddings"], "task": ["English Conversational Speech Recognition", "Intent Detection", "Regression", "Text Classification"], "method": ["Logistic Regression", "GloVe", "GloVe Embeddings"], "dataset": ["ASOS.com user intent"], "metric": ["F1"], "title": "\u201cWhere is My Parcel?\u201d Fast and Efficient Classifiers to Detect User Intent in Natural Language"} {"abstract": "Following the advance of style transfer with Convolutional Neural Networks\n(CNNs), the role of styles in CNNs has drawn growing attention from a broader\nperspective. In this paper, we aim to fully leverage the potential of styles to\nimprove the performance of CNNs in general vision tasks. We propose a\nStyle-based Recalibration Module (SRM), a simple yet effective architectural\nunit, which adaptively recalibrates intermediate feature maps by exploiting\ntheir styles. SRM first extracts the style information from each channel of the\nfeature maps by style pooling, then estimates per-channel recalibration weight\nvia channel-independent style integration. By incorporating the relative\nimportance of individual styles into feature maps, SRM effectively enhances the\nrepresentational ability of a CNN. 
The proposed module is directly fed into\nexisting CNN architectures with negligible overhead. We conduct comprehensive\nexperiments on general image recognition as well as tasks related to styles,\nwhich verify the benefit of SRM over recent approaches such as\nSqueeze-and-Excitation (SE). To explain the inherent difference between SRM and\nSE, we provide an in-depth comparison of their representational properties.", "field": ["Image Data Augmentation", "Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks", "Skip Connection Blocks"], "task": ["Image Classification", "Style Transfer"], "method": ["Weight Decay", "Average Pooling", "1x1 Convolution", "Style-based Recalibration Module", "ResNet", "Instance Normalization", "Random Horizontal Flip", "Convolution", "ReLU", "Residual Connection", "Dense Connections", "Residual SRM", "Random Resized Crop", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "Sigmoid Activation", "SGD with Momentum", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "CIFAR-10"], "metric": ["Percentage correct", "Top 1 Accuracy"], "title": "SRM : A Style-based Recalibration Module for Convolutional Neural Networks"} {"abstract": "The analysis of glandular morphology within colon histopathology images is an\nimportant step in determining the grade of colon cancer. Despite the importance\nof this task, manual segmentation is laborious, time-consuming and can suffer\nfrom subjectivity among pathologists. The rise of computational pathology has\nled to the development of automated methods for gland segmentation that aim to\novercome the challenges of manual segmentation. However, this task is\nnon-trivial due to the large variability in glandular appearance and the\ndifficulty in differentiating between certain glandular and non-glandular\nhistological structures. Furthermore, a measure of uncertainty is essential for\ndiagnostic decision making. To address these challenges, we propose a fully\nconvolutional neural network that counters the loss of information caused by\nmax-pooling by re-introducing the original image at multiple points within the\nnetwork. We also use atrous spatial pyramid pooling with varying dilation rates\nfor preserving the resolution and multi-level aggregation. To incorporate\nuncertainty, we introduce random transformations during test time for an\nenhanced segmentation result that simultaneously generates an uncertainty map,\nhighlighting areas of ambiguity. We show that this map can be used to define a\nmetric for disregarding predictions with high uncertainty. The proposed network\nachieves state-of-the-art performance on the GlaS challenge dataset and on a\nsecond independent colorectal adenocarcinoma dataset. In addition, we perform\ngland instance segmentation on whole-slide images from two further datasets to\nhighlight the generalisability of our method. 
As an extension, we introduce\nMILD-Net+ for simultaneous gland and lumen segmentation, to increase the\ndiagnostic power of the network.", "field": ["Pooling Operations"], "task": ["Colorectal Gland Segmentation:", "Decision Making", "Instance Segmentation", "Semantic Segmentation", "whole slide images"], "method": ["Spatial Pyramid Pooling"], "dataset": ["CRAG"], "metric": ["F1-score", "Hausdorff Distance (mm)", "Dice"], "title": "MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images"} {"abstract": "This paper proposes a new graph convolutional neural network architecture based on a depth-based representation of graph structure deriving from quantum walks, which we refer to as the quantum-based subgraph convolutional neural network (QS-CNNs). This new architecture captures both the global topological structure and the local connectivity structure within a graph. Specifically, we commence by establishing a family of K-layer expansion subgraphs for each vertex of a graph by quantum walks, which captures the global topological arrangement information for substructures contained within a graph. We then design a set of fixed-size convolution filters over the subgraphs, which helps to characterise multi-scale patterns residing in the data. The idea is to apply convolution filters sliding over the entire set of subgraphs rooted at a vertex to extract the local features analogous to the standard convolution operation on grid data. Experiments on eight graph-structured datasets demonstrate that QS-CNNs architecture is capable of outperforming fourteen state-of-the-art methods for the tasks of node classification and graph classification.", "field": ["Convolutions"], "task": ["Graph Classification", "Node Classification"], "method": ["Convolution"], "dataset": ["PROTEINS"], "metric": ["Accuracy"], "title": "Quantum-based subgraph convolutional neural networks"} {"abstract": "We propose UniPose, a unified framework for human pose estimation, based on our \"Waterfall\" Atrous Spatial Pooling architecture, which achieves state-of-the-art results on several pose estimation metrics. Current pose estimation methods utilizing standard CNN architectures heavily rely on statistical postprocessing or predefined anchor poses for joint localization. UniPose incorporates contextual segmentation and joint localization to estimate the human pose in a single stage, with high accuracy, without relying on statistical postprocessing methods. The Waterfall module in UniPose leverages the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Additionally, our method is extended to UniPose-LSTM for multi-frame processing and achieves state-of-the-art results for temporal pose estimation in Video. 
Our results on multiple datasets demonstrate that UniPose, with a ResNet backbone and Waterfall module, is a robust and efficient architecture for pose estimation obtaining state-of-the-art results in single person pose detection for both single images and videos.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Pose Estimation", "Skeleton Based Action Recognition"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["UPenn Action", "Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5", "Mean PCK@0.2"], "title": "UniPose: Unified Human Pose Estimation in Single Images and Videos"} {"abstract": "We participated in the WMT 2016 shared news translation task by building\nneural translation systems for four language pairs, each trained in both\ndirections: English<->Czech, English<->German, English<->Romanian and\nEnglish<->Russian. Our systems are based on an attentional encoder-decoder,\nusing BPE subword segmentation for open-vocabulary translation with a fixed\nvocabulary. We experimented with using automatic back-translations of the\nmonolingual News corpus as additional training data, pervasive dropout, and\ntarget-bidirectional models. All reported methods give substantial\nimprovements, and we see improvements of 4.3--11.2 BLEU over our baseline\nsystems. In the human evaluation, our systems were the (tied) best constrained\nsystem for 7 out of 8 translation directions in which we participated.", "field": ["Subword Segmentation"], "task": ["Machine Translation"], "method": ["BPE", "Byte Pair Encoding"], "dataset": ["WMT2016 Czech-English", "WMT2016 English-Russian", "WMT2016 English-German", "WMT2016 Russian-English", "WMT2016 English-Romanian", "WMT2016 German-English", "WMT2016 Romanian-English", "WMT2016 English-Czech"], "metric": ["BLEU score"], "title": "Edinburgh Neural Machine Translation Systems for WMT 16"} {"abstract": "Carrying out clinical diagnosis of retinal vascular degeneration using Fluorescein Angiography (FA) is a time consuming process and can pose significant adverse effects on the patient. Angiography requires insertion of a dye that may cause severe adverse effects and can even be fatal. Currently, there are no non-invasive systems capable of generating Fluorescein Angiography images. However, retinal fundus photography is a non-invasive imaging technique that can be completed in a few seconds. In order to eliminate the need for FA, we propose a conditional generative adversarial network (GAN) to translate fundus images to FA images. The proposed GAN consists of a novel residual block capable of generating high quality FA images. These images are important tools in the differential diagnosis of retinal diseases without the need for invasive procedure with possible side effects. Our experiments show that the proposed architecture outperforms other state-of-the-art generative networks. 
Furthermore, our proposed model achieves better qualitative results indistinguishable from real angiograms.", "field": ["Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Fundus to Angiography Generation"], "method": ["Generative Adversarial Network", "GAN", "Batch Normalization", "Convolution", "ReLU", "Residual Connection", "Feedback Alignment", "FA", "Residual Block", "Rectified Linear Units"], "dataset": ["Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients"], "metric": ["Kernel Inception Distance", "FID"], "title": "Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein Angiography Images from Retinal Fundus Photography"} {"abstract": "Image Captioning is an arduous task of producing syntactically and semantically correct textual descriptions of an image in natural language with context related to the image. Existing notable pieces of research in Bengali Image Captioning (BIC) are based on encoder-decoder architecture. This paper presents an end-to-end image captioning system utilizing a multimodal architecture by combining a one-dimensional convolutional neural network (CNN) to encode sequence information with a pre-trained ResNet-50 model image encoder for extracting region-based visual features. We investigate our approach's performance on the BanglaLekhaImageCaptions dataset using the existing evaluation metrics and perform a human evaluation for qualitative analysis. Experiments show that our approach's language encoder captures the fine-grained information in the caption, and combined with the image features, it generates accurate and diversified caption. Our work outperforms all the existing BIC works and achieves a new state-of-the-art (SOTA) performance by scoring 0.651 on BLUE-1, 0.572 on CIDEr, 0.297 on METEOR, 0.434 on ROUGE, and 0.357 on SPICE.", "field": ["Convolutions"], "task": ["Image Captioning"], "method": ["1-Dimensional Convolutional Neural Networks", "1D CNN"], "dataset": ["BanglaLekhaImageCaptions"], "metric": ["BLEU-2", "METEOR", "BLEU-1", "CIDEr", "BLEU-3", "SPICE", "ROUGE-L", "BLEU-4"], "title": "Improved Bengali Image Captioning via deep convolutional neural network based encoder-decoder model"} {"abstract": "The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research. We present an evaluation metric that can separately and reliably measure both of these aspects in image generation tasks by forming explicit, non-parametric representations of the manifolds of real and generated data. We demonstrate the effectiveness of our metric in StyleGAN and BigGAN by providing several illustrative examples where existing metrics yield uninformative or contradictory results. Furthermore, we analyze multiple design variants of StyleGAN to better understand the relationships between the model architecture, training methods, and the properties of the resulting sample distribution. In the process, we identify new variants that improve the state-of-the-art. We also perform the first principled analysis of truncation methods and identify an improved method. 
Finally, we extend our metric to estimate the perceptual quality of individual samples, and use this to study latent space interpolations.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Regularization", "Attention Modules", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Self-Attention GAN", "Adam", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "StyleGAN", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Linear Layer", "Leaky ReLU", "Two Time-scale Update Rule", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Non-Local Block", "Softmax", "BigGAN", "Adaptive Instance Normalization", "R1 Regularization", "Residual Block", "Rectified Linear Units"], "dataset": ["FFHQ"], "metric": ["FID"], "title": "Improved Precision and Recall Metric for Assessing Generative Models"} {"abstract": "Neural networks have shown promising results for relation extraction. State-of-the-art models cast the task as an end-to-end problem, solved incrementally using a local classifier. Yet previous work using statistical models have demonstrated that global optimization can achieve better performances compared to local classification. We build a globally optimized neural model for end-to-end relation extraction, proposing novel LSTM features in order to better learn context representations. In addition, we present a novel method to integrate syntactic information to facilitate global learning, yet requiring little background on syntactic grammars thus being easy to extend. Experimental results show that our proposed model is highly effective, achieving the best performances on two standard benchmarks.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Relation Extraction", "Representation Learning", "Structured Prediction"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["ACE 2005", "CoNLL04"], "metric": ["Sentence Encoder", "NER Micro F1", "RE+ Micro F1"], "title": "End-to-End Neural Relation Extraction with Global Optimization"} {"abstract": "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. 
Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.\r", "field": ["Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connections", "Image Model Blocks"], "task": ["Image Classification", "Retinal OCT Disease Classification"], "method": ["Depthwise Convolution", "Average Pooling", "Inception Module", "Softmax", "Convolution", "Rectified Linear Units", "ReLU", "1x1 Convolution", "Residual Connection", "Xception", "Depthwise Separable Convolution", "Pointwise Convolution", "Global Average Pooling", "Dense Connections", "Max Pooling"], "dataset": ["Srinivasan2014", "OCT2017"], "metric": ["Acc", "Sensitivity"], "title": "Xception: Deep Learning With Depthwise Separable Convolutions"} {"abstract": "We employ both random forests and LSTM networks (more precisely CuDNNLSTM) as training methodologies to analyze their effectiveness in forecasting out-of-sample directional movements of constituent stocks of the S&P 500 from January 1993 till December 2018 for intraday trading. We introduce a multi-feature setting consisting not only of the returns with respect to the closing prices, but also with respect to the opening prices and intraday returns. As trading strategy, we use Krauss et al. (2017) and Fischer & Krauss (2018) as benchmark and, on each trading day, buy the 10 stocks with the highest probability and sell short the 10 stocks with the lowest probability to outperform the market in terms of intraday returns -- all with equal monetary weight. Our empirical results show that the multi-feature setting provides a daily return, prior to transaction costs, of 0.64% using LSTM networks, and 0.54% using random forests. Hence we outperform the single-feature setting in Fischer & Krauss (2018) and Krauss et al. (2017) consisting only of the daily returns with respect to the closing prices, having corresponding daily returns of 0.41% and of 0.39% with respect to LSTM and random forests, respectively.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Stock Market Prediction"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["S&P 500"], "metric": ["Average daily returns"], "title": "Forecasting directional movements of stock prices for intraday trading using LSTM and random forests"} {"abstract": "Human language is often multimodal, which comprehends a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities. In this paper, we introduce the Multimodal Transformer (MulT) to generically address the above issues in an end-to-end manner without explicitly aligning the data. At the heart of our model is the directional pairwise crossmodal attention, which attends to interactions between multimodal sequences across distinct time steps and latently adapt streams from one modality to another. Comprehensive experiments on both aligned and non-aligned multimodal time-series show that our model outperforms state-of-the-art methods by a large margin. 
In addition, empirical analysis suggests that correlated crossmodal signals are able to be captured by the proposed crossmodal attention mechanism in MulT.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Multimodal Sentiment Analysis", "Time Series"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["MOSI"], "metric": ["F1 score", "Accuracy"], "title": "Multimodal Transformer for Unaligned Multimodal Language Sequences"} {"abstract": "Deep neural network training without pre-trained weights and few data is shown to need more training iterations. It is also known that, deeper models are more successful than their shallow counterparts for semantic segmentation task. Thus, we introduce EfficientSeg architecture, a modified and scalable version of U-Net, which can be efficiently trained despite its depth. We evaluated EfficientSeg architecture on Minicity dataset and outperformed U-Net baseline score (40% mIoU) using the same parameter count (51.5% mIoU). Our most successful model obtained 58.1% mIoU score and got the fourth place in semantic segmentation track of ECCV 2020 VIPriors challenge.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections"], "task": ["Semantic Segmentation"], "method": ["U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling"], "dataset": ["Cityscapes VIPriors subset"], "metric": ["mIoU", "Accuracy"], "title": "EfficientSeg: An Efficient Semantic Segmentation Network"} {"abstract": "Current GNN architectures use a vertex neighborhood aggregation scheme, which limits their discriminative power to that of the 1-dimensional Weisfeiler-Lehman (WL) graph isomorphism test. Here, we propose a novel graph convolution operator that is based on the 2-dimensional WL test. We formally show that the resulting 2-WL-GNN architecture is more discriminative than existing GNN approaches. This theoretical result is complemented by experimental studies using synthetic and real data. On multiple common graph classification benchmarks, we demonstrate that the proposed model is competitive with state-of-the-art graph kernels and GNNs.", "field": ["Convolutions"], "task": ["Graph Classification"], "method": ["Convolution"], "dataset": ["IMDb-B", "D&D", "REDDIT-B", "PROTEINS", "NCI1"], "metric": ["Accuracy"], "title": "A Novel Higher-order Weisfeiler-Lehman Graph Convolution"} {"abstract": "Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we improve CS-GAN with natural gradient-based latent optimisation and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet ($128 \\times 128$) dataset. 
Our model achieves an Inception Score (IS) of $148$ and an Fr\\'echet Inception Distance (FID) of $3.4$, an improvement of $17\\%$ and $32\\%$ in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.", "field": ["Normalization", "Optimization", "Attention Mechanisms", "Generative Adversarial Networks", "Discriminators", "Regularization", "Attention Modules", "Activation Functions", "Latent Variable Sampling", "Convolutions", "Image Feature Extractors", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Output Functions", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation", "Image Generation"], "method": ["TTUR", "Truncation Trick", "Off-Diagonal Orthogonal Regularization", "Spectral Normalization", "Natural Gradient Descent", "Adam", "SNGAN", "Self-Attention GAN", "Projection Discriminator", "Early Stopping", "GAN Hinge Loss", "1x1 Convolution", "DCGAN", "CS-GAN", "Euclidean Norm Regularization", "LOGAN", "SAGAN Self-Attention Module", "Convolution", "ReLU", "Residual Connection", "Deep Convolutional GAN", "Linear Layer", "Leaky ReLU", "Two Time-scale Update Rule", "Latent Optimisation", "Dense Connections", "Feedforward Network", "Conditional Batch Normalization", "Non-Local Operation", "Batch Normalization", "Dot-Product Attention", "SAGAN", "Spectrally Normalised GAN", "Non-Local Block", "BigGAN-deep", "Softmax", "Bottleneck Residual Block", "Rectified Linear Units"], "dataset": ["ImageNet 128x128"], "metric": ["Inception score", "FID"], "title": "LOGAN: Latent Optimisation for Generative Adversarial Networks"} {"abstract": "Task-oriented dialogue is often decomposed into three tasks: understanding user input, deciding actions, and generating a response. While such decomposition might suggest a dedicated model for each sub-task, we find a simple, unified approach leads to state-of-the-art performance on the MultiWOZ dataset. SimpleTOD is a simple approach to task-oriented dialogue that uses a single causal language model trained on all sub-tasks recast as a single sequence prediction problem. This allows SimpleTOD to fully leverage transfer learning from pre-trained, open domain, causal language models such as GPT-2. SimpleTOD improves over the prior state-of-the-art by 0.49 points in joint goal accuracy for dialogue state tracking. 
More impressively, SimpleTOD also improves the main metrics used to evaluate action decisions and response generation in an end-to-end setting for task-oriented dialog systems: inform rate by 8.1 points, success rate by 9.7 points, and combined score by 7.2 points.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Attention Modules", "Activation Functions", "Normalization", "Subword Segmentation", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Fine-Tuning", "Skip Connections"], "task": ["Dialogue State Tracking", "End-To-End Dialogue Modelling", "Language Modelling", "Multi-domain Dialogue State Tracking", "Transfer Learning"], "method": ["Weight Decay", "Cosine Annealing", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Discriminative Fine-Tuning", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "GPT-2", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["MULTIWOZ 2.1", "MULTIWOZ 2.0"], "metric": ["MultiWOZ (Inform)", "BLEU", "MultiWOZ (Success)"], "title": "A Simple Language Model for Task-Oriented Dialogue"} {"abstract": "Feature selection is often used before a data mining or a machine learning task in order to build more accurate models. It is considered as a hard optimization problem and metaheuristics give very satisfactory results for such problems. In this work, we propose a hybrid metaheuristic that integrates a reinforcement learning algorithm with Bee Swarm Optimization metaheuristic (BSO) for solving feature selection problem. QBSO-FS follows the wrapper approach. It uses a hybrid version of BSO with Q-learning for generating feature subsets and a classifier to evaluate them. The goal of using Q-learning is to benefit from the advantage of reinforcement learning to make the search process more adaptive and more efficient. The performances of QBSO-FS are evaluated on 20 well-known datasets and the results are compared with those of original BSO and other recently published methods. The results show that QBO-FS outperforms BSO-FS for large instances and gives very satisfactory results compared to recently published algorithms.", "field": ["Off-Policy TD Control"], "task": ["Feature Selection", "Multi-agent Reinforcement Learning", "Q-Learning"], "method": ["Q-Learning"], "dataset": ["Spect", "Zoo", "German", "Sonar", "Diabets", "Glass identification", "Wine", "WDBC", "Breastcancer", "Iris", "Ionosphere_class b", "Lymphography", "Heart-StatLog", "Hepatitis", "Lung-Cancer", "Vowel", "Heart-C", "Vehicule", "Movementlibras", "Congress"], "metric": ["Average Accuracy", "Accuracy(10-fold)"], "title": "QBSO-FS: A Reinforcement Learning Based Bee Swarm Optimization Metaheuristic for Feature Selection"} {"abstract": "A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. 
By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.", "field": ["Meta-Learning Algorithms"], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": ["Model-Agnostic Meta-Learning", "MAML"], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Meta-Learning with Implicit Gradients"} {"abstract": "Self-attention and channel attention, modelling thesemantic interdependencies in spatial and channel dimensionsrespectively, have recently been widely used for semantic seg-mentation. However, computing spatial-attention and channelattention separately and then fusing them directly can causeconflicting feature representations. In this paper, we proposethe Channelized Axial Attention (CAA) to seamlessly integratechannel attention and axial attention with reduced computationalcomplexity. After computing axial attention maps, we propose tochannelize the intermediate results obtained from the transposeddot-product so that the channel importance of each axial repre-sentation is optimized across the whole receptive field. We furtherdevelop grouped vectorization, which allows our model to be runwith very little memory consumption at a speed comparableto the full vectorization. Comparative experiments conductedon multiple benchmark datasets, including Cityscapes, PASCALContext and COCO-Stuff, demonstrate that our CAA not onlyrequires much less computation resources compared with otherdual attention models such as DANet, but also outperformsthe state-of-the-art ResNet-101-based segmentation models on alltested datasets.", "field": ["Image Model Blocks"], "task": ["Semantic Segmentation"], "method": ["Axial Attention", "Axial"], "dataset": ["COCO-Stuff test", "PASCAL Context", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Channelized Axial Attention for Semantic Segmentation"} {"abstract": "This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax plus cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. 
It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning approaches, i.e. learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.", "field": ["Loss Functions", "Output Functions"], "task": ["Face Recognition", "Image Retrieval", "Metric Learning", "Person Re-Identification"], "method": ["Softmax", "Triplet Loss"], "dataset": ["LFW", "MSMT17", "CUB-200-2011", "CARS196", "Market-1501", "Stanford Online Products", "CFP-FP"], "metric": ["mAP", "MAP", "Rank-1", "Accuracy", "R@1"], "title": "Circle Loss: A Unified Perspective of Pair Similarity Optimization"} {"abstract": "In this paper, we propose an Attentional Generative Adversarial Network\n(AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained\ntext-to-image generation. With a novel attentional generative network, the\nAttnGAN can synthesize fine-grained details at different subregions of the\nimage by paying attentions to the relevant words in the natural language\ndescription. In addition, a deep attentional multimodal similarity model is\nproposed to compute a fine-grained image-text matching loss for training the\ngenerator. The proposed AttnGAN significantly outperforms the previous state of\nthe art, boosting the best reported inception score by 14.14% on the CUB\ndataset and 170.25% on the more challenging COCO dataset. A detailed analysis\nis also performed by visualizing the attention layers of the AttnGAN. It for\nthe first time shows that the layered attentional GAN is able to automatically\nselect the condition at the word level for generating different parts of the\nimage.", "field": ["Generative Models", "Convolutions"], "task": ["Image Generation", "Text Matching", "Text-to-Image Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["COCO", "CUB"], "metric": ["Inception score", "SOA-C"], "title": "AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks"} {"abstract": "Recognition of surgical activity is an essential component to develop context-aware decision support for the operating room. In this work, we tackle the recognition of fine-grained activities, modeled as action triplets representing the tool activity. To this end, we introduce a new laparoscopic dataset, CholecT40, consisting of 40 videos from the public dataset Cholec80 in which all frames have been annotated using 128 triplet classes. Furthermore, we present an approach to recognize these triplets directly from the video data. It relies on a module called Class Activation Guide (CAG), which uses the instrument activation maps to guide the verb and target recognition. To model the recognition of multiple triplets in the same frame, we also propose a trainable 3D Interaction Space, which captures the associations between the triplet components. 
Finally, we demonstrate the significance of these contributions via several ablation studies and comparisons to baselines on CholecT40.", "field": ["Convolutions", "Region Proposal", "Output Functions", "3D Representations"], "task": ["Action Localization", "Action Recognition", "Action Recognition In Videos ", "Action Triplet Recognition", "Weakly Supervised Action Localization"], "method": ["Heatmap", "Class activation guide", "CAG", "3DIS", "Convolution", "3-dimensional interaction space"], "dataset": ["CholecT40"], "metric": ["mAP"], "title": "Recognition of Instrument-Tissue Interactions in Endoscopic Videos via Action Triplets"} {"abstract": "Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real-world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams rendering the model incapable of focusing only on relevant complementary information for fusion. To address this limitation, we propose a mutimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling that has a larger effective receptive field with more than 10x fewer parameters, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on several benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance.", "field": ["Pooling Operations"], "task": ["Scene Recognition", "Semantic Segmentation"], "method": ["Spatial Pyramid Pooling"], "dataset": ["ScanNetV2", "ScanNet", "SUN-RGBD", "SYNTHIA-CVPR\u201916", "Cityscapes test", "Freiburg Forest"], "metric": ["Mean IoU", "Mean IoU (class)", "Average Recall"], "title": "Self-Supervised Model Adaptation for Multimodal Semantic Segmentation"} {"abstract": "Efficient identification of people and objects, segmentation of regions of interest and extraction of relevant data in images, texts, audios and videos are evolving considerably in these past years, which deep learning methods, combined with recent improvements in computational resources, contributed greatly for this achievement. Although its outstanding potential, development of efficient architectures and modules requires expert knowledge and amount of resource time available. 
In this paper, we propose an evolutionary-based neural architecture search approach for efficient discovery of convolutional models in a dynamic search space, within only 24 GPU hours. With its efficient search environment and phenotype representation, Gene Expression Programming is adapted for network's cell generation. Despite having limited GPU resource time and broad search space, our proposal achieved similar state-of-the-art to manually-designed convolutional networks and also NAS-generated ones, even beating similar constrained evolutionary-based NAS works. The best cells in different runs achieved stable results, with a mean error of 2.82% in CIFAR-10 dataset (which the best model achieved an error of 2.67%) and 18.83% for CIFAR-100 (best model with 18.16%). For ImageNet in the mobile setting, our best model achieved top-1 and top-5 errors of 29.51% and 10.37%, respectively. Although evolutionary-based NAS works were reported to require a considerable amount of GPU time for architecture search, our approach obtained promising results in little time, encouraging further experiments in evolutionary-based NAS, for search and network representation improvements.", "field": ["Convolutions", "Activation Functions", "Output Functions"], "task": ["Neural Architecture Search"], "method": ["Depthwise Convolution", "Softmax", "ReLU", "Depthwise Separable Convolution", "Pointwise Convolution", "Rectified Linear Units"], "dataset": ["CIFAR-100", "ImageNet", "CIFAR-10"], "metric": ["Percentage Error", "Top-1 Error Rate"], "title": "Optimizing Neural Architecture Search using Limited GPU Time in a Dynamic Search Space: A Gene Expression Programming Approach"} {"abstract": "Few ideas have enjoyed as large an impact on deep learning as convolution.\nFor any problem involving pixels or spatial representations, common intuition\nholds that convolutional neural networks may be appropriate. In this paper we\nshow a striking counterexample to this intuition via the seemingly trivial\ncoordinate transform problem, which simply requires learning a mapping between\ncoordinates in (x,y) Cartesian space and one-hot pixel space. Although\nconvolutional networks would seem appropriate for this task, we show that they\nfail spectacularly. We demonstrate and carefully analyze the failure first on a\ntoy problem, at which point a simple fix becomes obvious. We call this solution\nCoordConv, which works by giving convolution access to its own input\ncoordinates through the use of extra coordinate channels. Without sacrificing\nthe computational and parametric efficiency of ordinary convolution, CoordConv\nallows networks to learn either complete translation invariance or varying\ndegrees of translation dependence, as required by the end task. CoordConv\nsolves the coordinate transform problem with perfect generalization and 150\ntimes faster with 10--100 times fewer parameters than convolution. This stark\ncontrast raises the question: to what extent has this inability of convolution\npersisted insidiously inside other tasks, subtly hampering performance from\nwithin? A complete answer to this question will require further investigation,\nbut we show preliminary evidence that swapping convolution for CoordConv can\nimprove models on a diverse set of tasks. Using CoordConv in a GAN produced\nless mode collapse as the transform between high-level spatial latents and\npixels becomes easier to learn. 
A Faster R-CNN detection model trained on MNIST\nshowed 24% better IOU when using CoordConv, and in the RL domain agents playing\nAtari games benefit significantly from the use of CoordConv layers.", "field": ["Replay Memory", "Convolutional Neural Networks", "Normalization", "Policy Gradient Methods", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Distributed Reinforcement Learning", "Object Detection Models", "Region Proposal", "Stochastic Optimization", "Skip Connection Blocks", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Generative Models", "Skip Connections"], "task": ["Atari Games", "Image Classification"], "method": ["Weight Decay", "Generative Adversarial Network", "A2C", "Average Pooling", "Faster R-CNN", "Adam", "1x1 Convolution", "DCGAN", "Region Proposal Network", "Variational Autoencoder", "ResNet", "Convolution", "RoIPool", "ReLU", "Residual Connection", "Deep Convolutional GAN", "Leaky ReLU", "CoordConv", "RPN", "GAN", "Batch Normalization", "VAE", "Residual Network", "Kaiming Initialization", "Step Decay", "Ape-X", "Softmax", "Prioritized Experience Replay", "Bottleneck Residual Block", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution"} {"abstract": "We introduce a novel 3D object proposal approach named Generative Shape\nProposal Network (GSPN) for instance segmentation in point cloud data. Instead\nof treating object proposal as a direct bounding box regression problem, we\ntake an analysis-by-synthesis strategy and generate proposals by reconstructing\nshapes from noisy observations in a scene. We incorporate GSPN into a novel 3D\ninstance segmentation framework named Region-based PointNet (R-PointNet) which\nallows flexible proposal refinement and instance segmentation generation. We\nachieve state-of-the-art performance on several 3D instance segmentation tasks.\nThe success of GSPN largely comes from its emphasis on geometric understanding\nduring object proposal, which greatly reduces proposals with low objectness.", "field": ["3D Representations"], "task": ["3D Instance Segmentation", "3D Object Detection", "Instance Segmentation", "Regression", "Semantic Segmentation"], "method": ["PointNet"], "dataset": ["ScanNetV2", "ScanNet(v2)"], "metric": ["mAP@0.5", "Mean AP @ 0.5", "mAP@0.25"], "title": "GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud"} {"abstract": "We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. 
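The CoordConv entry above spells out the mechanism concretely: an ordinary convolution is given access to its own input coordinates through extra coordinate channels. Below is a minimal PyTorch-style sketch of that idea, not the authors' released code; the class name `CoordConv2d`, the [-1, 1] coordinate normalization and the example shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Sketch of a CoordConv layer: two extra channels holding normalized
    x/y coordinates are concatenated to the input before a standard conv."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # coordinate grids normalized to [-1, 1], broadcast over the batch
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

# usage: drop-in replacement for nn.Conv2d
layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))  # -> shape (2, 16, 32, 32)
```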
By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.", "field": ["Attention Modules", "Output Functions", "Stochastic Optimization", "Learning Rate Schedules", "Regularization", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Attention Patterns"], "task": ["Multi-Task Learning"], "method": ["Weight Decay", "Cosine Annealing", "Adam", "Scaled Dot-Product Attention", "Gaussian Linear Error Units", "GPT-3", "Residual Connection", "Dense Connections", "Layer Normalization", "GELU", "Byte Pair Encoding", "BPE", "Softmax", "Strided Attention", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Cosine Annealing", "Fixed Factorized Attention", "Dropout"], "dataset": ["Hendrycks Test"], "metric": ["Accuracy (%)"], "title": "Measuring Massive Multitask Language Understanding"} {"abstract": "We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches $74.3\\%$ top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and $79.6\\%$ with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning", "Semi-Supervised Image Classification"], "method": ["ResNet", "Average Pooling", "Residual Block", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet", "ImageNet - 1% labeled data"], "metric": ["Top 5 Accuracy", "Number of Params", "Top 1 Accuracy"], "title": "Bootstrap your own latent: A new approach to self-supervised Learning"} {"abstract": "In this work, we revisit the global average pooling layer proposed in [13],\nand shed light on how it explicitly enables the convolutional neural network to\nhave remarkable localization ability despite being trained on image-level\nlabels. While this technique was previously proposed as a means for\nregularizing training, we find that it actually builds a generic localizable\ndeep representation that can be applied to a variety of tasks. Despite the\napparent simplicity of global average pooling, we are able to achieve 37.1%\ntop-5 error for object localization on ILSVRC 2014, which is remarkably close\nto the 34.2% top-5 error achieved by a fully supervised CNN approach. 
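The BYOL entry above hinges on a target network that is a slow-moving average of the online network. A small hedged sketch of that update in PyTorch follows; the function name `update_target` and the momentum value are illustrative assumptions rather than the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def update_target(online_net: nn.Module, target_net: nn.Module, tau: float = 0.996):
    """Exponential moving average ("slow-moving average") update of the
    target network parameters from the online network parameters."""
    for p_online, p_target in zip(online_net.parameters(), target_net.parameters()):
        p_target.mul_(tau).add_((1.0 - tau) * p_online)

# usage sketch: the target starts as a copy of the online network and is
# updated after every optimizer step taken on the online network
online = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 16))
target = copy.deepcopy(online)
update_target(online, target)
```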
We\ndemonstrate that our network is able to localize the discriminative image\nregions on a variety of tasks despite not being trained for them", "field": ["Pooling Operations"], "task": ["Object Localization", "Weakly-Supervised Object Localization"], "method": ["Global Average Pooling", "Average Pooling"], "dataset": ["ILSVRC 2015", "Tiny ImageNet", "ILSVRC 2016"], "metric": ["Top-5 Error", "Top-1 Localization Accuracy", "Top-1 Error Rate"], "title": "Learning Deep Features for Discriminative Localization"} {"abstract": "In this paper, we investigate the problem of retrieving images from a database based on a multi-modal (image-text) query. Specifically, the query text prompts some modification in the query image and the task is to retrieve images with the desired modifications. For instance, a user of an E-Commerce platform is interested in buying a dress, which should look similar to her friend's dress, but the dress should be of white color with a ribbon sash. In this case, we would like the algorithm to retrieve some dresses with desired modifications in the query dress. We propose an autoencoder based model, ComposeAE, to learn the composition of image and text query for retrieving images. We adopt a deep metric learning approach and learn a metric that pushes composition of source image and text query closer to the target images. We also propose a rotational symmetry constraint on the optimization problem. Our approach is able to outperform the state-of-the-art method TIRG \\cite{TIRG} on three benchmark datasets, namely: MIT-States, Fashion200k and Fashion IQ. In order to ensure fair comparison, we introduce strong baselines by enhancing TIRG method. To ensure reproducibility of the results, we publish our code here: \\url{https://anonymous.4open.science/r/d1babc3c-0e72-448a-8594-b618bae876dc/}.", "field": ["Generative Models"], "task": ["Image Retrieval", "Image Retrieval with Multi-Modal Query", "Metric Learning"], "method": ["AutoEncoder"], "dataset": ["FashionIQ", "MIT-States", "Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@5", "Recall@10"], "title": "Compositional Learning of Image-Text Query for Image Retrieval"} {"abstract": "We introduce Patch Refinement a two-stage model for accurate 3D object detection and localization from point cloud data. Patch Refinement is composed of two independently trained Voxelnet-based networks, a Region Proposal Network (RPN) and a Local Refinement Network (LRN). We decompose the detection task into a preliminary Bird's Eye View (BEV) detection step and a local 3D detection step. Based on the proposed BEV locations by the RPN, we extract small point cloud subsets (\"patches\"), which are then processed by the LRN, which is less limited by memory constraints due to the small area of each patch. Therefore, we can apply encoding with a higher voxel resolution locally. The independence of the LRN enables the use of additional augmentation techniques and allows for an efficient, regression focused training as it uses only a small fraction of each scene. 
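The global-average-pooling entry above obtains localization from image-level labels by weighting the last convolutional feature maps with the classifier weights. One plausible way to compute such a class activation map is sketched below; the function name and tensor shapes are assumptions made for illustration.

```python
import torch

def class_activation_map(feature_maps: torch.Tensor, fc_weights: torch.Tensor, class_idx: int):
    """Weight the last conv feature maps (C, H, W) by the classifier weights
    (num_classes, C) of the linear layer that follows global average pooling,
    giving a coarse localization map for the chosen class."""
    cam = torch.einsum('c,chw->hw', fc_weights[class_idx], feature_maps)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for visualization

cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(1000, 512), class_idx=3)
```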
Evaluated on the KITTI 3D object detection benchmark, our submission from January 28, 2019, outperformed all previous entries on all three difficulties of the class car, using only 50 % of the available training data and only LiDAR information.", "field": ["Region Proposal"], "task": ["3D Object Detection", "Object Detection", "Region Proposal", "Regression"], "method": ["Region Proposal Network", "RPN"], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Easy"], "metric": ["AP"], "title": "Patch Refinement -- Localized 3D Object Detection"} {"abstract": "Temporal action detection is a fundamental yet challenging task in video understanding. Video context is a critical cue to effectively detect actions, but current works mainly focus on temporal context, while neglecting semantic context as well as other important context properties. In this work, we propose a graph convolutional network (GCN) model to adaptively incorporate multi-level semantic context into video features and cast temporal action detection as a sub-graph localization problem. Specifically, we formulate video snippets as graph nodes, snippet-snippet correlations as edges, and actions associated with context as target sub-graphs. With graph convolution as the basic operation, we design a GCN block called GCNeXt, which learns the features of each node by aggregating its context and dynamically updates the edges in the graph. To localize each sub-graph, we also design an SGAlign layer to embed each sub-graph into the Euclidean space. Extensive experiments show that G-TAD is capable of finding effective video context without extra supervision and achieves state-of-the-art performance on two detection benchmarks. On ActivityNet-1.3, it obtains an average mAP of 34.09%; on THUMOS14, it reaches 51.6% at IoU@0.5 when combined with a proposal processing method. G-TAD code is publicly available at https://github.com/frostinassiky/gtad.", "field": ["Convolutions", "Graph Models"], "task": ["Temporal Action Localization"], "method": ["Graph Convolutional Network", "GCN", "Convolution"], "dataset": ["ActivityNet-1.3", "THUMOS\u201914"], "metric": ["mAP IOU@0.95", "mAP", "mAP IOU@0.5", "mAP IOU@0.75"], "title": "G-TAD: Sub-Graph Localization for Temporal Action Detection"} {"abstract": "We present trellis networks, a new architecture for sequence modeling. On the\none hand, a trellis network is a temporal convolutional network with special\nstructure, characterized by weight tying across depth and direct injection of\nthe input into deep layers. On the other hand, we show that truncated recurrent\nnetworks are equivalent to trellis networks with special sparsity structure in\ntheir weight matrices. Thus trellis networks with general weight matrices\ngeneralize truncated recurrent networks. We leverage these connections to\ndesign high-performing trellis networks that absorb structural and algorithmic\nelements from both recurrent and convolutional models. Experiments demonstrate\nthat trellis networks outperform the current state of the art methods on a\nvariety of challenging benchmarks, including word-level language modeling and\ncharacter-level language modeling tasks, and stress tests designed to evaluate\nlong-term memory retention. 
The code is available at\nhttps://github.com/locuslab/trellisnet .", "field": ["Parameter Sharing"], "task": ["Language Modelling", "Sequential Image Classification"], "method": ["Weight Tying"], "dataset": ["Sequential CIFAR-10", "Penn Treebank (Word Level)", "WikiText-103", "Penn Treebank (Character Level)"], "metric": ["Number of params", "Unpermuted Accuracy", "Bit per Character (BPC)", "Test perplexity"], "title": "Trellis Networks for Sequence Modeling"} {"abstract": "With the advent of mobile and hand-held cameras, document images have found their way into almost every domain. Dewarping of these images for the removal of perspective distortions and folds is essential so that they can be understood by document recognition algorithms. For this, we propose an end-to-end CNN architecture that can produce distortion free document images from warped documents it takes as input. We train this model on warped document images simulated synthetically to compensate for lack of enough natural data. Our method is novel in the use of a bifurcated decoder with shared weights to prevent intermingling of grid coordinates, in the use of residual networks in the U-Net skip connections to allow flow of data from different receptive fields in the model, and in the use of a gated network to help the model focus on structure and line level detail of the document image. We evaluate our method on the DocUNet dataset, a benchmark in this domain, and obtain results comparable to state-of-the-art methods.", "field": ["Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Graph Embeddings", "Skip Connections"], "task": ["MS-SSIM", "SSIM"], "method": ["LINE", "U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Large-scale Information Network Embedding", "Rectified Linear Units", "Max Pooling"], "dataset": ["DocUNet"], "metric": ["SSIM", "MS-SSIM"], "title": "RectiNet-v2: A stacked network architecture for document image dewarping"} {"abstract": "Deep convolutional neural networks demonstrate impressive results in the super-resolution domain. A series of studies concentrate on improving peak signal noise ratio (PSNR) by using much deeper layers, which are not friendly to constrained resources. Pursuing a trade-off between the restoration capacity and the simplicity of models is still non-trivial. Recent contributions are struggling to manually maximize this balance, while our work achieves the same goal automatically with neural architecture search. Specifically, we handle super-resolution with a multi-objective approach. We also propose an elastic search tactic at both micro and macro level, based on a hybrid controller that profits from evolutionary computation and reinforcement learning. Quantitative experiments help us to draw a conclusion that our generated models dominate most of the state-of-the-art methods with respect to the individual FLOPS.", "field": ["Convolutions"], "task": ["Neural Architecture Search", "Super-Resolution"], "method": ["Convolution"], "dataset": ["Set14 - 2x upscaling", "Set5 - 2x upscaling", "Urban100 - 2x upscaling", "BSD100 - 2x upscaling"], "metric": ["PSNR"], "title": "Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search"} {"abstract": "Instance-level human parsing towards real-world human analysis scenarios is\nstill under-explored due to the absence of sufficient data resources and\ntechnical difficulty in parsing multiple instances in a single pass. 
Several\nrelated works all follow the \"parsing-by-detection\" pipeline that heavily\nrelies on separately trained detection models to localize instances and then\nperforms human parsing for each instance sequentially. Nonetheless, two\ndiscrepant optimization targets of detection and parsing lead to suboptimal\nrepresentation learning and error accumulation for final results. In this work,\nwe make the first attempt to explore a detection-free Part Grouping Network\n(PGN) for efficiently parsing multiple people in an image in a single pass. Our\nPGN reformulates instance-level human parsing as two twinned sub-tasks that can\nbe jointly learned and mutually refined via a unified network: 1) semantic part\nsegmentation for assigning each pixel as a human part (e.g., face, arms); 2)\ninstance-aware edge detection to group semantic parts into distinct person\ninstances. Thus the shared intermediate representation would be endowed with\ncapabilities in both characterizing fine-grained parts and inferring instance\nbelongings of each part. Finally, a simple instance partition process is\nemployed to get final results during inference. We conducted experiments on\nPASCAL-Person-Part dataset and our PGN outperforms all state-of-the-art\nmethods. Furthermore, we show its superiority on a newly collected multi-person\nparsing dataset (CIHP) including 38,280 diverse images, which is the largest\ndataset so far and can facilitate more advanced human analysis. The CIHP\nbenchmark and our source code are available at http://sysu-hcp.net/lip/.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Edge Detection", "Human Parsing", "Human Part Segmentation", "Representation Learning"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["CIHP"], "metric": ["Mean IoU"], "title": "Instance-level Human Parsing via Part Grouping Network"} {"abstract": "Graph embedding methods represent nodes in a continuous vector space,\npreserving information from the graph (e.g. by sampling random walks). There\nare many hyper-parameters to these methods (such as random walk length) which\nhave to be manually tuned for every graph. In this paper, we replace random\nwalk hyper-parameters with trainable parameters that we automatically learn via\nbackpropagation. In particular, we learn a novel attention model on the power\nseries of the transition matrix, which guides the random walk to optimize an\nupstream objective. Unlike previous approaches to attention models, the method\nthat we propose utilizes attention parameters exclusively on the data (e.g. on\nthe random walk), and not used by the model for inference. We experiment on\nlink prediction tasks, as we aim to produce embeddings that best-preserve the\ngraph structure, generalizing to unseen information. We improve\nstate-of-the-art on a comprehensive suite of real world datasets including\nsocial, collaboration, and biological networks. Adding attention to random\nwalks can reduce the error by 20% to 45% on datasets we attempted. 
Further, our\nlearned attention parameters are different for every graph, and our\nautomatically-found values agree with the optimal choice of hyper-parameter if\nwe manually tune existing methods.", "field": ["Graph Embeddings"], "task": ["Graph Embedding", "Link Prediction", "Node Classification"], "method": ["Watch Your Step", "WYS"], "dataset": ["Cora", "Citeseer"], "metric": ["Accuracy"], "title": "Watch Your Step: Learning Node Embeddings via Graph Attention"} {"abstract": "Neural network architectures with memory and attention mechanisms exhibit\ncertain reasoning capabilities required for question answering. One such\narchitecture, the dynamic memory network (DMN), obtained high accuracy on a\nvariety of language tasks. However, it was not shown whether the architecture\nachieves strong results for question answering when supporting facts are not\nmarked during training or whether it could be applied to other modalities such\nas images. Based on an analysis of the DMN, we propose several improvements to\nits memory and input modules. Together with these changes we introduce a novel\ninput module for images in order to be able to answer visual questions. Our new\nDMN+ model improves the state of the art on both the Visual Question Answering\ndataset and the \\babi-10k text question-answering dataset without supporting\nfact supervision.", "field": ["Recurrent Neural Networks", "Working Memory Models", "Output Functions"], "task": ["Question Answering", "Visual Question Answering"], "method": ["Gated Recurrent Unit", "Softmax", "Memory Network", "GRU", "Dynamic Memory Network"], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "VQA v1 test-std", "VQA v1 test-dev"], "metric": ["Percentage correct", "Accuracy"], "title": "Dynamic Memory Networks for Visual and Textual Question Answering"} {"abstract": "A deep learning approach to reinforcement learning led to a general learner\nable to train on visual input to play a variety of arcade games at the human\nand superhuman levels. Its creators at the Google DeepMind's team called the\napproach: Deep Q-Network (DQN). We present an extension of DQN by \"soft\" and\n\"hard\" attention mechanisms. Tests of the proposed Deep Attention Recurrent\nQ-Network (DARQN) algorithm on multiple Atari 2600 games show level of\nperformance superior to that of DQN. Moreover, built-in attention mechanisms\nallow a direct online monitoring of the training process by highlighting the\nregions of the game screen the agent is focusing on when making decisions.", "field": ["Q-Learning Networks", "Convolutions", "Feedforward Networks", "Off-Policy TD Control"], "task": ["Atari Games", "Deep Attention"], "method": ["Q-Learning", "Convolution", "DQN", "Dense Connections", "Deep Q-Network"], "dataset": ["Atari 2600 Seaquest", "Atari 2600 Breakout", "Atari 2600 Tutankham", "Atari 2600 Space Invaders", "Atari 2600 Gopher"], "metric": ["Score"], "title": "Deep Attention Recurrent Q-Network"} {"abstract": "Graph neural networks (GNN) has been successfully applied to operate on the graph-structured data. Given a specific scenario, rich human expertise and tremendous laborious trials are usually required to identify a suitable GNN architecture. It is because the performance of a GNN architecture is significantly affected by the choice of graph convolution components, such as aggregate function and hidden dimension. 
Neural architecture search (NAS) has shown its potential in discovering effective deep architectures for learning tasks in image and language modeling. However, existing NAS algorithms cannot be directly applied to the GNN search problem. First, the search space of GNN is different from the ones in existing NAS work. Second, the representation learning capacity of GNN architecture changes noticeably with slight architecture modifications. It affects the search efficiency of traditional search methods. Third, widely used techniques in NAS such as parameter sharing might become unstable in GNN. To bridge the gap, we propose the automated graph neural networks (AGNN) framework, which aims to find an optimal GNN architecture within a predefined search space. A reinforcement learning based controller is designed to greedily validate architectures via small steps. AGNN has a novel parameter sharing strategy that enables homogeneous architectures to share parameters, based on a carefully-designed homogeneity definition. Experiments on real-world benchmark datasets demonstrate that the GNN architecture identified by AGNN achieves the best performance, compared with existing handcrafted models and traditional search methods.", "field": ["Recurrent Neural Networks", "Activation Functions", "Convolutions", "Output Functions"], "task": ["Language Modelling", "Neural Architecture Search", "Node Classification", "Representation Learning"], "method": ["Softmax", "Long Short-Term Memory", "Convolution", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Cora", "Pubmed"], "metric": ["Accuracy"], "title": "Auto-GNN: Neural Architecture Search of Graph Neural Networks"} {"abstract": "The prediction of physicochemical properties from molecular structures is a crucial task for artificial intelligence aided molecular design. A growing number of Graph Neural Networks (GNNs) have been proposed to address this challenge. These models improve their expressive power by incorporating auxiliary information in molecules while inevitably increasing their computational complexity. In this work, we aim to design a GNN which is both powerful and efficient for molecular structures. To achieve this goal, we propose a molecular mechanics-driven approach by first representing each molecule as a two-layer multiplex graph, where one layer contains only local connections that mainly capture the covalent interactions and another layer contains global connections that can simulate non-covalent interactions. Then for each layer, a corresponding message passing module is proposed to balance the trade-off of expression power and computational complexity. Based on these two modules, we build Multiplex Molecular Graph Neural Network (MXMNet). 
When validated by the QM9 dataset for small molecules and PDBBind dataset for large protein-ligand complexes, MXMNet achieves superior results to the existing state-of-the-art models under restricted resources.", "field": ["Activation Functions", "Skip Connections", "Graph Models"], "task": ["Drug Discovery", "Formation Energy"], "method": ["MXMNet", "Swish", "Multiplex Molecular Graph Neural Network", "MPNN", "Residual Connection", "Message Passing Neural Network"], "dataset": ["QM9"], "metric": ["Error ratio"], "title": "Molecular Mechanics-Driven Graph Neural Network with Multiplex Graph for Molecular Structures"} {"abstract": "We introduce KBGAN, an adversarial learning framework to improve the\nperformances of a wide range of existing knowledge graph embedding models.\nBecause knowledge graphs typically only contain positive facts, sampling useful\nnegative training examples is a non-trivial task. Replacing the head or tail\nentity of a fact with a uniformly randomly selected entity is a conventional\nmethod for generating negative facts, but the majority of the generated\nnegative facts can be easily discriminated from positive facts, and will\ncontribute little towards the training. Inspired by generative adversarial\nnetworks (GANs), we use one knowledge graph embedding model as a negative\nsample generator to assist the training of our desired model, which acts as the\ndiscriminator in GANs. This framework is independent of the concrete form of\ngenerator and discriminator, and therefore can utilize a wide variety of\nknowledge graph embedding models as its building blocks. In experiments, we\nadversarially train two translation-based models, TransE and TransD, each with\nassistance from one of the two probability-based models, DistMult and ComplEx.\nWe evaluate the performances of KBGAN on the link prediction task, using three\nknowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental\nresults show that adversarial training substantially improves the performances\nof target embedding models under various settings.", "field": ["Graph Embeddings"], "task": ["Graph Embedding", "Knowledge Base Completion", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": ["TransE"], "dataset": ["WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MRR"], "title": "KBGAN: Adversarial Learning for Knowledge Graph Embeddings"} {"abstract": "We present a single network method for panoptic segmentation. This method\ncombines the predictions from a jointly trained semantic and instance\nsegmentation network using heuristics. Joint training is the first step towards\nan end-to-end panoptic segmentation network and is faster and more memory\nefficient than training and predicting with two networks, as done in previous\nwork. The architecture consists of a ResNet-50 feature extractor shared by the\nsemantic segmentation and instance segmentation branch. For instance\nsegmentation, a Mask R-CNN type of architecture is used, while the semantic\nsegmentation branch is augmented with a Pyramid Pooling Module. Results for\nthis method are submitted to the COCO and Mapillary Joint Recognition Challenge\n2018. 
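The KBGAN entry above adversarially trains translation-based models such as TransE, whose scoring function is simple enough to state explicitly. A minimal sketch follows, assuming batched embeddings and an L1 distance; names and shapes are illustrative, not taken from the paper's code.

```python
import torch

def transe_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor, p: int = 1):
    """TransE plausibility score for triples (head, relation, tail): the
    relation acts as a translation, so plausible triples have a small
    distance ||h + r - t||_p (returned negated, higher = more plausible)."""
    return -torch.norm(h + r - t, p=p, dim=-1)

# usage: a batch of 4 triples with 50-dimensional embeddings
h, r, t = (torch.randn(4, 50) for _ in range(3))
scores = transe_score(h, r, t)  # shape (4,)
```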
Our approach achieves a PQ score of 17.6 on the Mapillary Vistas\nvalidation set and 27.2 on the COCO test-dev set.", "field": ["Initialization", "Semantic Segmentation Modules", "Convolutional Neural Networks", "Output Functions", "Activation Functions", "RoI Feature Extractors", "Normalization", "Convolutions", "Pooling Operations", "Instance Segmentation Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Softmax", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "RoIAlign", "Mask R-CNN", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Pyramid Pooling Module"], "dataset": ["Mapillary val", "COCO test-dev"], "metric": ["PQst", "PQ", "PQth"], "title": "Panoptic Segmentation with a Joint Semantic and Instance Segmentation Network"} {"abstract": "Feature pyramids are widely exploited by both the state-of-the-art one-stage\nobject detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object\ndetectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from\nscale variation across object instances. Although these object detectors with\nfeature pyramids achieve encouraging results, they have some limitations due to\nthat they only simply construct the feature pyramid according to the inherent\nmulti-scale, pyramidal architecture of the backbones which are actually\ndesigned for object classification task. Newly, in this work, we present a\nmethod called Multi-Level Feature Pyramid Network (MLFPN) to construct more\neffective feature pyramids for detecting objects of different scales. First, we\nfuse multi-level features (i.e. multiple layers) extracted by backbone as the\nbase feature. Second, we feed the base feature into a block of alternating\njoint Thinned U-shape Modules and Feature Fusion Modules and exploit the\ndecoder layers of each u-shape module as the features for detecting objects.\nFinally, we gather up the decoder layers with equivalent scales (sizes) to\ndevelop a feature pyramid for object detection, in which every feature map\nconsists of the layers (features) from multiple levels. To evaluate the\neffectiveness of the proposed MLFPN, we design and train a powerful end-to-end\none-stage object detector we call M2Det by integrating it into the architecture\nof SSD, which gets better detection performance than state-of-the-art one-stage\ndetectors. Specifically, on MS-COCO benchmark, M2Det achieves AP of 41.0 at\nspeed of 11.8 FPS with single-scale inference strategy and AP of 44.2 with\nmulti-scale inference strategy, which is the new state-of-the-art results among\none-stage detectors. 
The code will be made available on\n\\url{https://github.com/qijiezhao/M2Det.", "field": ["Proposal Filtering", "Convolutional Neural Networks", "Feature Extractors", "Normalization", "Regularization", "Activation Functions", "Convolutions", "Pooling Operations", "Object Detection Models", "Stochastic Optimization", "Loss Functions", "Feedforward Networks", "Skip Connection Blocks", "Initialization", "Output Functions", "Learning Rate Schedules", "RoI Feature Extractors", "Instance Segmentation Models", "Skip Connections"], "task": ["Object Classification", "Object Detection"], "method": ["Weight Decay", "Average Pooling", "FFMv1", "M2Det", "1x1 Convolution", "RoIAlign", "Scale-wise Feature Aggregation Module", "ResNet", "VGG", "Thinned U-shape Module", "SSD", "Convolution", "ReLU", "FFMv2", "FPN", "Feature Fusion Module v1", "MLFPN", "Residual Connection", "Feature Fusion Module v2", "TUM", "Dense Connections", "Focal Loss", "Non Maximum Suppression", "Batch Normalization", "Residual Network", "Kaiming Initialization", "Step Decay", "Sigmoid Activation", "SGD with Momentum", "Softmax", "Feature Pyramid Network", "Bottleneck Residual Block", "Dropout", "SFAM", "Mask R-CNN", "RetinaNet", "Residual Block", "Linear Warmup", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network"} {"abstract": "The goal of object detection is to determine the class and location of objects in an image. This paper proposes a novel anchor-free, two-stage framework which first extracts a number of object proposals by finding potential corner keypoint combinations and then assigns a class label to each proposal by a standalone classification stage. We demonstrate that these two stages are effective solutions for improving recall and precision, respectively, and they can be integrated into an end-to-end network. Our approach, dubbed Corner Proposal Network (CPN), enjoys the ability to detect objects of various scales and also avoids being confused by a large number of false-positive proposals. On the MS-COCO dataset, CPN achieves an AP of 49.2% which is competitive among state-of-the-art object detection methods. CPN also fits the scenario of computational efficiency, which achieves an AP of 41.6%/39.7% at 26.2/43.3 FPS, surpassing most competitors with the same inference speed. Code is available at https://github.com/Duankaiwen/CPNDet", "field": ["Stochastic Optimization"], "task": ["Object Detection"], "method": ["AdaGrad"], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Corner Proposal Network for Anchor-free, Two-stage Object Detection"} {"abstract": "Aspect-Based Sentiment Analysis (ABSA) studies the consumer opinion on the market products. It involves examining the type of sentiments as well as sentiment targets expressed in product reviews. Analyzing the language used in a review is a difficult task that requires a deep understanding of the language. In recent years, deep language models, such as BERT \\cite{devlin2019bert}, have shown great progress in this regard. In this work, we propose two simple modules called Parallel Aggregation and Hierarchical Aggregation to be utilized on top of BERT for two main ABSA tasks namely Aspect Extraction (AE) and Aspect Sentiment Classification (ASC) in order to improve the model's performance. 
We show that applying the proposed models eliminates the need for further training of the BERT model. The source code is available on the Web for further research and reproduction of the results.", "field": ["Attention Modules", "Regularization", "Stochastic Optimization", "Learning Rate Schedules", "Output Functions", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Aspect-Based Sentiment Analysis", "Aspect Extraction", "Sentiment Analysis"], "method": ["Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (F1)", "Laptop (Acc)", "Mean F1 (Laptop + Restaurant)", "Restaurant (Acc)", "Restaurant (F1)", "Mean Acc (Restaurant + Laptop)"], "title": "Improving BERT Performance for Aspect-Based Sentiment Analysis"} {"abstract": "Perceiving meaningful activities in a long video sequence is a challenging\nproblem due to ambiguous definition of 'meaningfulness' as well as clutters in\nthe scene. We approach this problem by learning a generative model for regular\nmotion patterns, termed as regularity, using multiple sources with very limited\nsupervision. Specifically, we propose two methods that are built upon the\nautoencoders for their ability to work with little to no supervision. We first\nleverage the conventional handcrafted spatio-temporal local features and learn\na fully connected autoencoder on them. Second, we build a fully convolutional\nfeed-forward autoencoder to learn both the local features and the classifiers\nas an end-to-end learning framework. Our model can capture the regularities\nfrom multiple datasets. We evaluate our methods in both qualitative and\nquantitative ways - showing the learned regularity of videos in various aspects\nand demonstrating competitive performance on anomaly detection datasets as an\napplication.", "field": ["Generative Models"], "task": ["Abnormal Event Detection In Video", "Anomaly Detection"], "method": ["AutoEncoder"], "dataset": ["A3D", "SA", "UBI-Fights"], "metric": ["AUC"], "title": "Learning Temporal Regularity in Video Sequences"} {"abstract": "Shape and texture are two prominent and complementary cues for recognizing objects. Nonetheless, Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset. Our ablation shows that such bias degenerates model performance. Motivated by this observation, we develop a simple algorithm for shape-texture debiased learning. To prevent models from exclusively attending on a single cue in representation learning, we augment training data with images with conflicting shape and texture information (e.g., an image of chimpanzee shape but with lemon texture) and, most importantly, provide the corresponding supervisions from shape and texture simultaneously. Experiments show that our method successfully improves model performance on several image recognition benchmarks and adversarial robustness. For example, by training on ImageNet, it helps ResNet-152 achieve substantial improvements on ImageNet (+1.2%), ImageNet-A (+5.2%), ImageNet-C (+8.3%) and Stylized-ImageNet (+11.1%), and on defending against FGSM adversarial attacker on ImageNet (+14.4%). 
Our method is also compatible with other advanced data augmentation strategies, e.g., Mixup and CutMix. The code is available here: https://github.com/LiYingwei/ShapeTextureDebiasedTraining.", "field": ["Image Data Augmentation"], "task": ["Data Augmentation", "Image Classification", "Representation Learning"], "method": ["CutMix", "Mixup"], "dataset": ["ImageNet"], "metric": ["Top 1 Accuracy"], "title": "Shape-Texture Debiased Neural Network Training"} {"abstract": "Automatically detecting anomalous regions in images of objects or textures without priors of the anomalies is challenging, especially when the anomalies appear in very small areas of the images, resulting in difficult-to-detect visual variations, such as defects on manufacturing products. This paper proposes an effective unsupervised anomaly segmentation approach that can detect and segment out the anomalies in small and confined regions of images. Concretely, we develop a multi-scale regional feature generator that can generate multiple spatial context-aware representations from pre-trained deep convolutional networks for every subregion of an image. The regional representations not only describe the local characteristics of corresponding regions but also encode their multiple spatial context information, making them discriminative and very beneficial for anomaly detection. Leveraging these descriptive regional features, we then design a deep yet efficient convolutional autoencoder and detect anomalous regions within images via fast feature reconstruction. Our method is simple yet effective and efficient. It advances state-of-the-art performance on several benchmark datasets and shows great potential for real applications.", "field": ["Generative Models"], "task": ["Anomaly Detection"], "method": ["AutoEncoder"], "dataset": ["MVTec AD"], "metric": ["Detection AUROC", "Segmentation AUROC"], "title": "DFR: Deep Feature Reconstruction for Unsupervised Anomaly Segmentation"} {"abstract": "This paper describes our submission to the 1st 3D Face Alignment in the Wild\n(3DFAW) Challenge. Our method builds upon the idea of convolutional part\nheatmap regression [1], extending it for 3D face alignment. Our method\ndecomposes the problem into two parts: (a) X,Y (2D) estimation and (b) Z\n(depth) estimation. At the first stage, our method estimates the X,Y\ncoordinates of the facial landmarks by producing a set of 2D heatmaps, one for\neach landmark, using convolutional part heatmap regression. Then, these\nheatmaps, alongside the input RGB image, are used as input to a very deep\nsubnetwork trained via residual learning for regressing the Z coordinate. Our\nmethod ranked 1st in the 3DFAW Challenge, surpassing the second best result by\nmore than 22%.", "field": ["Output Functions"], "task": ["Depth Estimation", "Face Alignment", "Regression"], "method": ["Heatmap"], "dataset": ["3DFAW"], "metric": ["CVGTCE", "GTE"], "title": "Two-stage Convolutional Part Heatmap Regression for the 1st 3D Face Alignment in the Wild (3DFAW) Challenge"} {"abstract": "We propose a novel semantic segmentation algorithm by learning a\ndeconvolution network. We learn the network on top of the convolutional layers\nadopted from VGG 16-layer net. The deconvolution network is composed of\ndeconvolution and unpooling layers, which identify pixel-wise class labels and\npredict segmentation masks. 
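The shape-texture debiased training entry above notes compatibility with Mixup and CutMix. For reference, a minimal Mixup sketch is given below; it is generic background rather than the paper's own augmentation pipeline, and the `alpha` value and function signature are illustrative assumptions.

```python
import numpy as np
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Mixup: train on convex combinations of pairs of inputs; the loss is
    then combined as lam * loss(y_a) + (1 - lam) * loss(y_b)."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_mix, y_a, y_b, lam = mixup(x, y)
```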
We apply the trained network to each proposal in an\ninput image, and construct the final semantic segmentation map by combining the\nresults from all proposals in a simple manner. The proposed algorithm mitigates\nthe limitations of the existing methods based on fully convolutional networks\nby integrating deep deconvolution network and proposal-wise prediction; our\nsegmentation method typically identifies detailed structures and handles\nobjects in multiple scales naturally. Our network demonstrates outstanding\nperformance in PASCAL VOC 2012 dataset, and we achieve the best accuracy\n(72.5%) among the methods trained with no external data through ensemble with\nthe fully convolutional network.", "field": ["Regularization", "Output Functions", "Convolutional Neural Networks", "Activation Functions", "Convolutions", "Feedforward Networks", "Pooling Operations"], "task": ["Semantic Segmentation"], "method": ["VGG", "Softmax", "Convolution", "ReLU", "Dropout", "Dense Connections", "Rectified Linear Units", "Max Pooling"], "dataset": ["SCUT-CTW1500"], "metric": ["F-Measure"], "title": "Learning Deconvolution Network for Semantic Segmentation"} {"abstract": "In recent years, single image super-resolution (SR) methods based on deep convolutional neural networks (CNNs) have made significant progress. However, due to the non-adaptive nature of the convolution operation, they cannot adapt to various characteristics of images, which limits their representational capability and, consequently, results in unnecessarily large model sizes. To address this issue, we propose a novel multi-path adaptive modulation network (MAMNet). Specifically, we propose a multi-path adaptive modulation block (MAMB), which is a lightweight yet effective residual block that adaptively modulates residual feature responses by fully exploiting their information via three paths. The three paths model three types of information suitable for SR: 1) channel-specific information (CSI) using global variance pooling, 2) inter-channel dependencies (ICD) based on the CSI, 3) and channel-specific spatial dependencies (CSD) via depth-wise convolution. We demonstrate that the proposed MAMB is effective and parameter-efficient for image SR than other feature modulation methods. In addition, experimental results show that our MAMNet outperforms most of the state-of-the-art methods with a relatively small number of parameters.", "field": ["Activation Functions", "Normalization", "Convolutions", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Super-Resolution", "Super-Resolution"], "method": ["Batch Normalization", "Convolution", "ReLU", "Residual Connection", "Residual Block", "Rectified Linear Units"], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "MAMNet: Multi-path Adaptive Modulation Network for Image Super-Resolution"} {"abstract": "Transformers have proved effective in many NLP tasks. However, their training requires non-trivial efforts regarding designing cutting-edge optimizers and learning rate schedulers carefully (e.g., conventional SGD fails to train Transformers effectively). Our objective here is to understand $\\textit{what complicates Transformer training}$ from both empirical and theoretical perspectives. Our analysis reveals that unbalanced gradients are not the root cause of the instability of training. 
Instead, we identify an amplification effect that influences training substantially -- for each layer in a multi-layer Transformer model, heavy dependency on its residual branch makes training unstable, since it amplifies small parameter perturbations (e.g., parameter updates) and results in significant disturbances in the model output. Yet we observe that a light dependency limits the model potential and leads to inferior trained models. Inspired by our analysis, we propose Admin ($\\textbf{Ad}$aptive $\\textbf{m}$odel $\\textbf{in}$itialization) to stabilize the early stage's training and unleash its full potential in the late stage. Extensive experiments show that Admin is more stable, converges faster, and leads to better performance. Implementations are released at: https://github.com/LiyuanLucasLiu/Transforemr-Clinic.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Machine Translation"], "method": ["Stochastic Gradient Descent", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "SGD", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["WMT2014 English-French"], "metric": ["BLEU score"], "title": "Understanding the Difficulty of Training Transformers"} {"abstract": "Community structure is ubiquitous in real-world complex networks. The task of community detection over these networks is of paramount importance in a variety of applications. Recently, nonnegative matrix factorization (NMF) has been widely adopted for community detection due to its great interpretability and its natural fitness for capturing the community membership of nodes. However, the existing NMF-based community detection approaches are shallow methods. They learn the community assignment by mapping the original network to the community membership space directly. Considering the complicated and diversified topology structures of real-world networks, it is highly possible that the mapping between the original network and the community membership space contains rather complex hierarchical information, which cannot be interpreted by classic shallow NMF-based approaches. Inspired by the unique feature representation learning capability of deep autoencoder, we propose a novel model, named Deep Autoencoder-like NMF (DANMF), for community detection. Similar to deep autoencoder, DANMF consists of an encoder component and a decoder component. This architecture empowers DANMF to learn the hierarchical mappings between the original network and the final community assignment with implicit low-to-high level hidden attributes of the original network learnt in the intermediate layers. Thus, DANMF should be better suited to the community detection task. 
Extensive experiments on benchmark datasets demonstrate that DANMF can achieve better performance than the state-of-the-art NMF-based community detection approaches.", "field": ["Image Models"], "task": ["Community Detection", "Local Community Detection", "Network Community Partition", "Node Classification", "Representation Learning"], "method": ["Interpretability"], "dataset": ["Wiki", "Pubmed", "Citeseer"], "metric": ["AUC", "Accuracy"], "title": "Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection"} {"abstract": "Chest x-rays are a vital tool in the workup of many patients. Similar to most medical imaging modalities, they are profoundly multi-modal and are capable of visualising a variety of combinations of conditions. There is an ever pressing need for greater quantities of labelled data to develop new diagnostic tools, however this is in direct opposition to concerns regarding patient confidentiality which constrains access through permission requests and ethics approvals. Previous work has sought to address these concerns by creating class-specific GANs that synthesise images to augment training data. These approaches cannot be scaled as they introduce computational trade offs between model size and class number which places fixed limits on the quality that such generates can achieve. We address this concern by introducing latent class optimisation which enables efficient, multi-modal sampling from a GAN and with which we synthesise a large archive of labelled generates. We apply a PGGAN to the task of unsupervised x-ray synthesis and have radiologists evaluate the clinical realism of the resultant samples. We provide an in depth review of the properties of varying pathologies seen on generates as well as an overview of the extent of disease diversity captured by the model. We validate the application of the Fr\\'echet Inception Distance (FID) to measure the quality of x-ray generates and find that they are similar to other high resolution tasks. We quantify x-ray clinical realism by asking radiologists to distinguish between real and fake scans and find that generates are more likely to be classed as real than by chance, but there is still progress required to achieve true realism. We confirm these findings by evaluating synthetic classification model performance on real scans. 
We conclude by discussing the limitations of PGGAN generates and how to achieve controllable, realistic generates.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Loss Functions", "Latent Variable Sampling", "Convolutions", "Feedforward Networks", "Pooling Operations", "Generative Models", "Skip Connections", "Image Model Blocks"], "task": ["Conditional Image Generation", "Data Augmentation", "Image Generation", "Medical Image Generation"], "method": ["Average Pooling", "1x1 Convolution", "Convolution", "ReLU", "Leaky ReLU", "WGAN-GP Loss", "Latent Optimisation", "Dense Connections", "Dense Block", "ProGAN", "Batch Normalization", "Rectified Linear Units", "Progressively Growing GAN", "Kaiming Initialization", "Concatenated Skip Connection", "Dropout", "DenseNet", "Global Average Pooling", "Local Response Normalization", "Max Pooling"], "dataset": ["ChestXray14 1024x1024"], "metric": ["FID"], "title": "Evaluating the Clinical Realism of Synthetic Chest X-Rays Generated Using Progressively Growing GANs"} {"abstract": "Generative adversarial networks (GANs) have achieved great success at generating realistic images. However, the text generation still remains a challenging task for modern GAN architectures. In this work, we propose RelGAN, a new GAN architecture for text generation, consisting of three main components: a relational memory based generator for the long-distance dependency modeling, the Gumbel-Softmax relaxation for training GANs on discrete data, and multiple embedded representations in the discriminator to provide a more informative signal for the generator updates. Our experiments show that RelGAN outperforms current state-of-the-art models in terms of sample quality and diversity, and we also reveal via ablation studies that each component of RelGAN contributes critically to its performance improvements. Moreover, a key advantage of our method, that distinguishes it from other GANs, is the ability to control the trade-off between sample quality and diversity via the use of a single adjustable parameter. Finally, RelGAN is the first architecture that makes GANs with Gumbel-Softmax relaxation succeed in generating realistic text.", "field": ["Generative Models", "Convolutions"], "task": ["Text Generation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["COCO Captions"], "metric": ["BLEU-3", "BLEU-4", "BLEU-2"], "title": "RelGAN: Relational Generative Adversarial Networks for Text Generation"} {"abstract": "The diagnosis of cardiovascular diseases such as atrial fibrillation (AF) is a lengthy and expensive procedure that often requires visual inspection of ECG signals by experts. In order to improve patient management and reduce healthcare costs, automated detection of these pathologies is of utmost importance. In this study, we classify short segments of ECG into four classes (AF, normal, other rhythms or noise) as part of the Physionet/Computing in Cardiology Challenge 2017. We compare a state-of-the-art feature-based classifier with a convolutional neural network approach. Both methods were trained using the challenge data, supplemented with an additional database derived from Physionet. The feature-based classifier obtained an F1 score of 72.0% on the training set (5-fold cross-validation), and 79% on the hidden test set. Similarly, the convolutional neural network scored 72.1% on the augmented database and 83% on the test set. 
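The RelGAN entry above depends on the Gumbel-Softmax relaxation to pass gradients through discrete token sampling. A minimal sketch of that relaxation follows; the temperature and the numerical-stability constants are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0):
    """Relaxed, differentiable sample from a categorical distribution:
    perturb the logits with Gumbel noise, then apply a temperature softmax."""
    uniform = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)

# low temperatures give near one-hot samples over a vocabulary of size 5000
soft_tokens = gumbel_softmax_sample(torch.randn(2, 5000), tau=0.5)
```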
The latter method resulted in a final score of 79% at the competition. Developed routines and pre-trained models are freely available under a GNU GPLv3 license.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Arrhythmia Detection", "Electrocardiography (ECG)"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["The PhysioNet Computing in Cardiology Challenge 2017"], "metric": ["Accuracy (TEST-DB)", "Accuracy (TRAIN-DB)"], "title": "Comparing feature-based classifiers and convolutional neural networks to detect arrhythmia from short segments of ECG"} {"abstract": "Most tasks in natural language processing can be cast into question answering\n(QA) problems over language input. We introduce the dynamic memory network\n(DMN), a neural network architecture which processes input sequences and\nquestions, forms episodic memories, and generates relevant answers. Questions\ntrigger an iterative attention process which allows the model to condition its\nattention on the inputs and the result of previous iterations. These results\nare then reasoned over in a hierarchical recurrent sequence model to generate\nanswers. The DMN can be trained end-to-end and obtains state-of-the-art results\non several types of tasks and datasets: question answering (Facebook's bAbI\ndataset), text classification for sentiment analysis (Stanford Sentiment\nTreebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The\ntraining for these different tasks relies exclusively on trained word vector\nrepresentations and input-question-answer triplets.", "field": ["Recurrent Neural Networks", "Working Memory Models", "Output Functions"], "task": ["Part-Of-Speech Tagging", "Question Answering", "Sentiment Analysis", "Text Classification"], "method": ["Gated Recurrent Unit", "Softmax", "GRU", "Dynamic Memory Network"], "dataset": ["SST-2 Binary classification"], "metric": ["Accuracy"], "title": "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing"} {"abstract": "With the success of deep learning in classifying short trimmed videos, more attention has been focused on temporally segmenting and classifying activities in long untrimmed videos. State-of-the-art approaches for action segmentation utilize several layers of temporal convolution and temporal pooling. Despite the capabilities of these approaches in capturing temporal dependencies, their predictions suffer from over-segmentation errors. In this paper, we propose a multi-stage architecture for the temporal action segmentation task that overcomes the limitations of the previous approaches. The first stage generates an initial prediction that is refined by the next ones. In each stage we stack several layers of dilated temporal convolutions covering a large receptive field with few parameters. While this architecture already performs well, lower layers still suffer from a small receptive field. To address this limitation, we propose a dual dilated layer that combines both large and small receptive fields. We further decouple the design of the first stage from the refining stages to address the different requirements of these stages. 
Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our models achieve state-of-the-art results on three datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.", "field": ["Convolutions"], "task": ["Action Segmentation"], "method": ["Convolution"], "dataset": ["50 Salads", "Breakfast", "GTEA"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation"} {"abstract": "While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.", "field": ["Image Scaling Strategies", "Output Functions", "Attention Modules", "Stochastic Optimization", "Regularization", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections", "Image Models"], "task": ["Document Image Classification", "Fine-Grained Image Classification", "Image Classification"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "FixRes", "Vision Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["VTAB-1k", "CIFAR-100", "Oxford 102 Flowers", "CIFAR-10", "Oxford-IIIT Pets", "ImageNet ReaL", "Flowers-102", "ImageNet"], "metric": ["Number of params", "Top 1 Accuracy", "Percentage correct", "Top-1 Accuracy", "Params", "Top-1 Error Rate", "PARAMS", "Accuracy"], "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"} {"abstract": "The variational autoencoder (VAE) is a popular combination of deep latent\nvariable model and accompanying variational learning technique. By using a\nneural inference network to approximate the model's posterior on latent\nvariables, VAEs efficiently parameterize a lower bound on marginal data\nlikelihood that can be optimized directly via gradient methods. In practice,\nhowever, VAE training often results in a degenerate local optimum known as\n\"posterior collapse\" where the model learns to ignore the latent variable and\nthe approximate posterior mimics the prior. In this paper, we investigate\nposterior collapse from the perspective of training dynamics. We find that\nduring the initial stages of training the inference network fails to\napproximate the model's true posterior, which is a moving target. As a result,\nthe model is encouraged to ignore the latent encoding and posterior collapse\noccurs. 
Based on this observation, we propose an extremely simple modification\nto VAE training to reduce inference lag: depending on the model's current\nmutual information between latent variable and observation, we aggressively\noptimize the inference network before performing each model update. Despite\nintroducing neither new model components nor significant complexity over basic\nVAE, our approach is able to avoid the problem of collapse that has plagued a\nlarge amount of previous work. Empirically, our approach outperforms strong\nautoregressive baselines on text and image benchmarks in terms of held-out\nlikelihood, and is competitive with more complex techniques for avoiding\ncollapse while being substantially faster.", "field": ["Generative Models"], "task": ["Text Generation"], "method": ["VAE", "Variational Autoencoder", "AutoEncoder"], "dataset": ["Yahoo Questions"], "metric": ["KL", "NLL", "Perplexity"], "title": "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders"} {"abstract": "We present PPFNet - Point Pair Feature NETwork for deeply learning a globally\ninformed 3D local feature descriptor to find correspondences in unorganized\npoint clouds. PPFNet learns local descriptors on pure geometry and is highly\naware of the global context, an important cue in deep learning. Our 3D\nrepresentation is computed as a collection of point-pair-features combined with\nthe points and normals within a local vicinity. Our permutation invariant\nnetwork design is inspired by PointNet and sets PPFNet to be ordering-free. As\nopposed to voxelization, our method is able to consume raw point clouds to\nexploit the full sparsity. PPFNet uses a novel $\\textit{N-tuple}$ loss and\narchitecture injecting the global information naturally into the local\ndescriptor. It shows that context awareness also boosts the local feature\nrepresentation. Qualitative and quantitative evaluations of our network suggest\nincreased recall, improved robustness and invariance as well as a vital step in\nthe 3D descriptor extraction performance.", "field": ["3D Representations"], "task": ["Point Cloud Registration"], "method": ["PointNet"], "dataset": ["3DMatch Benchmark"], "metric": ["Recall"], "title": "PPFNet: Global Context Aware Local Features for Robust 3D Point Matching"} {"abstract": "A collection of approaches based on graph convolutional networks have proven success in skeleton-based action recognition by exploring neighborhood information and dense dependencies between intra-frame joints. However, these approaches usually ignore the spatial-temporal global context as well as the local relation between inter-frame and intra-frame. In this paper, we propose a focusing and diffusion mechanism to enhance graph convolutional networks by paying attention to the kinematic dependence of articulated human pose in a frame and their implicit dependencies over frames. In the focusing process, we introduce an attention module to learn a latent node over the intra-frame joints to convey spatial contextual information. In this way, the sparse connections between joints in a frame can be well captured, while the global context over the entire sequence is further captured by these hidden nodes with a bidirectional LSTM. In the diffusing process, the learned spatial-temporal contextual information is passed back to the spatial joints, leading to a bidirectional attentive graph convolutional network (BAGCN) that can facilitate skeleton-based action recognition. 
Extensive experiments on the challenging NTU RGB+D and Skeleton-Kinetics benchmarks demonstrate the efficacy of our approach.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Focusing and Diffusion: Bidirectional Attentive Graph Convolutional Networks for Skeleton-based Action Recognition"} {"abstract": "Action recognition has seen a dramatic performance improvement in the last few years. Most of the current state-of-the-art literature either aims at improving performance through changes to the backbone CNN network, or they explore different trade-offs between computational efficiency and performance, again through altering the backbone network. However, almost all of these works maintain the same last layers of the network, which simply consist of a global average pooling followed by a fully connected layer. In this work we focus on how to improve the representation capacity of the network, but rather than altering the backbone, we focus on improving the last layers of the network, where changes have low impact in terms of computational cost. In particular, we show that current architectures have poor sensitivity to finer details and we exploit recent advances in the fine-grained recognition literature to improve our model in this aspect. With the proposed approach, we obtain state-of-the-art performance on Kinetics-400 and Something-Something-V1, the two major large-scale action recognition benchmarks.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Action Classification", "Action Recognition"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["Kinetics-400", "Something-Something V1"], "metric": ["Vid acc@1", "Top 1 Accuracy"], "title": "Action recognition with spatial-temporal discriminative filter banks"} {"abstract": "Recent research has shown that modeling the dynamic joint features of the human body by a graph convolutional network (GCN) is a groundbreaking approach for skeleton-based action recognition, especially for the recognition of the body-motion, human-object and human-human interactions. Nevertheless, how to model and utilize coherent skeleton information comprehensively is still an open problem. In order to capture the rich spatiotemporal information and utilize features more effectively, we introduce a spatial residual layer and a dense connection block enhanced spatial temporal graph convolutional network. More specifically, our work introduces three aspects. Firstly, we extend spatial graph convolution to spatial temporal graph convolution of cross-domain residual to extract more precise and informative spatiotemporal feature, and reduce the training complexity by feature fusion in the, so-called, spatial residual layer. Secondly, instead of simply superimposing multiple similar layers, we use dense connection to take full advantage of the global information. 
Thirdly, we combine the above mentioned two components to create a spatial temporal graph convolutional network (ST-GCN), referred to as SDGCN. The proposed graph representation has a new structure. We perform extensive experiments on two large datasets: Kinetics and NTU-RGB+D. Our method achieves a great improvement in performance compared to the mainstream methods. We evaluate our method quantitatively and qualitatively, thus proving its effectiveness.", "field": ["Convolutions"], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": ["Convolution"], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Spatial Residual Layer and Dense Connection Block Enhanced Spatial Temporal Graph Convolutional Network for Skeleton-Based Action Recognition"} {"abstract": "Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and experienced users' time, the low quality of some reports, and discouraging feedback to new users. Therefore, with the overall goal of providing solutions for automating moderation actions in Q&A websites, we aim to provide a model to predict 20 quality or subjective aspects of questions in QA websites. To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and a fine-tuned pre-trained BERT model on our problem. Based on the evaluation by Mean-Squared-Error (MSE), the model achieved a value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. Results confirm that by simple fine-tuning, we can achieve accurate models in little time and on less amount of data.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Community Question Answering", "Question Answering"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["CrowdSource QA"], "metric": ["MSE"], "title": "Predicting Subjective Features of Questions of QA Websites using BERT"} {"abstract": "In recent years there have been many successes of using deep representations\nin reinforcement learning. Still, many of these applications use conventional\narchitectures, such as convolutional networks, LSTMs, or auto-encoders. In this\npaper, we present a new neural network architecture for model-free\nreinforcement learning. Our dueling network represents two separate estimators:\none for the state value function and one for the state-dependent action\nadvantage function. The main benefit of this factoring is to generalize\nlearning across actions without imposing any change to the underlying\nreinforcement learning algorithm. 
Our results show that this architecture leads\nto better policy evaluation in the presence of many similar-valued actions.\nMoreover, the dueling architecture enables our RL agent to outperform the\nstate-of-the-art on the Atari 2600 domain.", "field": ["Q-Learning Networks", "Convolutions", "Feedforward Networks", "Off-Policy TD Control"], "task": ["Atari Games"], "method": ["Dueling Network", "Double Q-learning", "Dense Connections", "Convolution"], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "Dueling Network Architectures for Deep Reinforcement Learning"} {"abstract": "We present a novel Dynamic Differentiable Reasoning (DDR) framework for\njointly learning branching programs and the functions composing them; this\nresolves a significant nondifferentiability inhibiting recent dynamic\narchitectures. We apply our framework to two settings in two highly compact and\ndata efficient architectures: DDRprog for CLEVR Visual Question Answering and\nDDRstack for reverse Polish notation expression evaluation. DDRprog uses a\nrecurrent controller to jointly predict and execute modular neural programs\nthat directly correspond to the underlying question logic; it explicitly forks\nsubprocesses to handle logical branching. By effectively leveraging additional\nstructural supervision, we achieve a large improvement over previous approaches\nin subtask consistency and a small improvement in overall accuracy. We further\ndemonstrate the benefits of structural supervision in the RPN setting: the\ninclusion of a stack assumption in DDRstack allows our approach to generalize\nto long expressions where an LSTM fails the task.", "field": ["Recurrent Neural Networks", "Activation Functions", "Region Proposal"], "task": ["Question Answering", "Visual Question Answering"], "method": ["RPN", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Region Proposal Network", "Sigmoid Activation"], "dataset": ["CLEVR"], "metric": ["Accuracy"], "title": "DDRprog: A CLEVR Differentiable Dynamic Reasoning Programmer"} {"abstract": "Grammatical error correction can be viewed as a low-resource sequence-to-sequence task, because publicly available parallel corpora are limited. 
To tackle this challenge, we first generate erroneous versions of large unannotated corpora using a realistic noising function. The resulting parallel corpora are subsequently used to pre-train Transformer models. Then, by sequentially applying transfer learning, we adapt these models to the domain and style of the test set. Combined with a context-aware neural spellchecker, our system achieves competitive results in both restricted and low resource tracks in ACL 2019 BEA Shared Task. We release all of our code and materials for reproducibility.", "field": ["Regularization", "Output Functions", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections"], "task": ["Grammatical Error Correction", "Transfer Learning"], "method": ["Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Rectified Linear Units", "ReLU", "Residual Connection", "Label Smoothing", "Dropout", "Scaled Dot-Product Attention", "Dense Connections"], "dataset": ["BEA-2019 (test)"], "metric": ["F0.5"], "title": "A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning"} {"abstract": "We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever stale), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling -- achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 Billion steps of experience (the equivalent of 80 years of human experience) -- over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs. This massive-scale training not only sets the state of art on Habitat Autonomous Navigation Challenge 2019, but essentially solves the task --near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of ImageNet pre-training + task-specific fine-tuning for embodied AI. 
Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available).", "field": ["Distributed Reinforcement Learning"], "task": ["Autonomous Navigation", "PointGoal Navigation", "Robot Navigation", "Scene Understanding"], "method": ["Decentralized Distributed Proximal Policy Optimization", "DD-PPO"], "dataset": ["Gibson PointGoal Navigation", "Habitat 2020 Object Nav test-std"], "metric": ["SOFT_SPL", "spl", "SPL", "DISTANCE_TO_GOAL", "SUCCESS"], "title": "DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames"} {"abstract": "Surface meshes are widely used shape representations and capture finer geometry data than point clouds or volumetric grids, but it is challenging to apply CNNs to them directly due to their non-Euclidean structure. We use parallel frames on surfaces to define PFCNNs that enable effective feature learning on surface meshes by mimicking standard convolutions faithfully. In particular, the convolution of PFCNN not only maps local surface patches onto flat tangent planes, but also aligns the tangent planes such that they locally form a flat Euclidean structure, thus enabling recovery of standard convolutions. The alignment is achieved by the tool of locally flat connections borrowed from discrete differential geometry, which can be efficiently encoded and computed by parallel frame fields. In addition, the lack of a canonical axis on surfaces is handled by sampling with the frame directions. Experiments show that for tasks including classification, segmentation and registration on deformable geometric domains, as well as semantic scene segmentation on rigid domains, PFCNNs achieve robust performance superior to state-of-the-art surface-based CNNs without using sophisticated input features.", "field": ["Convolutions"], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": ["Convolution"], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "PFCNN: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames"} {"abstract": "This paper presents a new deep learning architecture called PointGrid that is designed for 3D model recognition from unorganized point clouds. The new architecture embeds the input point cloud into a 3D grid by a simple, yet effective, sampling strategy and directly learns transformations and features from their raw coordinates. The proposed method is an integration of point and grid, a hybrid model, that leverages the simplicity of grid-based approaches such as VoxelNet while avoiding its information loss. PointGrid learns better global information compared with PointNet and is much simpler than PointNet++, Kd-Net, Oct-Net and O-CNN, yet provides comparable recognition accuracy. With experiments on popular shape recognition benchmarks, PointGrid demonstrates competitive performance over existing deep learning methods on both classification and segmentation.", "field": ["3D Representations"], "task": ["3D Part Segmentation", "3D Point Cloud Classification"], "method": ["PointNet"], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Class Average IoU", "Instance Average IoU"], "title": "PointGrid: A Deep Network for 3D Shape Understanding"} {"abstract": "Training a neural network is synonymous with learning the values of the weights. By contrast, we demonstrate that randomly weighted neural networks contain subnetworks which achieve impressive performance without ever training the weight values. 
Hidden in a randomly weighted Wide ResNet-50 we show that there is a subnetwork (with random weights) that is smaller than, but matches the performance of a ResNet-34 trained on ImageNet. Not only do these \"untrained subnetworks\" exist, but we provide an algorithm to effectively find them. We empirically show that as randomly weighted neural networks with fixed weights grow wider and deeper, an \"untrained subnetwork\" approaches a network with learned weights in accuracy. Our code and pretrained models are available at https://github.com/allenai/hidden-networks.", "field": ["Initialization", "Regularization", "Convolutional Neural Networks", "Learning Rate Schedules", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Image Models", "Skip Connection Blocks"], "task": ["Image Classification"], "method": ["Weight Decay", "Cosine Annealing", "Average Pooling", "Adam", "1x1 Convolution", "ResNet", "Convolution", "ReLU", "Residual Connection", "WideResNet", "Wide Residual Block", "Batch Normalization", "Residual Network", "Kaiming Initialization", "SGD with Momentum", "Bottleneck Residual Block", "Dropout", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 1 Accuracy"], "title": "What's Hidden in a Randomly Weighted Neural Network?"} {"abstract": "Beyond local convolution networks, we explore how to harness various external human knowledge for endowing the networks with the capability of semantic global reasoning. Rather than using separate graphical models (e.g. CRF) or constraints for modeling broader dependencies, we propose a new Symbolic Graph Reasoning (SGR) layer, which performs reasoning over a group of symbolic nodes whose outputs explicitly represent different properties of each semantic in a prior knowledge graph. To cooperate with local convolutions, each SGR is constituted by three modules: a) a primal local-to-semantic voting module where the features of all symbolic nodes are generated by voting from local representations; b) a graph reasoning module propagates information over knowledge graph to achieve global semantic coherency; c) a dual semantic-to-local mapping module learns new associations of the evolved symbolic nodes with local representations, and accordingly enhances local features. The SGR layer can be injected between any convolution layers and instantiated with distinct prior graphs. Extensive experiments show incorporating SGR significantly improves plain ConvNets on three semantic segmentation tasks and one image classification task. 
More analyses show that the SGR layer learns shared symbolic representations for domains/datasets with different label sets given a universal knowledge graph, demonstrating its superior generalization capability.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Image Classification", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ADE20K val"], "metric": ["mIoU"], "title": "Symbolic Graph Reasoning Meets Convolutions"} {"abstract": "Humans recognize the visual world at multiple levels: we effortlessly\ncategorize scenes and detect objects inside, while also identifying the\ntextures and surfaces of the objects along with their different compositional\nparts. In this paper, we study a new task called Unified Perceptual Parsing,\nwhich requires the machine vision systems to recognize as many visual concepts\nas possible from a given image. A multi-task framework called UPerNet and a\ntraining strategy are developed to learn from heterogeneous image annotations.\nWe benchmark our framework on Unified Perceptual Parsing and show that it is\nable to effectively segment a wide range of concepts from images. The trained\nnetworks are further applied to discover visual knowledge in natural scenes.\nModels are available at \\url{https://github.com/CSAILVision/unifiedparsing}.", "field": ["Initialization", "Convolutional Neural Networks", "Activation Functions", "Normalization", "Convolutions", "Pooling Operations", "Skip Connections", "Skip Connection Blocks"], "task": ["Scene Understanding", "Semantic Segmentation"], "method": ["ResNet", "Average Pooling", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Residual Network", "Residual Connection", "Bottleneck Residual Block", "Kaiming Initialization", "Residual Block", "Global Average Pooling", "Rectified Linear Units", "Max Pooling"], "dataset": ["ADE20K val"], "metric": ["mIoU"], "title": "Unified Perceptual Parsing for Scene Understanding"} {"abstract": "In this paper, we introduce a new deep convolutional neural network (ConvNet)\nmodule that promotes competition among a set of multi-scale convolutional\nfilters. This new module is inspired by the inception module, where we replace\nthe original collaborative pooling stage (consisting of a concatenation of the\nmulti-scale filter outputs) by a competitive pooling represented by a maxout\nactivation unit. This extension has the following two objectives: 1) the\nselection of the maximum response among the multi-scale filters prevents filter\nco-adaptation and allows the formation of multiple sub-networks within the same\nmodel, which has been shown to facilitate the training of complex learning\nproblems; and 2) the maxout unit reduces the dimensionality of the outputs from\nthe multi-scale filters. 
We show that the use of our proposed module in typical\ndeep ConvNets produces classification results that are either better than or\ncomparable to the state of the art on the following benchmark datasets: MNIST,\nCIFAR-10, CIFAR-100 and SVHN.", "field": ["Activation Functions"], "task": ["Image Classification"], "method": ["Maxout"], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Competitive Multi-scale Convolution"} {"abstract": "Neural network models have been demonstrated to be capable of achieving\nremarkable performance in sentence and document modeling. Convolutional neural\nnetwork (CNN) and recurrent neural network (RNN) are two mainstream\narchitectures for such modeling tasks, which adopt totally different ways of\nunderstanding natural languages. In this work, we combine the strengths of both\narchitectures and propose a novel and unified model called C-LSTM for sentence\nrepresentation and text classification. C-LSTM utilizes a CNN to extract a\nsequence of higher-level phrase representations, which are fed into a long\nshort-term memory recurrent neural network (LSTM) to obtain the sentence\nrepresentation. C-LSTM is able to capture both local features of phrases as\nwell as global and temporal sentence semantics. We evaluate the proposed\narchitecture on sentiment classification and question classification tasks. The\nexperimental results show that the C-LSTM outperforms both CNN and LSTM and can\nachieve excellent performance on these tasks.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Sentiment Analysis", "Text Classification"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["SST-2 Binary classification", "TREC-6", "SST-5 Fine-grained classification"], "metric": ["Error", "Accuracy"], "title": "A C-LSTM Neural Network for Text Classification"} {"abstract": "Recently, Neural Architecture Search (NAS) has successfully identified neural\nnetwork architectures that exceed human-designed ones on large-scale image\nclassification. In this paper, we study NAS for semantic image segmentation.\nExisting works often focus on searching the repeatable cell structure, while\nhand-designing the outer network structure that controls the spatial resolution\nchanges. This choice simplifies the search space, but becomes increasingly\nproblematic for dense image prediction which exhibits a lot more network level\narchitectural variations. Therefore, we propose to search the network level\nstructure in addition to the cell level structure, which forms a hierarchical\narchitecture search space. We present a network level search space that\nincludes many popular designs, and develop a formulation that allows efficient\ngradient-based architecture search (3 P100 GPU days on Cityscapes images). We\ndemonstrate the effectiveness of the proposed method on the challenging\nCityscapes, PASCAL VOC 2012, and ADE20K datasets. 
Auto-DeepLab, our\narchitecture searched specifically for semantic image segmentation, attains\nstate-of-the-art performance without any ImageNet pretraining.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Image Classification", "Neural Architecture Search", "Semantic Segmentation"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["Cityscapes val", "PASCAL VOC 2012 test", "ADE20K", "PASCAL VOC 2012 val", "ADE20K val", "Cityscapes test"], "metric": ["Validation mIoU", "Mean IoU", "Pixel Accuracy", "mIoU", "Mean IoU (class)"], "title": "Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation"} {"abstract": "Prediction tasks over nodes and edges in networks require careful effort in\nengineering features used by learning algorithms. Recent research in the\nbroader field of representation learning has led to significant progress in\nautomating prediction by learning the features themselves. However, present\nfeature learning approaches are not expressive enough to capture the diversity\nof connectivity patterns observed in networks. Here we propose node2vec, an\nalgorithmic framework for learning continuous feature representations for nodes\nin networks. In node2vec, we learn a mapping of nodes to a low-dimensional\nspace of features that maximizes the likelihood of preserving network\nneighborhoods of nodes. We define a flexible notion of a node's network\nneighborhood and design a biased random walk procedure, which efficiently\nexplores diverse neighborhoods. Our algorithm generalizes prior work which is\nbased on rigid notions of network neighborhoods, and we argue that the added\nflexibility in exploring neighborhoods is the key to learning richer\nrepresentations. We demonstrate the efficacy of node2vec over existing\nstate-of-the-art techniques on multi-label classification and link prediction\nin several real-world networks from diverse domains. Taken together, our work\nrepresents a new way for efficiently learning state-of-the-art task-independent\nrepresentations in complex networks.", "field": ["Graph Embeddings"], "task": ["Link Prediction", "Multi-Label Classification", "Node Classification", "Representation Learning"], "method": ["node2vec"], "dataset": ["BlogCatalog", "Wikipedia", "Android Malware Dataset"], "metric": ["Macro-F1", "Accuracy"], "title": "node2vec: Scalable Feature Learning for Networks"} {"abstract": "Robust face detection in the wild is one of the ultimate components to\nsupport various facial related problems, i.e. unconstrained face recognition,\nfacial periocular recognition, facial landmarking and pose estimation, facial\nexpression recognition, 3D facial model construction, etc. Although the face\ndetection problem has been intensely studied for decades with various\ncommercial applications, it still meets problems in some real-world scenarios\ndue to numerous challenges, e.g. heavy facial occlusions, extremely low\nresolutions, strong illumination, exceptionally pose variations, image or video\ncompression artifacts, etc. In this paper, we present a face detection approach\nnamed Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN)\nto robustly solve the problems mentioned above. Similar to the region-based\nCNNs, our proposed network consists of the region proposal component and the\nregion-of-interest (RoI) detection component. 
However, apart from that\nnetwork, there are two main contributions in our proposed network that play a\nsignificant role in achieving state-of-the-art performance in face detection.\nFirstly, the multi-scale information is grouped both in region proposal and RoI\ndetection to deal with tiny face regions. Secondly, our proposed network allows\nexplicit body contextual reasoning in the network inspired by the intuition\nof the human vision system. The proposed approach is benchmarked on two recent\nchallenging face detection databases, i.e. the WIDER FACE Dataset which\ncontains a high degree of variability, as well as the Face Detection Dataset and\nBenchmark (FDDB). The experimental results show that our proposed approach\ntrained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE\nDataset by a large margin, and consistently achieves competitive results on\nFDDB against the recent state-of-the-art face detection methods.", "field": ["Convolutions"], "task": ["Face Detection", "Face Recognition", "Facial Expression Recognition", "Pose Estimation", "Region Proposal", "Robust Face Recognition", "Video Compression"], "method": ["Convolution"], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection"} {"abstract": "The field of Grammatical Error Correction (GEC) has produced various systems to deal with focused phenomena or general text editing. We propose an automatic way to combine black-box systems. Our method automatically detects the strength of a system or the combination of several systems per error type, improving precision and recall while optimizing $F$ score directly. We show consistent improvement over the best standalone system in all the configurations tested. This approach also outperforms average ensembling of different RNN models with random initializations. In addition, we analyze the use of BERT for GEC - reporting promising results on this end. We also present a spellchecker created for this task which outperforms standard spellcheckers tested on the task of spellchecking. This paper describes a system submission to Building Educational Applications 2019 Shared Task: Grammatical Error Correction. Combining the output of top BEA 2019 shared task systems using our approach currently holds the highest reported score in the open phase of the BEA 2019 shared task, improving F0.5 by 3.7 points over the best result reported.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Grammatical Error Correction"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": ["BEA-2019 (test)"], "metric": ["F0.5"], "title": "Learning to combine Grammatical Error Corrections"} {"abstract": "Architecture design has become a crucial component of successful deep learning. Recent progress in automatic neural architecture search (NAS) shows a lot of promise. However, discovered architectures often fail to generalize in the final evaluation. 
Architectures with a higher validation accuracy during the search phase may perform worse in the evaluation. Aiming to alleviate this common issue, we introduce sequential greedy architecture search (SGAS), an efficient method for neural architecture search. By dividing the search procedure into sub-problems, SGAS chooses and prunes candidate operations in a greedy fashion. We apply SGAS to search architectures for Convolutional Neural Networks (CNN) and Graph Convolutional Networks (GCN). Extensive experiments show that SGAS is able to find state-of-the-art architectures for tasks such as image classification, point cloud classification and node classification in protein-protein interaction graphs with minimal computational cost. Please visit https://www.deepgcns.org/auto/sgas for more information about SGAS.", "field": ["Recurrent Neural Networks", "Activation Functions", "Output Functions"], "task": ["Image Classification", "Neural Architecture Search", "Node Classification"], "method": ["Softmax", "Long Short-Term Memory", "Tanh Activation", "LSTM", "Sigmoid Activation"], "dataset": ["ImageNet", "PPI", "CIFAR-10"], "metric": ["Accuracy", "Search Time (GPU days)", "F1", "Top-1 Error Rate"], "title": "SGAS: Sequential Greedy Architecture Search"} {"abstract": "Synthesizing face sketches from real photos and its inverse have many\napplications. However, photo/sketch synthesis remains a challenging problem due\nto the fact that photo and sketch have different characteristics. In this work,\nwe consider this task as an image-to-image translation problem and explore the\nrecently popular generative models (GANs) to generate high-quality realistic\nphotos from sketches and sketches from photos. Recent GAN-based methods have\nshown promising results on image-to-image translation problems and\nphoto-to-sketch synthesis in particular, however, they are known to have\nlimited abilities in generating high-resolution realistic images. To this end,\nwe propose a novel synthesis framework called Photo-Sketch Synthesis using\nMulti-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution\nto high resolution images in an adversarial way. The hidden layers of the\ngenerator are supervised to first generate lower resolution images followed by\nimplicit refinement in the network to generate higher resolution images.\nFurthermore, since photo-sketch synthesis is a coupled/paired translation\nproblem, we leverage the pair information using CycleGAN framework. Both Image\nQuality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to\ndemonstrate the superior performance of our framework in comparison to existing\nstate-of-the-art solutions. 
Code available at:\nhttps://github.com/lidan1/PhotoSketchMAN.", "field": ["Discriminators", "Activation Functions", "Normalization", "Loss Functions", "Convolutions", "Generative Models", "Skip Connections", "Skip Connection Blocks"], "task": ["Face Sketch Synthesis", "Image Quality Assessment", "Image-to-Image Translation"], "method": ["Cycle Consistency Loss", "Instance Normalization", "PatchGAN", "GAN Least Squares Loss", "Batch Normalization", "Tanh Activation", "Convolution", "ReLU", "CycleGAN", "Residual Connection", "Leaky ReLU", "Residual Block", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["CUHK"], "metric": ["SSIM", "FSIM"], "title": "High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks"} {"abstract": "3D object detection has recently become popular due to many applications in robotics, augmented reality, autonomy, and image retrieval. We introduce the Objectron dataset to advance the state of the art in 3D object detection and foster new research and applications, such as 3D object tracking, view synthesis, and improved 3D shape representation. The dataset contains object-centric short videos with pose annotations for nine categories and includes 4 million annotated images in 14,819 annotated videos. We also propose a new evaluation metric, 3D Intersection over Union, for 3D object detection. We demonstrate the usefulness of our dataset in 3D object detection tasks by providing baseline models trained on this dataset. Our dataset and evaluation source code are available online at http://www.objectron.dev", "field": ["Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Image Model Blocks", "Image Models", "Skip Connection Blocks"], "task": ["3D Object Detection", "3D Object Tracking", "3D Shape Representation", "Image Retrieval", "Monocular 3D Object Detection", "Object Detection", "Object Tracking"], "method": ["Depthwise Convolution", "Squeeze-and-Excitation Block", "Average Pooling", "Swish", "RMSProp", "Inverted Residual Block", "EfficientNet", "Batch Normalization", "Convolution", "1x1 Convolution", "ReLU", "Dropout", "Depthwise Separable Convolution", "Pointwise Convolution", "Dense Connections", "Rectified Linear Units", "Sigmoid Activation"], "dataset": ["Google Objectron"], "metric": ["Average Precision at 0.5 3D IoU", "MPE", "AP at 10' Elevation error", "AP at 15' Azimuth error"], "title": "Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations"} {"abstract": "Structured belief states are crucial for user goal tracking and database query in task-oriented dialog systems. However, training belief trackers often requires expensive turn-level annotations of every user utterance. In this paper we aim at alleviating the reliance on belief state labels in building end-to-end dialog systems, by leveraging unlabeled dialog data towards semi-supervised learning. We propose a probabilistic dialog model, called the LAtent BElief State (LABES) model, where belief states are represented as discrete latent variables and jointly modeled with system responses given user inputs. Such latent variable modeling enables us to develop semi-supervised learning under the principled variational learning framework. Furthermore, we introduce LABES-S2S, which is a copy-augmented Seq2Seq model instantiation of LABES. In supervised experiments, LABES-S2S obtains strong results on three benchmark datasets of different scales. 
In utilizing unlabeled dialog data, semi-supervised LABES-S2S significantly outperforms both supervised-only and semi-supervised baselines. Remarkably, we can reduce the annotation demands to 50% without performance loss on MultiWOZ.", "field": ["Recurrent Neural Networks", "Activation Functions", "Sequence To Sequence Models"], "task": ["End-To-End Dialogue Modelling"], "method": ["Long Short-Term Memory", "Tanh Activation", "Sequence to Sequence", "LSTM", "Seq2Seq", "Sigmoid Activation"], "dataset": ["MULTIWOZ 2.1"], "metric": ["MultiWOZ (Inform)", "BLEU", "MultiWOZ (Success)"], "title": "A Probabilistic End-To-End Task-Oriented Dialog Model with Latent Belief States towards Semi-Supervised Learning"} {"abstract": "We study the problem of transferring a sample in one domain to an analog\nsample in another domain. Given two related domains, S and T, we would like to\nlearn a generative function G that maps an input sample from S to the domain T,\nsuch that the output of a given function f, which accepts inputs in either\ndomains, would remain unchanged. Other than the function f, the training data\nis unsupervised and consist of a set of samples from each domain. The Domain\nTransfer Network (DTN) we present employs a compound loss function that\nincludes a multiclass GAN loss, an f-constancy component, and a regularizing\ncomponent that encourages G to map samples from T to themselves. We apply our\nmethod to visual domains including digits and face images and demonstrate its\nability to generate convincing novel images of previously unseen entities,\nwhile preserving their identity.", "field": ["Generative Models", "Convolutions"], "task": ["Domain Adaptation", "Image Generation", "Image-to-Image Translation", "Unsupervised Image-To-Image Translation"], "method": ["Generative Adversarial Network", "GAN", "Convolution"], "dataset": ["SVNH-to-MNIST"], "metric": ["Classification Accuracy"], "title": "Unsupervised Cross-Domain Image Generation"} {"abstract": "Time series forecasting is one of the challenging problems for humankind.\nTraditional forecasting methods using mean regression models have severe\nshortcomings in reflecting real-world fluctuations. While new probabilistic\nmethods rush to rescue, they fight with technical difficulties like quantile\ncrossing or selecting a prior distribution. To meld the different strengths of\nthese fields while avoiding their weaknesses as well as to push the boundary of\nthe state-of-the-art, we introduce ForGAN - one step ahead probabilistic\nforecasting with generative adversarial networks. ForGAN utilizes the power of\nthe conditional generative adversarial network to learn the data generating\ndistribution and compute probabilistic forecasts from it. We argue how to\nevaluate ForGAN in opposition to regression methods. To investigate\nprobabilistic forecasting of ForGAN, we create a new dataset and demonstrate\nour method abilities on it. This dataset will be made publicly available for\ncomparison. 
Furthermore, we test ForGAN on two publicly available datasets,\nnamely Mackey-Glass dataset and Internet traffic dataset (A5M) where the\nimpressive performance of ForGAN demonstrate its high capability in forecasting\nfuture values.", "field": ["Generative Models"], "task": ["Probabilistic Time Series Forecasting", "Regression", "Time Series", "Time Series Forecasting", "Univariate Time Series Forecasting"], "method": ["Generative Adversarial Network", "GAN"], "dataset": ["Internet Traffic dataset (A5M)", "Mackey-Glass dataset", "Lorenz dataset"], "metric": ["CRPS", "KLD"], "title": "Probabilistic Forecasting of Sensory Data with Generative Adversarial Networks - ForGAN"} {"abstract": "We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages scene graph structures to create 22M diverse reasoning questions, all come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. An extensive analysis is performed for baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We strongly hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding for images and language.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Question Answering", "Visual Question Answering", "Visual Reasoning"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["GQA test-std"], "metric": ["Accuracy"], "title": "GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering"} {"abstract": "We show that the YOLOv4 object detection neural network based on the CSP approach, scales both up and down and is applicable to small and large networks while maintaining optimal speed and accuracy. We propose a network scaling approach that modifies not only the depth, width, resolution, but also structure of the network. YOLOv4-large model achieves state-of-the-art results: 55.5% AP (73.4% AP50) for the MS COCO dataset at a speed of ~16 FPS on Tesla V100, while with the test time augmentation, YOLOv4-large achieves 56.0% AP (73.3 AP50). To the best of our knowledge, this is currently the highest accuracy on the COCO dataset among any published work. 
The YOLOv4-tiny model achieves 22.0% AP (42.0% AP50) at a speed of 443 FPS on RTX 2080Ti, while by using TensorRT, batch size = 4 and FP16-precision the YOLOv4-tiny achieves 1774 FPS.", "field": ["Image Model Blocks", "Image Data Augmentation", "Generalized Linear Models", "Output Functions", "Convolutional Neural Networks", "Feature Extractors", "Regularization", "Activation Functions", "Learning Rate Schedules", "Normalization", "Convolutions", "Clustering", "Pooling Operations", "Skip Connections", "Object Detection Models"], "task": ["Object Detection", "Real-Time Object Detection"], "method": ["Cosine Annealing", "Average Pooling", "Tanh Activation", "1x1 Convolution", "Bottom-up Path Augmentation", "Softplus", "PAFPN", "Mish", "Convolution", "CutMix", "ReLU", "Residual Connection", "FPN", "YOLOv3", "Spatial Attention Module", "Batch Normalization", "Label Smoothing", "Sigmoid Activation", "Logistic Regression", "k-Means Clustering", "DropBlock", "CSPDarknet53", "Softmax", "Feature Pyramid Network", "YOLOv4", "Darknet-53", "Global Average Pooling", "Rectified Linear Units", "Max Pooling", "Spatial Pyramid Pooling"], "dataset": ["COCO", "BDD100k", "COCO test-dev"], "metric": ["APM", "FPS", "MAP", "box AP", "AP75", "APS", "APL", "mAP@0.5", "AP50"], "title": "Scaled-YOLOv4: Scaling Cross Stage Partial Network"} {"abstract": "Detecting test samples drawn sufficiently far away from the training\ndistribution statistically or adversarially is a fundamental requirement for\ndeploying a good classifier in many real-world machine learning applications.\nHowever, deep neural networks with the softmax classifier are known to produce\nhighly overconfident posterior distributions even for such abnormal samples. In\nthis paper, we propose a simple yet effective method for detecting any abnormal\nsamples, which is applicable to any pre-trained softmax neural classifier. We\nobtain the class conditional Gaussian distributions with respect to (low- and\nupper-level) features of the deep models under Gaussian discriminant analysis,\nwhich result in a confidence score based on the Mahalanobis distance. While\nmost prior methods have been evaluated for detecting either out-of-distribution\nor adversarial samples, but not both, the proposed method achieves the\nstate-of-the-art performances for both cases in our experiments. Moreover, we\nfound that our proposed method is more robust in harsh cases, e.g., when the\ntraining dataset has noisy labels or small number of samples. Finally, we show\nthat the proposed method enjoys broader usage by applying it to\nclass-incremental learning: whenever out-of-distribution samples are detected,\nour classification rule can incorporate new classes well without further\ntraining deep models.", "field": ["Output Functions"], "task": ["class-incremental learning", "Incremental Learning", "Out-of-Distribution Detection"], "method": ["Softmax"], "dataset": ["MS-1M vs. IJB-C"], "metric": ["AUROC"], "title": "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks"} {"abstract": "This paper aims at revisiting Graph Convolutional Neural Networks by bridging the gap between spectral and spatial design of graph convolutions. We theoretically demonstrate some equivalence of the graph convolution process regardless it is designed in the spatial or the spectral domain. The obtained general framework allows to lead a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. 
Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, which allows decreasing the total number of trainable parameters while keeping the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. Obtained results show the relevance of the proposed method and provide some of the first experimental evidence of the transferability of spectral filter coefficients from one graph to another. Our source codes are publicly available at: https://github.com/balcilar/Spectral-Designed-Graph-Convolutions", "field": ["Convolutions"], "task": ["Graph Classification", "Graph Learning", "Node Classification"], "method": ["Depthwise Convolution", "Pointwise Convolution", "Convolution", "Depthwise Separable Convolution"], "dataset": ["ENZYMES", "PPI", "Cora: fixed 20 node per class", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["F1", "Accuracy"], "title": "Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks"} {"abstract": "The Self-Organizing Map (SOM) is a brain-inspired neural model that is very promising for unsupervised learning, especially in embedded applications. However, it is unable to learn efficient prototypes when dealing with complex datasets. We propose in this work to improve the SOM performance by using extracted features instead of raw data. We conduct a comparative study on the SOM classification accuracy with unsupervised feature extraction using two different approaches: a machine learning approach with Sparse Convolutional Auto-Encoders using gradient-based learning, and a neuroscience approach with Spiking Neural Networks using Spike Timing Dependent Plasticity learning. The SOM is trained on the extracted features, then very few labeled samples are used to label the neurons with their corresponding class. We investigate the impact of the feature maps, the SOM size and the labeled subset size on the classification accuracy using the different feature extraction methods. We improve the SOM classification by +6.09\\% and reach state-of-the-art performance on unsupervised image classification.", "field": ["Clustering"], "task": ["Image Classification", "Unsupervised Image Classification", "Unsupervised MNIST"], "method": ["Self-Organizing Map", "SOM"], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "Improving Self-Organizing Maps with Unsupervised Feature Extraction"} {"abstract": "Machine comprehension (MC) style question answering is a representative\nproblem in natural language processing. Previous methods rarely spend time on\nthe improvement of the encoding layer, especially the embedding of syntactic\ninformation and named entities of the words, which are very crucial to the quality\nof encoding. Moreover, existing attention methods represent each query word as\na vector or use a single vector to represent the whole query sentence; neither\nof them can handle the proper weight of the key words in the query sentence. In\nthis paper, we introduce a novel neural network architecture called Multi-layer\nEmbedding with Memory Network (MEMEN) for the machine reading task. 
In the encoding\nlayer, we apply the classic skip-gram model to the syntactic and semantic\ninformation of the words to train a new kind of embedding layer. We also\npropose a memory network of full-orientation matching of the query and passage\nto capture more pivotal information. Experiments show that our model achieves\ncompetitive results in terms of both precision and efficiency on the\nStanford Question Answering Dataset (SQuAD) among all published results and\nachieves state-of-the-art results on the TriviaQA dataset.", "field": ["Working Memory Models"], "task": ["Question Answering", "Reading Comprehension"], "method": ["Memory Network"], "dataset": ["SQuAD1.1", "TriviaQA"], "metric": ["EM", "F1"], "title": "MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension"} {"abstract": "In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers. Bidirectional LSTM (BiLSTM) and graph convolutional network (GCN) are adopted to jointly learn flat entities and their inner dependencies. Different from previous models, which only consider the unidirectional delivery of information from innermost layers to outer ones (or outside-to-inside), our model effectively captures the bidirectional interaction between them. We first use the entities recognized by the flat NER module to construct an entity graph, which is fed to the next graph module. The richer representation learned from the graph module carries the dependencies of inner entities and can be exploited to improve outermost entity predictions. Experimental results on three standard nested NER datasets demonstrate that our BiFlaG outperforms previous state-of-the-art models.", "field": ["Recurrent Neural Networks", "Activation Functions"], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": ["Tanh Activation", "Long Short-Term Memory", "LSTM", "Sigmoid Activation"], "dataset": ["GENIA", "ACE 2005"], "metric": ["F1"], "title": "Bipartite Flat-Graph Network for Nested Named Entity Recognition"}